9 min read
Shai Ber

Why should we invest in developing a new programming language (for humans) today, when AI is rapidly advancing and taking over more coding tasks?

I often encounter this question in various forms:

  1. Won't AI eventually write machine code directly, rendering programming languages obsolete?
  2. Can a new language introduce features or capabilities that AI cannot achieve using existing languages? (e.g., why create a cloud-portable language when AI can write code for a specific cloud and then rewrite it for another?).
  3. Is it worthwhile to create tools for developers who might soon be replaced by AI?

Firstly, I must admit that I cannot predict the pace of AI advancement. Reputable experts hold differing opinions on when, or if, AI will replace human developers.

However, even if AI does eventually replace human developers, it may not necessarily write machine code directly. There's no need to burden AI with larger, more complex tasks when smaller, simpler ones can yield faster, higher-quality results. Thus, it could be more practical for AI to rely on proven abstraction layers and compilers, allowing it to efficiently focus on the unique aspects of the business it serves rather than reinventing the wheel for each app.

Having covered the more distant future, I now want to focus on the more immediate future in the remainder of this post.

I believe that, given human limitations and psychology, change will likely be gradual despite AI's rapid progress, leading to a significant transitional period with humans remaining in the loop. For instance, it's hard to imagine organizations not desiring a human to be accountable for the AI's output. In cases where things go awry and the AI cannot automatically resolve the issue, that human will probably want the ability to dive into the code.

Additionally, while it is true that AI is an equalizer between tools to some degree, it still doesn't completely solve the problem. Let's take the cloud portability example from above: even if the AI can port my code between clouds, I still want to be able to read and modify it. As a result, I must become an expert in all these clouds at the level of abstraction the AI used. If a new language allows it to write at a higher level of abstraction, it will be easier for me to understand and modify it too.

Therefore, I believe that for the foreseeable future there is room for tools that make it easier for both humans and AI to write quality code swiftly, collaborate effectively, and test more rapidly. Such tools will allow us to enhance the quality and speed of our application delivery.

The Key: Reducing Cognitive Load and Accelerating Iteration

Whether you're an AI or a human developer, reducing cognitive load and iterating faster will result in better applications developed more quickly.

So, what can be done to make these improvements?

Working at a Higher Level of Abstraction

Utilizing a higher level of abstraction offers the following benefits for both human and AI coders:

  1. Reduces cognitive load for human developers by focusing on the app's business logic instead of implementation details. This enables developers to concentrate on a smaller problem (e.g., instructing a car to turn right, rather than teaching it how to do so), deal with fewer levels of the stack, write less code, and minimize the surface area for errors.
  2. Reduces cognitive load for AI. This concept may need further clarification. AI systems come pre-trained with knowledge of all levels of the stack, so knowing less is not a significant advantage. Focusing on a smaller problem is also not a substantial benefit because, as long as the AI knows how to instruct the car to turn, it shouldn't have an issue teaching it how to do so instead of just telling it to turn. However, allowing the AI to write less code and reducing the chance for it to make mistakes is highly beneficial, as AI is far from infallible. Anyone who has witnessed it hallucinate interfaces or generate disconnected code can attest to this. Furthermore, AI is constrained by the amount of code it can generate before losing context. So writing less code enables AI coders to create larger and more complex parts of applications.
  3. Accelerates iteration speed because it requires writing less code, reducing the time it takes to write and maintain it. While it might not seem intuitive, this is equally important for both human and AI coders, as AI generates code one token at a time, similar to how a human writes.
  4. Improves collaboration between human and AI coders. A smaller code base written at a higher level of abstraction allows human developers to understand, modify and maintain AI-generated code more quickly and easily, resulting in higher quality code that is developed faster.

Faster Deployment and Testing

Currently, deploying and testing cloud applications can take several minutes. Multiply this by numerous iteration cycles, and there's significant room for improvement.

Running tests locally is also challenging, as it requires mocking the cloud environment around the tested component.

Moreover, it's impossible to use the same tests locally and in the cloud.

By writing tests that can run both locally and in the cloud, and executing them quickly, we can vastly improve iteration speeds, regardless of whether the code is written by an AI, a human, or a collaboration between them.

So, how can we make this happen?

Introducing Winglang

Winglang is a new programming language for cloud development that enables both human and AI developers to write cloud code at a higher level of abstraction, and comes with a local simulator that lets them test it super quickly.

Quantifying the Improvement

We're talking about a 90%-95% reduction in code and a 100X increase in testing speeds.

Let's See Some Code

Here's an example of a small app that uploads a file to a bucket using a cloud function.

This is the code in Wing:

bring cloud;

let bucket = new cloud.Bucket();

new cloud.Function(inflight () => {
  bucket.put("hello.txt", "world!");
});

As you can see, either a human or an AI coder that writes Wing code is working at a high level of abstraction, letting the Wing compiler take care of the underlying cloud mechanics, such as IAM policies and networking (don't worry, it is customizable and extensible, so you don't lose control when needed).

Unlike human and AI coders, the compiler cannot make mistakes. It is also faster, deterministic, and doesn't lose context after a while. So the more work we delegate to it rather than to human or even AI coders, the better.

By the way, the code can be compiled to any cloud provider, and its output is Terraform and JavaScript, which can be deployed with existing tools.

Now let's take a look at the same code in the leading cloud development stack today - Terraform + JavaScript.

main.tf:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "us-west-2"
}

locals {
  lambda_function_name = "upload_hello_txt_lambda"
}

resource "aws_s3_bucket" "this" {
  bucket = "my-s3-bucket"
  acl    = "private"
}

data "archive_file" "lambda_zip" {
  type        = "zip"
  source_file = "index.js"
  output_path = "${path.module}/lambda.zip"
}

resource "aws_lambda_function" "this" {
  function_name = local.lambda_function_name
  role          = aws_iam_role.lambda_role.arn
  handler       = "index.handler"
  runtime       = "nodejs14.x"
  filename      = data.archive_file.lambda_zip.output_path
  timeout       = 10

  environment {
    variables = {
      BUCKET_NAME = aws_s3_bucket.this.bucket
    }
  }
}

resource "aws_iam_role" "lambda_role" {
  name = "lambda_role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy" "lambda_policy" {
  name = "lambda_policy"
  role = aws_iam_role.lambda_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ]
        Effect   = "Allow"
        Resource = "arn:aws:logs:*:*:*"
      },
      {
        Action = [
          "s3:PutObject"
        ]
        Effect   = "Allow"
        Resource = "${aws_s3_bucket.this.arn}/*"
      }
    ]
  })
}

output "bucket_name" {
  value = aws_s3_bucket.this.bucket
}

output "lambda_function_name" {
  value = aws_lambda_function.this.function_name
}

index.js:

const AWS = require('aws-sdk');
const S3 = new AWS.S3();

exports.handler = async (event) => {
  const bucketName = process.env.BUCKET_NAME;
  const key = 'hello.txt';
  const content = 'Hello world!';

  const params = {
    Bucket: bucketName,
    Key: key,
    Body: content,
  };

  try {
    await S3.putObject(params).promise();
    return {
      statusCode: 200,
      body: JSON.stringify('File uploaded successfully.'),
    };
  } catch (error) {
    console.error(error);
    return {
      statusCode: 500,
      body: JSON.stringify('Error uploading the file.'),
    };
  }
};

As you can see, we have to write 17X more code and dive deeply into lower layers of the cloud stack.

You might be wondering whether there are newer solutions against which Wing's gains are less significant, or whether the same results could be achieved with a library or a language extension. You can see how Wing compares to other solutions, and why it's a new language rather than another kind of solution, here.

Testing with Wing

Wing comes out of the box with a local simulator and a visualization and debugging console.

These tools enable developers to work on their code with near-instant hot-reloading and test cloud applications very easily without having to mock the cloud around them.

This is a short video of the experience.

You can play with it yourself with zero friction in the Wing Playground.

Conclusion

Although Wing introduces significant improvements in cloud development, we understand that migrating to a new language is a substantial undertaking that may be hard to justify in many cases.

We've gone to great lengths to make adopting the language as easy as possible with the following features:

  • Easy to learn because it is similar to other languages.
  • Works seamlessly with your existing stack and tools (especially deployment and management).
  • Mature ecosystem - import any NPM module or Terraform resource into your code.
  • Integrates into existing code bases - write runtime code in other languages and reference it with Wing.

Furthermore, we believe that in the era of AI, adopting a new language like Winglang is easier for humans as AI assists in writing code in unfamiliar languages and frameworks and simplifies the migration of existing code to new languages.

As we move toward a future where AI plays a more significant role in code development, the creation and adoption of languages like Winglang will ensure better collaboration, faster development, and higher-quality applications for both human and AI developers.

To get a glimpse of the future and experience writing code in Wing and testing it instantly, you can visit our playground.

13 min read
Chris Rybicki

Hey everyone!

We're delighted to share with you the third ever issue of the Wing Inflight Magazine (here's a link to our last issue if you missed it).

The Inflight Magazine is where you can stay up to date with Wing Programming Language developments, community events, and all things Wing.

Have comments or questions about Wing? Hit us up at @winglangio on Twitter or leave a message in our Slack!

This week's cloud computing joke...

Q: Why do serverless developers make terrible comedians?

A: Because their jokes always end up with a cold start!

9 min read
Elad Ben-Israel
Revital Barletz

Hi there!

We're excited to share with you the second issue of the Wing Inflight Magazine (here's a link to our first issue if you missed it).

The Inflight Magazine is where you can stay up to date with Wing Programming Language developments and the awesome community that is forming around it.

You received this email because we have you on our list as someone who might be interested to stay informed about the Wing Programming Language.

As always, we would love to hear what you think. Feel free to reply directly to this email, mention @winglangio on Twitter or ping us on slack.

10 min read
Hasan Abu-Rayyan

Okay, so you have decided to write your amazing application in Wing. You have enjoyed the benefits of Wing's cloud-oriented programming model and high-level abstractions. Everything works great: the queues are queuing, the functions are functional, and the buckets are filling. You are ready to hand this application off to the ops team for deployment when suddenly you are told there is a problem: the infrastructure doesn't comply with your organization's cloud excellence requirements.

Susan, an underappreciated and sleep-deprived platform engineer, tells you that your taggable infra resources must adhere to a rigorous tagging convention. She goes on to tell you that all buckets must have versioning and replication enabled. You also gather that she was probably going out for drinks later, since she kept going on about her security group and ciders.

Before you take to Twitter and post a long thread about how Wing is not enterprise ready, you recall the tech lead (we will call him Greg) who gave a presentation about Wing at your organization's last Cloud Center of Excellence (CCoE) meeting. Greg assured everyone that they would be able to use Wing and focus only on the functional aspects of their cloud applications. He said this would be made possible by leveraging the organization's custom Wing plugins. So now all that remains is to figure out what a Wing plugin is.

Welcome to the Wing Plugin System

The Wing SDK is hard at work abstracting away the non-functional concerns of your cloud application. That's great: you can focus on the business logic of your application without even caring which cloud the code will run on. However, these abstractions only solve one piece of the cloud compiler puzzle. Inevitably, any production-grade deployment will need a way to customize the compilation output to meet business requirements. Whether they are security, compliance, or cost optimizations, these scenarios require drilling down below the abstractions and into the compiler.

This is where the Wing plugin system comes in, as the first step toward opening up hooks into the Wing compilation process. By using these plugin hooks, Wing can still decouple the functional and non-functional concerns of our applications: the SDK handles functional concerns such as queues, functions, and buckets, while the plugin system handles non-functional concerns such as encryption, versioning, and security groups.

The plugin system boosts the Wing toolchain to the next level of cloud development, unlocking the ability for teams to solve complex real-world problems in Wing without compromising their organization's cloud principles. In fact, the plugin system enables organizations to double down and enforce their cloud principles without slowing down innovation.

But Why?...

I think the "why" is worth talking about for a moment. Why shouldn't developers care about the non-functional requirements of their application when writing code? In my opinion, the answer is not that developers shouldn't care about them; it's that, in most cases, they don't want to. Developers want to focus on innovation and pushing boundaries, not be shackled by the low-level details of the cloud. Throughout the history of software development, we have built abstraction layers on top of implementation details. Most developers don't want to understand the inner workings of file systems and how they differ between operating systems; we just want to read and write files, and so we have file system abstractions. Then, if we need to handle special cases based on the operating system or CPU architecture, we expect the abstraction to give us a way to do that without rewriting our entire code base.

Though I think it's worth noting that the purpose of the abstraction is not to hide implementation details and make cloud application development more vague, but rather to unlock new mental models that drive innovation.

"Being abstract is something profoundly different from being vague โ€ฆ The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise" - Edsger Dijkstra

This is the "why" of Wing's plugin system: it's intended to be a mechanism that helps teams be more precise in their cloud applications. If the SDK unlocks this semantic level of thought, then the plugin system protects it.

The Basics of Compiler Plugins

The plugin system is a simple yet powerful way to customize the compilation output of your Wing application. It consists of a series of hooks that are called at various stages of the compilation process. In the initial release of the plugin system we have made three hooks available: preSynth, postSynth, and validate. Additional hooks are currently in the think tank, but we will save those for another blog post and focus on what's available today.

To write a plugin, all you need to do is implement a JavaScript file that exports one or more of the compiler hooks. Once your plugin is written, use the --plugin flag in the wing CLI to include it in the compilation process.

wing compile -t tf-aws my-app.w --plugin my-plugin.js

The preSynth hook is executed after the construct tree has been initialized but before the code has been synthesized to produce deployment artifacts. Here, our plugins have the opportunity to add and mutate resources in the construct tree.

The postSynth hook executes right after synthesis has completed, and provides a way to manipulate the deployment artifacts (a Terraform config, a CloudFormation template, etc.).

The validate hook is executed only after all compilation and synthesis has completed. This matters because the hook is meant to serve as a way to examine and validate deployment artifacts without worrying that some later process will mutate them.
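As an illustration of the postSynth hook, here is a minimal sketch of a plugin that stamps organization-wide tags onto every S3 bucket in the synthesized Terraform config. The tag values are placeholders, and the assumption that the hook receives the config as a plain object (much like the validate hook) and returns the config to be written out is ours, not a guarantee from the original walk-through:

// tagging-plugin.js: a hypothetical example.
// Assumes postSynth receives the synthesized Terraform config as a plain object
// (as validate does) and that the returned object is used as the final config.
exports.postSynth = function (config) {
  const requiredTags = { team: "platform", "cost-center": "1234" }; // illustrative values

  // aws_s3_bucket is just one example of a taggable resource type
  const buckets = (config.resource && config.resource.aws_s3_bucket) || {};
  for (const name of Object.keys(buckets)) {
    buckets[name].tags = { ...buckets[name].tags, ...requiredTags };
  }

  return config;
};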

Plugins In Action

No blog post on a new feature would be complete without a walk-through :) So let's walk through the process of writing our own plugin. Not just any plugin though, but one that will help our favorite underappreciated platform engineer, Susan. As more teams in her organization have started adopting Wing, she has realized that she can use plugins to help them meet the requirements for deploying applications into the cloud without asking them to rewrite their code.

She has identified a common use case: her organization deploys an IAM role into every AWS account using nested stacks, and the use of this role as a permission boundary for all IAM roles is enforced through AWS Organizations SCPs (Service Control Policies). Without it, no IAM role can be created in the account, which is a source of friction for teams that want to get their Wing applications deployed quickly. (Whew, that's a whole lot of things a developer should not have to care about.)

She has decided to write a plugin that implements two hooks: preSynth and validate. During the preSynth hook, she wants to add the required permission boundary to all IAM roles in the construct tree. Then she intends to validate the existence of the permission boundary on all roles during the validate hook. This way, teams will find out at compile time, rather than at deploy time, if their app would fail to deploy, making for faster feedback loops.

Susan starts by writing the bare necessities of a plugin. She creates a file named permission-boundary-compliance.js and adds the following code:

// Add permission boundary to all IAM roles
exports.preSynth = function (app) { }

// Validate that all IAM roles have a permission boundary
exports.validate = function (config) { }

She plans to make use of a concept from CDKTF known as Aspects to traverse the construct tree and add the permission boundary. She can safely use this since she knows her intended target will be Terraform for AWS.

const iam_role = require("@cdktf/provider-aws/lib/iam-role");
const cdktf = require("cdktf");

class PermissionBoundaryAspect {
  constructor(permissionBoundaryArn) {
    this.permissionBoundaryArn = permissionBoundaryArn;
  }

  visit(node) {
    if (node instanceof iam_role.IamRole) {
      node.permissionsBoundary = this.permissionBoundaryArn;
    }
  }
}

// Add permission boundary to all IAM roles
exports.preSynth = function (app) {
  if (!process.env.PERMISSION_BOUNDARY_ARN) {
    throw new Error("env var PERMISSION_BOUNDARY_ARN not set");
  }
  cdktf.Aspects.of(app).add(new PermissionBoundaryAspect(process.env.PERMISSION_BOUNDARY_ARN));
};

Above, we can see she created a new Aspect class that implements the visit method. Each node the aspect visits is checked to determine whether it is an IAM role, and if so, its permission boundary is set to the value passed into the plugin through the PERMISSION_BOUNDARY_ARN environment variable.

Finally, for her validate step, she will simply traverse the Terraform config for all IAM roles and check that the permission boundary is set. Even though her preSynth hook will already have done the job, she knows that preSynth is a mutable hook and that another plugin may have altered things afterward.

// Validate that all IAM roles have a permission boundary
exports.validate = function (config) {
  for (const iamRole of Object.keys(config.resource.aws_iam_role)) {
    const role = config.resource.aws_iam_role[iamRole];
    if (!role.permissions_boundary) {
      throw new Error(`Role ${iamRole} does not have a permission boundary`);
    }

    if (role.permissions_boundary !== process.env.PERMISSION_BOUNDARY_ARN) {
      throw new Error(`Role ${iamRole} has incorrect permission boundary. Expected: ${process.env.PERMISSION_BOUNDARY_ARN} but got: ${role.permissions_boundary}`);
    }
  }
};

Now Susan can use the plugin in her CD pipelines to ensure that all IAM roles have the correct permission boundary set, without imposing this non-functional requirement on the application developers. Susan will go on to write more plugins to help her organization meet their security and compliance requirements. She is no longer the underappreciated platform engineer we know from the beginning of our blog post, but rather a hero with her own corner office, private parking spot, and an on call pager that never goes off.

Susan Is A Fictional Character

The outcome of her success is purely speculative. Your company may not have corner offices so there is a chance you will have to just settle for the parking spot.

Ask Not What Your Plugin Can Do For You...

This new plugin system is very exciting and has a lot of possibilities. However, if it is ever to reach its full potential, we need your help! If you have ideas for useful plugins, thoughts on additional hooks, or even just questions about how to make use of the plugin system, we want to hear from you! Open a pull request or an issue on our GitHub, join our community Slack, and let us know what you think.

Want to read more about Wing plugins? Check out our plugin documentation for more information on the plugin system, and visit our plugin code examples for more code.

6 min read
Chris Rybicki

There are two ways to create resources in the cloud: in preflight, or in inflight. In this post, I'll explore what these terms mean, and why I think most cloud applications should avoid dynamically creating resources in inflight and instead stick to managing resources in preflight using tools like IaC.

Today, the cloud computing revolution has made it easier than ever to build applications that scale to meet the demands of users. However, as the cloud has become more prevalent, it has also become more complex.

One of the important questions you'll have to answer in order to build an application with AWS, Azure, or Google Cloud is: how should I create the cloud resources for my application?

For simple applications, you can get away with creating resources by clicking around in the cloud console. But as your application grows, a more structured approach is necessary. Infrastructure as code (IaC) tools like Terraform and CloudFormation have become popular for this purpose.

In general, there are two ways to create cloud resources for an application: before the application starts running, as part of the deployment process, and while the application is running, as part of the data path. We refer to these two phases of the application's lifecycle as preflight and inflight. Clever, ha?

In the cloud ecosystem, many cloud services do not make a hard distinction between APIs that manage resources and APIs that use those resources. For example, in AWS's documentation for SQS, operations like CreateQueue and SendMessage are listed side by side. The same goes for Google Cloud's Pub/Sub service.
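To make the distinction concrete, here is a small JavaScript sketch using the AWS SDK for JavaScript (v2); the queue name is just a placeholder. createQueue is a control plane call that provisions the resource, while sendMessage is a data plane call that uses it:

const AWS = require('aws-sdk');
const sqs = new AWS.SQS();

async function main() {
  // Control plane: creates and configures the resource itself
  const { QueueUrl } = await sqs.createQueue({ QueueName: 'my-queue' }).promise();

  // Data plane: uses an existing resource as part of the application's data path
  await sqs.sendMessage({ QueueUrl, MessageBody: 'hello' }).promise();
}

main().catch(console.error);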

However, there are significant differences between these two types of APIs in practice. This post will explore why I believe most cloud applications should avoid dynamically creating resources in inflight and, instead, focus on managing resources in preflight using tools like IaC.

Resource management is hard

First, dynamic resource creation introduces enormous complexity from a resource management perspective. This is the main reason why the IaC tools were created. Not only is it too cumbersome and error-prone to create large numbers of cloud resources by clicking buttons in your web browser, but it also becomes difficult to reliably maintain, update, and track the infrastructure. This is especially true as you start to pay attention to the cost of your application.

When you use tools like Terraform or CloudFormation, you typically create a YAML file or JSON file that describes resources in a declarative format. These solutions have several benefits:

  • By using version control, it's easier to identify where resources came from or when they were changed among different versions of your app (especially across apps and teams).
  • Provisioning tools can detect and fix "resource drift" (when the actual configuration of a resource differs from the desired configuration).
  • You can estimate the cost of your workload based on the list of resources using tools like infracost.
  • It's more straightforward to clean up / spin down your application, since all of the resources in your app are tracked in the file.

When resources are created, updated, and deleted dynamically as part of an application's data path, we lose many of these benefits. I've heard of many cases where an application was designed around creating resources dynamically, and entire projects and teams had to be dedicated just to writing code that garbage collects these resources.

There are a few kinds of applications that require dynamic resource creation of course (like applications that provision cloud resources on behalf of other users), but these tend to be the exception to the rule.

Static app architectures are more resilient

Second, dynamic resource creation can make your application more likely to encounter runtime errors in production. Resource creation and deletion typically requires performing control plane operations on the underlying cloud provider, while most inflight operations only require data plane operations.

Cloud services are more fault tolerant when they only depend on data plane operations as part of the business logic's critical path. This is because even if the control plane of a cloud service has a partial outage (for example, if AWS Lambda functions could not be updated with new code), the data plane can continue running with the last known configuration, even as servers come in and out of service. This property, called static stability, is a desirable attribute in distributed systems, and most cloud platforms are designed around these tradeoffs.

Dynamic resource creation requires broader security permissions

Lastly, dynamic resource creation means your code needs to have admin-like permissions, which dramatically increases the attack surface for bad actors.

In the cloud, most machines ultimately need some form of network access - whether it's to connect with other VMs in a cluster, or to connect to other cloud services (like automatically scaling databases and messaging queues).

When resources are statically defined, you can narrowly scope these permissions to define which resources are exposed to the public, which resources can call which endpoints, and even which teams can view sensitive data (and how data accesses are logged and audited).

How to follow best practices... in practice?

I believe the best way to write applications for the cloud is to define your resources in preflight, and then use them in inflight. That's why Wing, the programming language my team and I are building, encourages developers to create resources in preflight as the easiest path to follow. We think the distinction between preflight and inflight is critical, which is why we've built it into the language itself. For example, if you try to create a resource in a block of code that is labeled with an inflight scope, Wing will produce a compiler error:

bring cloud;

let queue = new cloud.Queue();
queue.on_message(inflight (message: str) => {
  // error: Cannot create the resource "Bucket" in inflight phase.
  new cloud.Bucket();
});

Wing is intended to be a general-purpose language, so you'll still be able to make API calls to a cloud provider (through network requests or JavaScript/TypeScript libraries) to dynamically create resources if you really want to. But in these scenarios, Wing won't provide resource management capabilities or generate resource permissions for you, so it will be your responsibility to manage the resources and ensure they get cleaned up.
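For example, here is a hypothetical sketch of what that escape hatch could look like from inside inflight code, calling the AWS SDK for JavaScript directly (the bucket name is illustrative). Wing has no record of this bucket, so no IAM permissions are generated for it and nothing will clean it up:

// Hypothetical example: dynamic resource creation from inside a handler.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async () => {
  // This bucket is created at runtime, outside of Wing's preflight model:
  // it won't appear in the Terraform output and won't be removed on teardown.
  const bucketName = `dynamic-bucket-${Date.now()}`;
  await s3.createBucket({ Bucket: bucketName }).promise();
  return { statusCode: 200, body: `created ${bucketName}` };
};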

If you're curious to learn more, check out our getting started guide or join us on our community Slack and share what kinds of applications you're building in the cloud! We would love to hear your feedback about this design -- and if you have a use case where dynamically creating resources would be helpful, please share it with us through a GitHub issue or on this blog's discussion post! ❤️

8 min read
Elad Ben-Israel

Chris Rybicki has recently added support for let var to Wing (see the pull request), and I thought it might be a good opportunity to share our thoughts on the topic of immutability in Wing.

One of Wing's design goals is to help developers write safer code. Change in state is a major source of complexity (and bugs) in software. Eric Elliott's Dao of Immutability describes it beautifully:

"The true constant is change. Mutation hides change. Hidden change manifests chaos. Therefore, the wise embrace history"

A language-level guarantee that state cannot change offers opportunities for caching, runtime optimizations and lock-free concurrency. Those attributes are very useful in distributed systems.

Immutable by default

This is why, similarly to other modern programming languages such as Rust and Go, we are designing Wing to be immutable by default.

Let's look at an example:

let my_array = [1,2,3,4];

The above code defines an immutable array with the contents [1,2,3,4] and assigns it to my_array. Immutability means that the contents of the object cannot be modified.

So if we try to add an item:

my_array.push(5);
// ^^^^ Unknown symbol "push"

Eventually we would want this error to be something like Operation "push" is only available on mutable arrays. Did you mean to declare the array with MutArray<num>?, but bear with us...

This is because the type of my_array is Array<num>, which represents an immutable array: it simply doesn't have any methods that will cause it to change. In Wing, the following types are immutable: str, num, bool, Array<T>, Set<T> and Map<T>.

If I wanted to define it as a mutable array, I would need to be explicit:

let my_mut_array = MutArray<str>["hello", "world"];

And now we can go wild:

my_mut_array.push("go wild!"); // OK!

Similarly, we can define other mutable collection types:

let my_set = MutSet<str>{"hello", "world"};
let my_map = MutMap<bool>{"dog": true, "cat": false};

By the way: we are still debating if the standard types should be pascal-cased (e.g. Array<T>, MutArray<T>) or snake (array<T>, mut_array<T>). Let us know what you think!

Yes! We are deliberately making it slightly harder to define mutable collections.

In the future, maybe we will introduce some syntactic sugar like:

let x = mut [1,2,3]; // <-- not a doctor

This design concept is what's called "good cognitive friction" (or "mechanical sympathy"). It is introduced intentionally in order to make sure the user understands the system better and encourage best practices.

Reassignability

But immutability is not enough! Since we reference our array through my_array, the compiler also needs to guarantee that my_array will always point to the same object.

Let's look at a hypothetical example:

let i = 10;
new cloud.Function(inflight () => { print(i); }) as "f1";
i = 20;
new cloud.Function(inflight () => { i = i + 9; }) as "f2";
i = i - 90;

What value will the cloud function print? We can't tell because i is reassigned in multiple locations and there is absolutely no way to determine its value.

This is where reassignability comes into play. In fact, in Wing, the above example would have failed compilation:

   i = 20;
// ^ variable i is not reassignable

OK, now we can relax. The Wing compiler tells us that i is not reassignable.

Reassignability is a form of mutability (it mutates the reference), and most modern programming languages try to encourage single assignment: let in Rust, := in Go, and const everywhere in JavaScript.
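The JavaScript flavor of that guarantee is familiar:

const s = "hello";
s = "world"; // TypeError: Assignment to constant variable.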

So how do you make something reassignable? You can use let var:

let var s = "hello";
s = "world";

You can also use var in class and resource declarations:

class Foo {
  i: num;
  var s: str;

  init() {
    // all non-optional fields must be assigned at construction (not implemented yet)
    this.i = 10;
    this.s = "world";
  }

  bar() {
    // "var" fields can be reassigned at any time
    this.s = "hello";

    this.i = 20;
    // ^ i is not reassignable
  }
}

It can also be used in argument declarations:

let handler = inflight (var x: str) => {
  if x == "hello" {
    x = "${x} world";
  }
};

Why let var?

We originally considered using var instead of let var, but we realized this makes it too easy to do the wrong thing. Entire code bases would be written with just var, and mountains of linters would be written to protect you from shooting yourself in the foot.

Going back to the concept of "good cognitive friction": if you need to type a few more characters in order to make a variable reassignable (let var versus let), you will likely just use let most of the time, and the world will be a better place with fewer bugs and happier developers.

The Inflight Connection

So how is all this related to cloud development?

One of the very cool things about immutable state is that the compiler can create as many copies of it as needed. If the compiler has a guarantee that a blob of data will never change over the lifetime (and space) of the system, it can simply distribute it where it is needed.

This means that in Wing, immutable data can be seamlessly referenced from any inflight context.

Let's look at a very simple example just to explain the idea:

bring cloud;

let my_array = ["hello", "world"];

new cloud.Function(inflight (_: str) => {
  assert(my_array.length == 2);
}) as "test";

So what's going on here? We have defined a cloud function that simply references my_array. As simple and intuitive as this looks, the compiler actually had to do a bit of work to make it happen. As a reminder, a cloud.Function represents a cloud compute platform (such as AWS Lambda). This means that the code inside the inflight block is going to be executed sometime in the future, on some other machine, completely isolated from the original memory space in which my_array was defined.

Since our array is immutable, the compiler can safely clone it and bundle it together with the code that runs inside the cloud function.

In the future, the compiler will be able to identify that my_array.length itself is immutable, and will only copy its value (see #1251).

If we try to reference a reassignable variable from inflight code:

let var s = "hello";

new cloud.Function(inflight (_: str) => {
  print(s);
  // ^ Cannot capture a reassignable variable "s"
});

If we try to reference a mutable collection from inflight code:

bring cloud;

let my_array = MutArray<num>[1,2,3,4];

new cloud.Function(inflight (_: str) => {
  assert(my_array.length == 4);
  // ^^^^^^^^ Cannot reference 'my_array' of type 'MutArray<num>' from an inflight context
});

In this case as well, the compiler won't allow us to reference a mutable object within an inflight context, because it won't be able to guarantee correctness.

This isn't supported yet, but we will also have clone() to cover you in case you want to reference a snapshot of a mutable collection (clone_mut() returns a mutable clone):

let mut_arr = MutArray<num>[1,2,3];
let arr = mut_arr.clone();

new cloud.Function(inflight () => {
  assert(arr.length == 3);
});

See this pull request if you are curious how immutable capturing works in Wing (for the time being).

What about user-defined types?

In the current revision of the language specification, we still haven't covered the idea of immutable user-defined types (it's on our roadmap).

This means that the compiler only allows capturing primitives, Array, Map, Set, Json (coming soon) and structs (coming soon). Any other type cannot be captured directly, so you will likely need to extract the information you need from the object in order to reference it within an inflight context.

Summary

There are endless ways to express ideas using code, and we believe a programming language should be designed to make it intuitive for developers to write better, safer, and more robust code. We use "good cognitive friction", such as let var and MutXxx, to get our brains to spend another cognitive cycle on choosing the right approach.

Making Wing "immutable by default" is designed to encourage developers to write more functional and immutable code. We continue to think of how to do it in elegant, simple, and not annoying ways, and we would love your feedback and suggestions on Wing Slack.

10 min read
Elad Ben-Israel

A manifesto for cloud-oriented programming.

Don't get me wrong, I love the cloud! It has empowered me to build amazing things, and completely changed the way I use software to innovate and solve problems.

It's the "new computer", the ultimate computer, the "computerless computer". It can elastically scale, it's always up, it exists everywhere, it can do anything. It's boundless. It's definitely here to stay.

But holy crap, there is no way this is how we are going to be building applications for the cloud in the next decade. As the cloud evolved from "I don't want servers under my desk" to "my app needs 30 different managed services to perform its tasks", we kind of lost track of what a great developer experience looks like.