Over the years, plenty of tools have emerged to simplify managing your vast infrastructure and deployments. Whether it was Terraform, which sought to create declarative representations of cloud resources, or later on Kubernetes with its custom resource definitions (CRDs) connected to the underlying cloud platform, there have been many attempts to reduce the effort of deploying completely reproducible infrastructure that evolves with your product's lifecycle.
Not only do we need to provision databases, compute instances, containers, messaging queues, and all the other resources used by our mesh of services, we also want to update them across multiple regions by changing a piece of configuration once, without having to visit the web interface and switch regions manually, and finally, we want to tear everything down when we don't need it anymore.
While this sounds simple in theory, there's a lot of state and dependency management involved under the hood, which sometimes results in frustrating levels of complexity just to be able to move quickly and deploy your application. What's more, between having to learn a custom configuration language (HCL) and managing mountains of YAML files without losing your sanity over indentation and missing autocompletion, most concepts just weren't ready for mass adoption by developers, who'd ultimately like to deploy their work as easily as they wrote it.
Now that we've gone over the benefits we want to see, as well as the barriers of many existing solutions, let's transition to a tool that's growing in popularity and tries to solve the outlined problems in a way that should get us up to speed quickly: Pulumi offers an SDK in multiple programming languages that enables you to "define and deploy cloud apps and infrastructure in code". "In code" does not mean having to manage endless configuration files, though: currently, you can use TypeScript, JavaScript, Python, and Go to define and manage your infrastructure.
When connected to their cloud offering (the Pulumi Console), you can access the history of your deployments and collaborate with your team members. This is by no means vendor lock-in, as you're also able to store the state of your deployment (all resources managed by Pulumi) on disk or in other storage options like S3.
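If you'd rather not use the hosted backend, the CLI accepts a backend target when you sign in; a quick sketch (the bucket name below is just a placeholder):

# Keep state on your local disk instead of the Pulumi Console
❯ pulumi login --local

# Or keep it in an S3 bucket you manage yourself
❯ pulumi login s3://my-pulumi-state-bucket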
<div align="center"> <img src="https://www.pulumi.com/images/docs/reference/state_saas.png" width="80%" loading="lazy" alt="pulumi architecture" /> <br /> <span style="text-align: justify;"> As you can see, Pulumi will store its state and other details of <br /> your deployments in its own cloud by default, which you can <a href="https://www.pulumi.com/docs/intro/concepts/state"> change </a> if needed. </span> </div>

When it comes to the range of support for the endless cloud providers out there, Pulumi has got you covered: AWS, Azure, Google Cloud, Kubernetes, as well as countless other providers can be configured and used for your project. If you're still missing resources, you can easily create custom providers to handle resources of all kinds.
As for the terms that we'll use frequently later on: all of your infrastructure is defined in a program developed using one of the supported languages listed above. A project is the logical unit containing the program, and you can configure stacks for each type of deployment in case you have to provision multiple targets, each of which might be configured slightly differently from the others. If you deploy your infrastructure only one way, you'll end up with a single stack.
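In practice, that just means one stack per deployment target, created and selected through the CLI; for example (the stack names here are only placeholders):

# One stack per deployment target
❯ pulumi stack init staging
❯ pulumi stack init production

# Switch between them before running an update
❯ pulumi stack select staging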
You can also add configuration for your provider (for example AWS credentials and regions) or your program (e.g. naming conventions, existing resource names) in your stack file or by using the CLI. Secrets use the same system but will be encrypted, with options to customize the encryption provider (such as AWS KMS, Azure Key Vault or HashiCorp Vault).
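A minimal sketch of how that looks with the CLI (the program keys are made up for illustration; plain values land in the stack file as-is, secrets are stored encrypted):

# Provider configuration for the current stack
❯ pulumi config set aws:region eu-central-1

# Program configuration, namespaced by project
❯ pulumi config set tableName my-table

# Secrets are encrypted with the configured secrets provider
❯ pulumi config set --secret dbPassword hunter2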
Organizing your stacks and projects is completely up to you: you could go with a monolithic setup where you deploy all of your infrastructure in one project, but you can also easily split resources across projects to your liking, whatever works for you.
To show you just how easy it is to get started with Pulumi on AWS (or any other available provider; I'll go with AWS here), let's create a test setup. If you haven't installed the Pulumi CLI yet, head over to the installation instructions, or simply run
❯ brew install pulumi
if you're running on macOS. Up next, we'll create our Pulumi project folder and initialize the project by running
# Sign in to Pulumi
❯ pulumi login
# Create your project directory
❯ mkdir pulumi-test && cd pulumi-test
# Create your Pulumi project
❯ pulumi new aws-go
inside of our new directory. Pulumi will ask us what to name our stack; we can simply go with anything. For now, I'll use test.
If we check out our project once again, we'll see that Pulumi generated a couple of files we'll be using.
├── Pulumi.test.yaml
├── Pulumi.yaml
├── go.mod
├── go.sum
└── main.go
- Pulumi.test.yaml is our stack configuration file. All configuration values we'll set will be stored here.
- Pulumi.yaml describes your project as a whole, for example the language used (see the sample below).
- main.go contains the business logic to set up the resources we want.
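For reference, the generated Pulumi.yaml is only a few lines long and should look roughly like this (the description depends on what you entered during pulumi new):

name: pulumi-test
runtime: go
description: A minimal AWS Go Pulumi program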
Before we start, we need to add the credentials needed to create and manage AWS resources to our stack configuration. This can be done by running
❯ pulumi config set aws:profile <your profile name>
If you haven't set up an AWS profile, make sure you've got the AWS CLI installed and run
# set up AWS profile, name can be
# left blank to go with "default"
❯ aws configure --profile <name of your profile>
Let's say we want to start off by creating an S3 bucket, making it publicly available, and adding a simple object to it. Luckily, Pulumi's example code already creates a bucket resource, which we can change slightly to grant read access to all visitors:
pulumi.Run(func(ctx *pulumi.Context) error {
    // Create an AWS resource (S3 Bucket) with a public-read ACL
    bucket, err := s3.NewBucket(ctx, "my-bucket", &s3.BucketArgs{
        Acl: pulumi.String("public-read"),
    })
    if err != nil {
        return err
    }

    // Export the generated bucket name so it shows up after `pulumi up`
    ctx.Export("bucketName", bucket.ID())
    return nil
})
You will notice a couple of things: firstly, Pulumi itself runs as a function, invoked in your main func. What's more, instead of supplying regular data types like strings when you create a resource, Pulumi expects you to use so-called inputs. Comparable to Promises in JavaScript, inputs are Pulumi's way of dealing with asynchronous values: in some cases a resource hasn't been created yet, but you already want to use its data. The solution is that every resource-modifying operation returns outputs, which can be exported to the CLI, making them visible when you're setting up your infrastructure, or passed as inputs to other resources. While this might sound a bit abstract, it will become important in the following steps.
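To make this a little more concrete, here's a rough sketch (not part of our walkthrough) of deriving a new value from an output with the Go SDK's ApplyT helper, assuming the bucket resource from above; the ARN string is only used as an illustration:

// ApplyT is roughly comparable to .then() on a JavaScript Promise:
// the callback runs once the underlying value has resolved.
bucketArn := bucket.ID().ToStringOutput().ApplyT(func(name string) string {
    return "arn:aws:s3:::" + name
}).(pulumi.StringOutput)

// Exported outputs are printed by the CLI after an update
ctx.Export("bucketArn", bucketArn)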
Now let's try it out and run
❯ pulumi up
Previewing update (test):
Type Name Plan
+ pulumi:pulumi:Stack pulumi-test-test create
+ └─ aws:s3:Bucket my-bucket create
Resources:
+ 2 to create
Do you want to perform this update?
yes
> no
details
If you're greeted with a similar result, everything's working! If not, you might have to check whether you've set up your AWS credentials correctly (as described a few steps earlier). As you can see, Pulumi will create an AWS S3 bucket as part of this update; we'll choose "yes" and go ahead.
It might take a while, but in the end, you should see something like the following:
Do you want to perform this update? yes
Updating (test):
Type Name Status
+ pulumi:pulumi:Stack pulumi-test-test created
+ └─ aws:s3:Bucket my-bucket created
Outputs:
bucketName: "my-bucket-e04048b"
Resources:
+ 2 created
Duration: 41s
Because we didn't explicitly name our bucket resource, Pulumi assigned it a unique identifier combined with our resource name. The outputs visible are the result of running ctx.Export(...) on an output type.
Now, let's continue and create our first object in the bucket.
// Create an object inside the bucket, referencing the bucket's ID output
obj, err := s3.NewBucketObject(ctx, "bucket-item", &s3.BucketObjectArgs{
    Bucket:      bucket.ID(),
    Key:         pulumi.String("test"),
    Content:     pulumi.String("hello world"),
    ContentType: pulumi.String("text/plain"),
    Acl:         pulumi.String("public-read"),
})
if err != nil {
    return err
}

// Export the object's key so we can see it in the CLI output
ctx.Export("objectId", obj.ID())
This time around, we want to name our file test and add some basic content to it. Because we want to add the file to our previously created bucket, we somehow need to reference it. Of course, we don't know what the bucket will be called beforehand; that's precisely why Pulumi's input/output system is built the way it is. We can simply pass bucket.ID() here, which will be resolved once the bucket has been created.
Let's run pulumi up again and check what's going to happen:
❯ pulumi up
Previewing update (test):
Type Name Plan
pulumi:pulumi:Stack pulumi-test-test
+ └─ aws:s3:BucketObject bucket-item create
Outputs:
+ objectId : output<string>
Resources:
+ 1 to create
2 unchanged
Do you want to perform this update?
yes
> no
details
As you can see, it doesn't contain our bucket anymore. This is because Pulumi figured out that we don't want to change our existing bucket resource. If we were to change some of its properties, it would be listed here again. Some resource updates cannot proceed without replacing the resource and creating a new instance of it; you'll be warned when that is about to happen.
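One example of such a change would be giving our bucket an explicit physical name after the fact; S3 buckets can't be renamed in place, so Pulumi would have to replace the resource. A sketch of what that change might look like (not part of our walkthrough):

bucket, err := s3.NewBucket(ctx, "my-bucket", &s3.BucketArgs{
    // Setting the physical bucket name on an existing bucket
    // forces a replacement rather than an in-place update
    Bucket: pulumi.String("my-renamed-bucket"),
    Acl:    pulumi.String("public-read"),
})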
Once again, our update succeeded:
Do you want to perform this update? yes
Updating (test):
Type Name Status
pulumi:pulumi:Stack pulumi-test-test
+ └─ aws:s3:BucketObject bucket-item created
Outputs:
bucketName: "my-bucket-e04048b"
+ objectId : "test"
Resources:
+ 1 created
2 unchanged
Duration: 11s
Now it would be amazing to retrieve the URL of our created object so we can access it! For this to work, we just have to add another export:
// Combine several outputs into the object's public URL
ctx.Export(
    "objectUrl",
    pulumi.Sprintf(
        "https://%s.s3.%s.amazonaws.com/%s",
        bucket.ID(),
        bucket.Region,
        obj.ID(),
    ),
)
After running pulumi up one final time, we're greeted with our object URL:
+ objectUrl : "https://<...>.s3.<...>.amazonaws.com/test"
And if we open the link up in our browser, we can see our wonderful plain-text hello world!
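The same check works from the terminal, substituting the bucket and region from your own output:

❯ curl https://<...>.s3.<...>.amazonaws.com/test
hello world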
Of course, it wouldn't be a full guide without removing the resource we created again, returning to a clean slate.
❯ pulumi destroy
Previewing destroy (test):
Type Name Plan
- pulumi:pulumi:Stack pulumi-test-test delete
- ├─ aws:s3:BucketObject bucket-item delete
- └─ aws:s3:Bucket my-bucket delete
Outputs:
- bucketName: "my-bucket-e04048b"
- objectId : "test"
- objectUrl : "..."
Resources:
- 3 to delete
Do you want to perform this destroy?
yes
> no
details
Now we've gotten rid of the bucket and its object. If we want to remove the stack as well, we can run
❯ pulumi stack rm test
While this doesn't represent infrastructure at scale, it helps to show some of the benefits. Resources can be created using the standard providers Pulumi offers, which are often built on the same code that powers the corresponding Terraform providers. Some aspects of managing your infrastructure this way might therefore feel similar to using Terraform, sometimes even identical, because the underlying code is the same.
And don't get me wrong with all of this: managing your infrastructure is still difficult. But by using tools like Pulumi, we can shift much of the complexity that other workflows force us to adopt into a domain we understand. Being able to write regular Go or TypeScript code feels great from the start: you can leverage all the language features you know, build abstractions, and pull in other libraries.
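As a quick sketch of what that buys you (the helper and its naming convention are invented for illustration), repetitive resource definitions become a plain Go function you can call in a loop:

// newTeamBucket is a hypothetical helper applying our naming
// convention and default ACL to every bucket we create.
func newTeamBucket(ctx *pulumi.Context, team string) (*s3.Bucket, error) {
    return s3.NewBucket(ctx, "assets-"+team, &s3.BucketArgs{
        Acl: pulumi.String("private"),
    })
}

// Inside pulumi.Run: one bucket per team, using a plain Go loop
for _, team := range []string{"payments", "search", "frontend"} {
    if _, err := newTeamBucket(ctx, team); err != nil {
        return err
    }
}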
The one natural downside of a workflow that keeps track of dependencies and lets you check an execution plan before running potentially dangerous operations is that you'll have to get used to the input and output system. This is probably the only unnatural element you'll face throughout the development cycle, as you often cannot use variables as directly as you'd hope. But as we saw in the last step, Pulumi offers a lot of helpers around this structure.
I hope I could get you interested in the future of Infrastructure-as-Code tools with the help of Pulumi's awesome implementation of this idea. I'm looking forward to seeing more products emerge from this trend. If you've got any questions, suggestions, or feedback in general, don't hesitate to reach out on Twitter or by mail.