In my recent post, I talked about how we reduced the deployment times of our Infrastructure-as-Code workflow with Pulumi. The benefits of managing all deployed resources declaratively are manifold: observability, because you can see exactly what you're running; security, because secrets are stored encrypted; and composability, because all big providers are supported out of the box and you can even write your own additions if needed.
Expanding on this concept, what if we could manage our containers in Pulumi as well? Imagine we want to set up a local development environment or we're deploying containers manually instead of using a managed service like Amazon ECS, Google Cloud Run, or Fly.io.
In those cases, our goal is to get containers running in our target environment. We don't want to manage them by hand, though, but rather declare what we want to run and let Pulumi take care of getting there. To have a development environment that mimics production with few or no differences, we also want to use external services from other providers, such as queues (SQS) and mailing (SES) provided by AWS.
With the Docker provider for Pulumi, which is based on the upstream Docker Terraform provider, we can do just that.
Creating our first container
Once you've set up Pulumi and Docker on your machine, create a new project with
pulumi new typescript
You might be asked to sign in and create an account if you haven't done so yet. After confirming the project and stack name, and waiting for Pulumi to create everything, let's install the Docker provider with
npm i @pulumi/docker
Now that we have everything in place, let's jump into index.ts and create our first container!
import * as docker from '@pulumi/docker';
// Pull nginx image from registry
const nginxImage = new docker.RemoteImage('nginx', {
name: 'nginx:1.21.3-alpine'
});
// Create a new Docker container, exposing container port 80 on host port 8080
new docker.Container('nginx', {
image: nginxImage.latest,
ports: [{ internal: 80, external: 8080 }]
});
The statements above will create two resources: a remote image resource that pulls the specified nginx image onto the machine, and a container resource that runs the image and exposes its ports.
To create the resources, let's run pulumi up. After confirming, all resources should be created. Heading over to http://localhost:8080, you should see the default nginx welcome page.
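If you prefer the terminal, you can verify the container responds without opening a browser (assuming curl is installed):
curl -I http://localhost:8080
nginx should answer with an HTTP/1.1 200 OK status line.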
Pretty simple, right? We defined the container we want to run as a resource, and didn't have to write a weird YAML file. What's even better, you can deploy the resource to a remote host as well using SSH.
When containers are updated, they are often replaced rather than edited in place. This can lead to a brief downtime, so be careful when rolling out changes to running workloads. Sometimes, Pulumi's state can also end up in a mismatch, with the Docker container deleted but the resource still existing in the stack. Usually, another run will fix that, but in the worst case, you might have to repair the stack state yourself.
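If the state does drift, pulumi refresh reconciles the stack with what is actually running; as a last resort, you can drop the orphaned resource from the state by its URN (listed by pulumi stack --show-urns):
pulumi refresh
pulumi state delete <urn>
Replace <urn> with the URN of the affected container resource.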
Running remotely
If you have a server running SSH and Docker, you can run the container on it as well! Simply create a new stack with pulumi stack init, then expose the following variable to connect to your machine via SSH.
export DOCKER_HOST=ssh://user@remote-host:22
When you run pulumi up, Pulumi will use your local SSH credentials to connect to the referenced server. This means that any host and private keys need to be set up on the machine invoking the Pulumi CLI.
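Before running Pulumi, it's worth verifying that the connection works non-interactively. A quick check, assuming your key lives at ~/.ssh/id_ed25519 and is loaded into the SSH agent:
ssh-add ~/.ssh/id_ed25519
ssh user@remote-host docker version
If this prints the server's Docker version without prompting for a password, Pulumi should be able to connect as well.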
Once connected, Pulumi will perform the same actions, pulling images to the server using the configured credentials, and running the containers on the remote instance.
If you always want to connect to the same host or want to manage where to connect in Pulumi, you can create a provider resource and pass it to both the remote image and container resources. This will override the default or environment-based settings.
import * as docker from '@pulumi/docker';
const remoteInstance = new docker.Provider('remote', {
host: 'ssh://user@remote-host:22'
});
const nginxImage = new docker.RemoteImage(
'nginx',
{
name: 'nginx:1.21.3-alpine'
},
{ provider: remoteInstance }
);
new docker.Container(
'nginx',
{
image: nginxImage.latest,
ports: [{ internal: 80, external: 8080 }]
},
{ provider: remoteInstance }
);
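If you manage multiple stacks (say, a local and a production one), you don't have to hard-code the host either: you can read it from the stack configuration instead. A small sketch, assuming a config key named dockerHost that you set per stack:
import * as pulumi from '@pulumi/pulumi';
import * as docker from '@pulumi/docker';
// Read the Docker host from the stack configuration (hypothetical key
// "dockerHost"), falling back to the local daemon when it's unset
const config = new pulumi.Config();
const remoteInstance = new docker.Provider('remote', {
  host: config.get('dockerHost') ?? 'unix:///var/run/docker.sock'
});
Set the key once per stack with pulumi config set dockerHost ssh://user@remote-host:22.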
Integrating external services
Moving all resources into Pulumi has more benefits than just declaring your containers as code: you can integrate other cloud providers to create your infrastructure exactly how you need it.
For local environments, this means that you no longer have to create access tokens and other dependencies such as queues by hand and copy them around; you can simply spin up a new stack that includes tokens, services, and your own containers.
import * as pulumi from '@pulumi/pulumi';
import * as docker from '@pulumi/docker';
import * as aws from '@pulumi/aws';
// Create new user for API
const user = new aws.iam.User('api');
// Attach existing (managed) policy to user
new aws.iam.UserPolicyAttachment('s3-read', {
policyArn: 'arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess',
user: user.name
});
// Generate access key for user
const accessKey = new aws.iam.AccessKey('api', { user: user.name });
const apiService = new docker.RemoteImage('api', {
name: 'api:latest'
});
// Create API container
new docker.Container('api', {
image: apiService.latest,
ports: [{ internal: 4000, external: 4000 }],
// Add environment variables
envs: [
// Store secret access key as secret
pulumi.secret(
pulumi.interpolate`AWS_SECRET_ACCESS_KEY=${accessKey.secret}`
),
pulumi.interpolate`AWS_ACCESS_KEY_ID=${accessKey.id}`,
'AWS_REGION=eu-central-1'
]
});
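Since we mentioned queues earlier, extending this snippet with an SQS queue follows the same pattern. A sketch that builds on the example above (the queue name jobs, the policy name, and the QUEUE_URL variable are my own choices; user is the IAM user created earlier):
import * as aws from '@pulumi/aws';
// Create a queue for the API to consume (hypothetical name)
const queue = new aws.sqs.Queue('jobs');
// Allow the API user to send and receive messages on this queue only
new aws.iam.UserPolicy('jobs-access', {
  user: user.name,
  policy: queue.arn.apply((arn) =>
    JSON.stringify({
      Version: '2012-10-17',
      Statement: [
        {
          Effect: 'Allow',
          Action: ['sqs:SendMessage', 'sqs:ReceiveMessage', 'sqs:DeleteMessage'],
          Resource: arn
        }
      ]
    })
  )
});
You could then add pulumi.interpolate`QUEUE_URL=${queue.url}` to the container's envs list.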
Persisting data with volumes
If you need to persist data across container replacements, you can create a volume resource and attach it to a container. This is useful for database containers, for example.
import * as pulumi from '@pulumi/pulumi';
import * as docker from '@pulumi/docker';
// Replace this with an actual configuration secret or random.RandomString
const dbPassword = pulumi.secret('randomSecretGoesHere');
const postgresImage = new docker.RemoteImage('postgres', {
name: 'postgres:latest'
});
const dbData = new docker.Volume('db-data');
new docker.Container('db', {
image: postgresImage.latest,
ports: [{ internal: 5432, external: 5432 }],
// Add environment variables
envs: [
// Store db password as secret
pulumi.secret(pulumi.interpolate`POSTGRES_PASSWORD=${dbPassword}`)
],
volumes: [
{ volumeName: dbData.name, containerPath: '/var/lib/postgresql/data' }
]
});
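To use the credentials outside of Pulumi, for example for a local psql session, you could export the connection string as a secret stack output. A sketch that extends the snippet above, assuming the default postgres user and database:
// Export the connection string as an encrypted stack output
export const connectionString = pulumi.secret(
  pulumi.interpolate`postgres://postgres:${dbPassword}@localhost:5432/postgres`
);
You can read it back later with pulumi stack output connectionString --show-secrets.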