In almost forgotten times, we used to deploy our services to virtual machines by tunneling into the machine, downloading the latest code or image, and running the workload manually. Mostly, this deployment model has been superseded by more automated solutions like hosted container services and PaaS products, but the appeal of simply deploying your app to a remote machine hasn’t completely faded.
Especially for side projects, where high availability is not critical, having a place to put your backend can help you ship orders of magnitude cheaper than on hosted solutions. And when you don’t know whether your side project will take off, spending 30 bucks or more doesn’t sound attractive.
So let’s revisit remote deployments, albeit with a special touch! Let’s talk about deploying Docker containers over SSH.
Sure, you could always run
ssh bruno@remote 'docker run nginx:latest'
But using this programmatically would be a painful exercise in escaping arguments, parsing output, and figuring out whether the deployment actually succeeded. It would be much nicer if we could hide the fact that we are deploying to a remote machine rather than the local one behind a layer of abstraction and let Docker handle the rest.
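To make the pain concrete, a hand-rolled "deploy and check that it worked" could look something like this (a rough sketch only; the detached run and the inspect template are standard Docker CLI, the error handling is purely illustrative):

container_id=$(ssh bruno@remote 'docker run -d nginx:latest') || exit 1
ssh bruno@remote "docker inspect -f '{{.State.Status}}' $container_id"

Workable, but brittle, and exactly the kind of glue code nobody wants to maintain.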
And, of course, that’s possible! Docker supports a lesser-known environment variable, DOCKER_HOST. Setting it to a value targeting your remote machine, like the following,
export DOCKER_HOST=ssh://bruno@remote
will automatically forward all subsequent calls to the Docker CLI (and docker-compose) to that remote machine. Of course, you need to have Docker installed on both machines, but then, running
docker run nginx:latest
or even
docker-compose up -d
will automagically deploy to the remote machine, even if your compose file is available only locally. This is great for environments like CI, where you have access to the source code and simply want to update a deployment without manually copying over the compose file.
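As a sketch, a CI deploy step built on this could look something like the following (assuming the runner has an SSH key that the remote machine accepts; ssh-keyscan is only there to make the connection non-interactive):

ssh-keyscan remote >> ~/.ssh/known_hosts
export DOCKER_HOST=ssh://bruno@remote
docker-compose up -d

The compose file comes from the checked-out repository; only the resulting containers end up on the remote machine.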
Now, it wouldn’t be software engineering if there weren’t some trade-offs. So let’s try to understand the limits of this solution.
Unfortunately, it seems that you need to be authenticated both on the local machine executing the Docker command and on the remote machine where the workloads will be deployed. If you try to deploy an image from a private registry and aren’t signed in to that registry on both machines, the deployment will fail.
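In practice that means logging in twice, once locally and once over SSH (registry.example.com is just a placeholder here; -t forces a TTY so the interactive login prompt works on the remote side):

docker login registry.example.com
ssh -t bruno@remote 'docker login registry.example.com'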
A workaround for this could be to run
ssh bruno@remote 'docker pull <private image>'
beforehand, so the image is already present on the remote machine and your Docker client never needs to pull it. This works without any issues but does force you to run ssh manually.
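If you’d rather keep the whole thing scriptable anyway, the pre-pull and the actual deployment can live in the same small deploy script: first the plain-SSH pull, then the usual compose call through DOCKER_HOST (the image name below is a placeholder):

ssh bruno@remote 'docker pull registry.example.com/my-app:latest'
export DOCKER_HOST=ssh://bruno@remote
docker-compose up -d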
Another downside is latency: diffing the state of the compose file against the current deployment and re-creating containers on a remote machine can take a lot longer than running docker-compose up directly on said machine.
All things considered, using DOCKER_HOST is a neat way to deploy containers to a remote machine and to use docker compose without syncing the compose file around. If you’re on managed providers or have to build for high availability, though, deploying to a single machine might not be suitable.