If you’ve worked on a product that depends on multiple services and surrounding cloud resources, you’ve probably wondered about the best way to set things up for local development. Most teams default to the good old Docker Compose setup, supplemented by a few scripts, and that works until you run into the limits of a static configuration.
Let’s go over the typical stack and see how teams develop locally and what we could change to make the experience even better.
Containerized Services
The core building block of every setup is, no doubt, a set of frontend and backend services. Unless you’re running frontend and backend in a single service and don’t have to worry about storing data, you’re probably already deploying your workloads as containers, for reproducible builds and the ability to run your stack anywhere.
Docker Compose is an easy way to spin up multiple containers, connected to the same virtual network, with volumes attached for persistent storage. If your setup is mostly static, and you don’t care about multiple deployments, you’ll probably be in good hands with Docker Compose.
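A minimal compose file for a stack like this might look as follows (the service names, images, and ports are illustrative, not from a specific project):

```yaml
services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    depends_on:
      - db
  db:
    image: postgres:16
    volumes:
      # named volume so data survives container restarts
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

Both services join the same default network, so `api` can reach the database at the hostname `db`.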
If you want more control over multiple deployments of your services or an improved templating experience that allows for higher granularity when configuring local containers, read on.
Local Services
Running containers on your dev machine is alright, but at times, you may want direct access for an improved debugging experience. While plugins for IDEs exist that allow you to debug applications running in Docker, sometimes it’s just easier to start a specific service you’re working on locally, outside of Docker.
If you’re already using Docker Compose, you may want to move environment variables into a shared env file, so you can use the same variables for containers and local debugging. You’ll still have to override network addresses and URLs to match each environment (hostnames that resolve inside the Compose network won’t resolve on the host), but the remaining variables should work.
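One way to share variables is Compose’s `env_file` option, pointing containers at the same `.env` file you load when debugging locally (the service name is illustrative):

```yaml
services:
  api:
    build: ./api
    env_file:
      - .env
```

Variables with embedded hostnames, like a `DATABASE_URL` pointing at `db:5432` inside the Compose network, are the ones you’d override to `localhost:5432` when running the service directly on your machine.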
Cloud Resources
Most products depend not only on in-house services but also on managed, third-party cloud resources like buckets, databases, queues, and other primitives. You’ll want to create these resources for each instance of a development setup, that is, for each team member. Otherwise, you’d run into weird cases where messages start appearing in a queue even though you haven’t triggered an action; save yourself and your team that headache.
While you can run some cloud resources, such as S3-compatible buckets, locally, you probably want to run resources that match your real setup, so that parts of your system behave the same way regardless of the deployment target. Some services might not even be available locally.
Configuring each resource manually is a recipe for mismatched configuration, so picking up infrastructure as code along the way is a good investment. Chances are your team is already using IaC for production and preview deployments, so using the same tooling for all environments makes sharing even easier.
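With IaC, provisioning a per-developer copy of a resource becomes a matter of parameterization. A sketch in Terraform, assuming a hypothetical `developer` variable and bucket naming scheme:

```hcl
variable "developer" {
  type = string
}

# One bucket per developer, so local queues/uploads never collide
resource "aws_s3_bucket" "uploads" {
  bucket = "myapp-uploads-${var.developer}"
}
```

The same pattern applies to queues, databases, and other primitives: one workspace or variable value per team member.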
Once you’re using IaC for cloud resources and containers for running services locally, you’ll have to figure out how to store environment variables for the current environment. Up next, we’ll check out possible solutions for this issue.
IaC all the way
When you’re using infrastructure-as-code tools for production and development cloud resources, you might be tempted to manage the entire infrastructure setup, including local development, with IaC. Since Docker providers do exist, this may even be possible, but it’s probably not very ergonomic to work with.
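For illustration, running a local container through the community Docker provider for Terraform looks roughly like this (resource and image names are made up):

```hcl
resource "docker_image" "api" {
  name = "myapp/api:dev"
}

resource "docker_container" "api" {
  name  = "api"
  image = docker_image.api.image_id
}
```

It works, but every rebuild or restart of a single service now goes through a full plan/apply cycle, which is a poor fit for a tight edit-run-debug loop.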
Introducing Atlas
Combining all ideas outlined above, I started working on a solution to improve the local development experience. Please note that what I’m about to show is mostly an experiment and not an active project because I’m at capacity building Anzu, but it might be an interesting starting place for new tools.
Atlas provides the necessary concepts and abstractions for developing multiple services in dynamic configurations completely locally. It’s built with multi-region and multi-tenant SaaS applications in mind, which may require running more than just one virtual environment for development purposes. It’s exclusively designed for local development and does not aspire to solve all other problems out there. It is built on top of Docker to build images and run services in isolation.
At the core of Atlas is the concept of an Atlasfile, a configuration written in your language of choice, evaluated when running the Atlas CLI. Writing your configuration as code completely removes the friction of a limited templating experience and allows you to pull in configuration from external places like IaC tools.
Atlas expects multiple teams to contribute to the complete product, which means that specific teams own specific services. Configuring services close to code helps keep the local development experience in sync with code changes, solving the issue of drift between production and development.
```go
package main

import (
	"fmt"
	"github.com/brunoscheufler/atlas/atlasfile"
	"github.com/brunoscheufler/atlas/sdk/atlas-sdk-go"
	"os"
)

func main() {
	err := sdk.Start(&atlasfile.Atlasfile{
		Services: []atlasfile.ServiceConfig{
			{
				Name: "api",
				Artifact: &atlasfile.ArtifactRef{
					Artifact: &atlasfile.ArtifactConfig{
						Name: "api",
					},
				},
			},
		},
	})
	if err != nil {
		fmt.Printf("could not start atlasfile: %s", err.Error())
		os.Exit(1)
	}
}
```
Stacks combine services to run together, sharing networks and exposing ports to the host system. As everything is just code in the end, stacks can be generated with a simple for loop, using the power of a full programming language.
```go
package main

import (
	"fmt"
	"github.com/brunoscheufler/atlas/atlasfile"
	"github.com/brunoscheufler/atlas/sdk/atlas-sdk-go"
	"os"
)

func main() {
	err := sdk.Start(&atlasfile.Atlasfile{
		Stacks: []atlasfile.StackConfig{
			{
				Name: "regional",
				Services: []atlasfile.StackService{
					{
						Name: "api",
					},
					{
						Name: "worker",
					},
				},
			},
		},
	})
	if err != nil {
		fmt.Printf("could not start atlasfile: %s", err.Error())
		os.Exit(1)
	}
}
```
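Because stacks are plain values, a loop can produce one per region. The following self-contained sketch uses a simplified stand-in for `atlasfile.StackConfig` (the struct and region names are illustrative, not the real Atlas types):

```go
package main

import "fmt"

// StackConfig is a simplified stand-in for atlasfile.StackConfig,
// used only to illustrate generating stacks programmatically.
type StackConfig struct {
	Name     string
	Services []string
}

func main() {
	// Hypothetical regions for a multi-region SaaS setup
	regions := []string{"eu-central", "us-east", "ap-south"}

	var stacks []StackConfig
	for _, region := range regions {
		stacks = append(stacks, StackConfig{
			Name:     fmt.Sprintf("regional-%s", region),
			Services: []string{"api", "worker"},
		})
	}

	for _, s := range stacks {
		fmt.Println(s.Name)
	}
}
```

The generated slice would then be passed to `sdk.Start` in place of the hand-written stack list.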
When running `atlas up`, you can optionally specify the stacks to start, or keep the default and start all known stacks. Before building, Atlas creates an artifact graph to determine the build order, in case service images depend on each other. This means you don’t have to worry about building the right image at the right time.
For local development, individual service containers can be started and stopped, and `atlas env` generates an env file with all variables needed for running locally, with the right hostnames set.
Atlas helps to reuse what you already have (Dockerfiles, infrastructure-as-code) and becomes a single source of truth for your local development environment.