At Anzu, our mission is to help teams move faster, happier. We want the work of building and shipping services to fade into the background, so you can focus on building instead. One of the big pillars of this mission is reducing the effort of integrating cloud services with your applications.
You probably loathe the task of connecting to services like databases, buckets, queues, and more: Searching for a driver library, setting up the SDK, and securely storing and accessing credentials all take up valuable time and energy.
Anzu takes care of all of that manual work and toil: declare a service, connect any resource, and synchronize your codebase. That’s it! Services bridge the gap between infrastructure management and code, and Anzu glues it all together without any further user action.
Connecting a database
Previously
Before Anzu, you would have started by deploying your database, followed by selecting a database driver to use. Then, you would have passed an environment variable around, but that was, of course, pretty unsafe, so you opted for a solution like AWS Secrets Manager. Okay, for local development you would have used a different setup, but let’s put that aside for now. Assuming you managed to insert the connection string into Secrets Manager, you would have then fetched the value at runtime and created a connection to your database.
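For comparison, here’s a minimal sketch of that pre-Anzu flow, assuming the AWS SDK for Go v2 and pgx, with a placeholder secret name:

package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/secretsmanager"
	"github.com/jackc/pgx/v4/pgxpool"
)

func connectDatabase(ctx context.Context) (*pgxpool.Pool, error) {
	// Load AWS credentials and region from the environment.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return nil, err
	}

	// Fetch the connection string from Secrets Manager at runtime.
	sm := secretsmanager.NewFromConfig(cfg)
	out, err := sm.GetSecretValue(ctx, &secretsmanager.GetSecretValueInput{
		SecretId: aws.String("prod/database/connection-string"), // placeholder
	})
	if err != nil {
		return nil, err
	}

	// Open a connection pool to the database.
	return pgxpool.Connect(ctx, *out.SecretString)
}

And this still leaves out IAM permissions, secret rotation, and that separate local development setup.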
With Anzu
Using Anzu, your journey starts in the dashboard. From a wide range of providers, select the resource you want to create. Next, create your service and connect the previously created database to it. Finally, set up how you want to deploy your service by creating any resource as a deployment target and passing it the service token you’ll later use to authenticate your service.
Up next, let’s sync your codebase!
After setting up the Anzu CLI, specify
export ANZU_ORG_ID=<org id>
export ANZU_PROJECT_ID=<project id>
and run
# replace API with your service name
anzu sync API
You should see a couple of new files pop up! With that done, let’s wire up the generated code to our application entry point. All that’s needed is to call the NewAPIService function, and Anzu takes care of the rest in the background.
package main

import (
	"anzu-service-test/generated"
	"context"
	"fmt"
)

func main() {
	ctx := context.Background()

	// Initialize the generated service, which connects to all
	// configured resources behind the scenes.
	svc, err := generated.NewAPIService(ctx)
	if err != nil {
		panic(err)
	}

	// Query the connected database.
	rows, err := svc.Connections.Database.Query(ctx, "SELECT table_name FROM information_schema.tables")
	if err != nil {
		panic(err)
	}
	defer rows.Close()

	for rows.Next() {
		var tableName string
		err = rows.Scan(&tableName)
		if err != nil {
			panic(err)
		}
		fmt.Println(tableName)
	}

	err = rows.Err()
	if err != nil {
		panic(err)
	}
}
That’s all there is to it! You might be wondering what happened in the background to make this work. There are many steps involved, so let’s break it down into the most important ones.
1. Providers offer cloud resources
With provider resources, actual cloud resources can be represented as a set of operations and state. This works similarly to what you might be accustomed to from IaC tools like Terraform or Pulumi.
In the end, developing a provider involves declaring the resources that the provider will offer, and implementing an interface for every resource.
In our example, we used a PostgreSQL in Docker resource, which, as the name implies, deploys a simple Docker container running Postgres.
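Anzu’s actual provider interface isn’t shown in this post, but conceptually a resource implementation boils down to something like the following sketch (the names here are illustrative, not Anzu’s API):

package provider

import "context"

// State is an opaque bag of resource outputs,
// e.g. a connection string for a database.
type State map[string]any

// Resource is an illustrative take on the per-resource interface a
// provider implements; the real interface will differ in its details.
type Resource interface {
	// Create provisions the resource and returns its initial state.
	Create(ctx context.Context, inputs map[string]any) (State, error)

	// Update reconciles an existing resource with changed inputs.
	Update(ctx context.Context, current State, inputs map[string]any) (State, error)

	// Delete tears the resource down.
	Delete(ctx context.Context, current State) error
}

For the PostgreSQL in Docker resource, Create would start the container and return the connection string as part of the resource state.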
2. Providers can declare Service Connectors
Once the resource is implemented, providers can declare a Service Connector that bridges the gap between resources and services, or, to put it differently, between the cloud and your codebase.
For this to work, the provider developer added the following declaration to the provider configuration as part of the PostgreSQL in Docker resource:
"serviceConnectors": [
{
"kind": "database_pool",
"name": "Database Pool",
"description": "Creates and connects to a database pool using jackc/pgx.",
"stacks": [
"Go"
],
"inputs": [
{
"name": "maxConnections",
"value": {
"kind": "scalar",
"expectedUnderlyingType": "integer",
"rendering": {
"displayName": "Max Connections",
"shortDescription": "The maximum number of connections to allow to the database."
}
}
}
]
}
]
As we can see, the new Service Connector creates a database pool and supports code generation for Go. Regardless of the language the provider itself was written in, provider developers can distribute Service Connectors for all supported languages and frameworks from a single provider codebase.
Configuring a Service Connector yielded some generated code to parse input values such as maxConnections, which are made available at runtime. All the developer had to implement for the database pool was the following:
package docker_postgres_over_ssh

import (
	"context"

	"github.com/anzuhq/sdk/value-sdk-go"
	"github.com/jackc/pgx/v4/pgxpool"
)

func NewDatabasePoolConnection(ctx context.Context, inputs []value.Input, connectedResourceOutputs []value.Output) (*pgxpool.Pool, error) {
	// Parse the Service Connector inputs (e.g. maxConnections)
	// using the generated helper.
	parsedInputs, err := GetDatabasePoolInputs(inputs)
	if err != nil {
		return nil, err
	}

	// Parse the outputs of the connected database resource,
	// which include the connection string.
	parsedOutputs, err := GetDatabasePoolResourceOutputs(connectedResourceOutputs)
	if err != nil {
		return nil, err
	}

	cfg, err := pgxpool.ParseConfig(parsedOutputs.ConnectionString)
	if err != nil {
		return nil, err
	}

	// Apply the optional maxConnections input to the pool config.
	if parsedInputs.MaxConnections != nil {
		cfg.MaxConns = int32(*parsedInputs.MaxConnections)
	}

	pool, err := pgxpool.ConnectConfig(ctx, cfg)
	if err != nil {
		return nil, err
	}

	return pool, nil
}
This is where we can see the power of Service Connectors. We received inputs for the Service Connector and outputs of the connected resource (i.e. the database), and the developer decided to return a pointer to pgxpool.Pool, the connection pool of pgx, a popular Postgres driver for Go. Then, they simply used the generated methods to parse values and connected to the pool.
What might not be immediately obvious is what the developer did not have to do: all values they needed were passed into the function, and validation and parsing were handled by the generated methods. The only real tasks left to the provider developer were deciding which inputs to accept, choosing which type to return, and establishing the database connection from those values.
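To make that concrete, the generated GetDatabasePoolInputs presumably does little more than look up each declared input and check its type. Here’s an illustrative stand-in with simplified types (not the actual Value SDK):

package generated

import "fmt"

// Input is a simplified stand-in for the Value SDK's input type.
type Input struct {
	Name  string
	Value any
}

// DatabasePoolInputs mirrors the maxConnections input declared in the
// Service Connector configuration above.
type DatabasePoolInputs struct {
	MaxConnections *int
}

// GetDatabasePoolInputs validates and parses raw inputs into a typed
// struct, so connector code never handles untyped values directly.
func GetDatabasePoolInputs(inputs []Input) (*DatabasePoolInputs, error) {
	var parsed DatabasePoolInputs
	for _, in := range inputs {
		switch in.Name {
		case "maxConnections":
			n, ok := in.Value.(int)
			if !ok {
				return nil, fmt.Errorf("maxConnections: expected integer, got %T", in.Value)
			}
			parsed.MaxConnections = &n
		}
	}
	return &parsed, nil
}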
This workflow makes it extremely straightforward to create new bindings between cloud resources and codebases, and I’m excited to see what our developers can make possible with Service Connectors.
3. Anzu takes care of the rest
An easily overlooked part of this is how Anzu generates the code developers need and safely transmits values to services behind the scenes. All of this is possible because of the tight integration between our core components: the resource graph holds all information required to generate code, and our Value SDK provides an abstraction for data sent across systems, with built-in support for outputs, configuration values, and other values not known ahead of time.
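As a purely illustrative example (again, not Anzu’s actual types), such a value abstraction might distinguish literals known ahead of time from references that can only be resolved once the resource graph has been deployed:

package valuesketch

// Value is an illustrative abstraction over data passed between systems:
// either a literal known ahead of time, or a reference to a resource
// output that only exists after deployment.
type Value struct {
	Literal   any    // e.g. a configuration value
	OutputRef string // e.g. "database.connectionString"
}

// Resolve returns the literal if present; otherwise it looks the
// reference up in the deployed resource graph's outputs.
func (v Value) Resolve(outputs map[string]any) (any, bool) {
	if v.Literal != nil {
		return v.Literal, true
	}
	out, ok := outputs[v.OutputRef]
	return out, ok
}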
In the end, Anzu is a giant state machine and coordination system for your cloud infrastructure: we take care of deploying your workloads, help you understand what you’re running, and make it easy to connect cloud services like building blocks.
Plans for the future
Services and Service Connectors are the first of many tools that make working with cloud services easier. We’re constantly improving the developer experience to make sure our workflows feel great and involve as little friction as possible, empowering developers to focus on the task at hand.
If this post caught your interest, please check out our product and play around a bit!