While developing production services in Go over the past year, I've seen the same handful of bugs sneak in repeatedly, each caused by a well-known pitfall in the Go language and ecosystem. Fixing these issues was strikingly simple in most cases, but debugging them tended to be a painful journey that led to the same culprit more than once.
I figured I should collect and share the points I identified, so we can spend more time building applications and less time puzzling over randomly freezing services, non-deterministic test snapshots, and other delightful scenarios.
♻️ Ensuring resources are released properly
When consuming a resource, whether it's a database pool client or a file that you've opened to be read, you'll want to make sure that it is closed after all work on it is completed. The common pattern here is to use Go's defer keyword to ensure that the logic that returns a client to the pool or closes an open file runs in every case, whether your function returns an error or not.
```go
// Whatever happens in this function, the file we open will be closed
func doWork() error {
	// Open our file, handle potential error
	file, err := os.Open("/tmp/dat")
	if err != nil {
		return err
	}
	// Make sure the file is closed once we return
	defer file.Close()

	// ... work with the file ...
	return nil
}
```
When using pgx to perform database operations, the classic example is to acquire a client from a connection pool and ensure it is released after use, because otherwise your application will simply block and freeze once all pool clients are used up.
```go
func doWork(ctx context.Context, pool *pgxpool.Pool) error {
	// We'll acquire a pool client and make sure it is released again
	client, err := pool.Acquire(ctx)
	if err != nil {
		return err
	}
	defer client.Release() // Always make sure to call this!

	// ... use the client ...
	return nil
}
```
🕰 Multiple dependency versions and type equality
In this case, we dive a bit into the way Go and Go modules work. Let's say we're using v1 of a library to classify food, and the library returns a set of possible options like this:
```go
import "food-library/v1"

// Detect whether the image bytes submitted contain coffee
func IsCoffee(input []byte) (bool, error) {
	// The library will return a special type for the food it detected
	result, err := food.Detect(input)
	if err != nil {
		return false, err
	}
	// We can switch over the type to see if the input contains coffee
	switch result.(type) {
	case *food.Coffee:
		// This is the case we want to handle
		return true, nil
	default:
		return false, nil
	}
}
```
In this case, we're using the types provided by the library to detect whether an arbitrary piece of data contains coffee, and it works great. But there are a couple of ways this could evolve. Let's say we move the detection itself into another file
```go
import "food-library/v2"

// Detect the food contained in the submitted image bytes
func DetectFood(input []byte) (interface{}, error) {
	// The library will return a special type for the food it detected
	result, err := food.Detect(input)
	if err != nil {
		return nil, err
	}
	return result, nil
}
```
```go
import "food-library/v1"

// Detect whether the image bytes submitted contain coffee
func IsCoffee(input []byte) (bool, error) {
	// We'll call our own function now
	result, err := DetectFood(input)
	if err != nil {
		return false, err
	}
	// We can switch over the type to see if the input contains coffee
	switch result.(type) {
	case *food.Coffee:
		// This is the case we want to handle
		return true, nil
	default:
		return false, nil
	}
}
```
Did you spot it? In our DetectFood function, we started to consume v2 of the library, while the IsCoffee logic still uses v1. What do you think will happen?
This small change will result in every type check failing, even when we have in fact detected coffee. Since the second major version of the detection library lives completely isolated from the first, Go does not regard their types as equal either: a type assertion against a v1 type will never match a value of the corresponding v2 type.
This also comes into effect when a library you use under the hood depends on a different version of a shared library. Say you use an abstraction layer for your database operations built on top of pgx v3, but your application code tries to handle Postgres errors using pgx v4. When your abstraction layer returns the error, it will be a v3 error value, while you're expecting one from v4, so your error handling silently never matches.
Another example is go-jwks, which uses go-jose under the hood. If you use the JWKS client to retrieve a JSON Web Key Set, you'll receive the type from github.com/square/go-jose. This means your application has to use that same version of go-jose; you can't just use gopkg.in/square/go-jose.v2 as recommended, as the types wouldn't match up.
🗂 Relying on Map Key order
Last but not least, when you rely on the order of elements, make sure not to reach for maps in Go. Map iteration order is deliberately randomized by the runtime, so any code that ranges over a map — collecting keys, building output, serializing by hand — can produce different results every time you run it. Especially in cases like test snapshots, this can become extremely frustrating. You'll have to resort to solutions like sorting the keys before iterating on the consuming side, or using other types such as slices and structs, which keep their order.
This concludes my list for now. Chances are you've run into one of these problems already, and if not, you'll surely stumble over one at some point. If you've got any questions, suggestions, or feedback in general, don't hesitate to reach out on Twitter or by mail.