Nov 12, 2020

Boring Technologies, Reliable Foundations

In the ever-changing world of software engineering, keeping track of, let alone specializing in, a broad range of areas and technologies has become increasingly difficult in recent years.

While a lot of web developers will remember the days of uploading their assets to the production server via FTP, we've moved on to a future of simply pushing our code and letting automation handle the rest. At least that's how it should ideally be. But this simplicity can be deceiving, as the number of moving parts underneath has grown exponentially.

Building a system from end to end today requires you to pick and learn one of the available mobile or frontend technologies, build and operate backend infrastructure, interface with different public cloud providers, and then think about alerting and monitoring once you plan to move to production.

Surely, there's an alternative path of keeping it simple first and scaling once it's needed, but even the simple systems of today require more thought, as both we and our customers have grown to expect more performance, availability, and observability from our software.

Especially in DevOps, with its endless swaths of Kubernetes YAML configuration, we've invented a lot of complexity overhead in the past couple of years, to the point of needing yet another set of abstractions on top.
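To illustrate the overhead: even the minimal case of running one stateless container behind a stable address already takes two Kubernetes resources and a fair amount of boilerplate. The names, labels, and image below are hypothetical, and a production setup would add probes, resource limits, and more:

```yaml
# Hypothetical minimal setup: one stateless web container plus a Service
# in front of it. Real deployments typically need health checks, resource
# requests/limits, ingress rules, and secrets on top of this.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0.0   # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

Roughly thirty lines of configuration for "run this container twice and route traffic to it", which is why higher-level abstractions keep appearing.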

Starting a software project with all options on the table can feel exhausting; there's a lot to think about. And going deep into everything that seems interesting will surely take more time than we have at hand, so we need to make a trade-off.

I think we should pick core technologies in each area that let us move fast and ship quality software. If those technologies are proven (you can also call them "boring"), this will de-risk your operations: picking tools that have been around for some time (e.g. PostgreSQL) allows you to build on a solid foundation.

Once we've identified a set of technologies we want to focus on, we can go deep. In my case, I've picked the following tools, which I've been using for the last couple of years and plan to keep using in the years ahead.

I've had the opportunity to use most of the aforementioned technologies in production systems over the years, which has given me a deeper understanding of how they perform at scale.

For languages and environments, I expect a couple of properties, for example, that I can easily start up a debugger to see what's going on if things don't work. Often, developer experience is just as important as, if not more important than, other reasons to choose a specific piece of tech. In the end, we want to be productive, not put out fires left and right while blindfolded.

One very reassuring aspect of choosing technologies this way is that every other piece can be exchanged freely. While we might like our current monitoring solution, for the next project we can switch to a different one with ease.

Betting on a solid foundation and choosing the best tools for your current situation allows you to move quickly while still trying out new things, without feeling overrun by experimental technology that undermines stability.

When building a project, you'll most likely have more important things on your schedule than thinking about which technology to choose, so going with what you know already is hugely beneficial.

Another misconception I've repeatedly come across is that a specific technology does not scale, so we should pick another tool. Too often, there's more than one bottleneck in a system, and chances are it's not the tool itself. Make sure you've ruled out the other possible causes before you swap out a technology; otherwise you'll end up in the same place.

And if you still hit a case where something didn't scale, make sure to document the context and the decision you made in that situation; it could be quite useful when you hit a similar roadblock in the future.