Jul 24, 2022

Reducing Database Round-Trips with Boring Queries

Note: This post is the first part of a series exploring the benefits and drawbacks of engineering in the application layer versus the database layer.

If you’ve ever wondered whether to build a feature with a database query or in your application layer, this post is for you.

If we can choose between fetching just the data we need for a specific use case and fetching all items, we would probably always choose the former, simply because it feels more elegant. But in some cases, this may not be the best decision.

Applications usually grow over time, and we keep adding more and more of these targeted queries: step by step, we perform more requests over the network, and in most cases the individual queries are fast. If we look at the big picture, though, we find that we send hundreds of queries that all operate on the same data set.

Imagine you are building a typical SaaS application with a role management system where users can create custom roles. You want to display a list of those roles in your application, and you need to count the custom roles to check whether a service quota has been reached.

Now, you might be tempted to create two functions:

```
getCustomRoles(...): CustomRoles[]
countCustomRoles(...): number
```
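
To make this concrete, here is a minimal sketch of what the two functions might look like. The Db interface, the custom_roles table, and the column names are all hypothetical; the point is simply that each function pays its own network round-trip.

```typescript
// Hypothetical row shape for a custom role.
interface CustomRole {
  id: string;
  tenantId: string;
  name: string;
}

// Assumed minimal database client: each call sends one statement over the network.
interface Db {
  query<T>(sql: string, params: unknown[]): Promise<T[]>;
}

// Fetches all custom roles of a tenant -- one round-trip.
async function getCustomRoles(db: Db, tenantId: string): Promise<CustomRole[]> {
  return db.query<CustomRole>(
    'SELECT id, tenant_id AS "tenantId", name FROM custom_roles WHERE tenant_id = $1',
    [tenantId]
  );
}

// Counts the custom roles of a tenant -- a second, separate round-trip.
async function countCustomRoles(db: Db, tenantId: string): Promise<number> {
  const rows = await db.query<{ count: string }>(
    'SELECT COUNT(*) AS count FROM custom_roles WHERE tenant_id = $1',
    [tenantId]
  );
  return Number(rows[0].count);
}
```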

Easy, right? But what if, in the places where we count custom roles, we always load the full list anyway? The two calls don’t necessarily have to live in the same part of the code; just assume that we need both the roles and their count within the same transaction.

In that case, you could probably refactor countCustomRoles to use the already-fetched list of roles instead of sending a separate COUNT query to the database, right?
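
Sticking with the hypothetical sketch above, the refactoring could look like the following: the COUNT query disappears and the number is derived from the list the application already holds (the quota check is just an illustrative consumer).

```typescript
// Derives the count from roles that are already in memory -- no extra round-trip.
function countCustomRolesInMemory(roles: CustomRole[]): number {
  return roles.length;
}

// Illustrative consumer: both the list and the quota check come from one query.
async function loadRolesWithQuota(db: Db, tenantId: string, quota: number) {
  const roles = await getCustomRoles(db, tenantId); // single round-trip
  const count = countCustomRolesInMemory(roles);    // computed in memory, no query
  return { roles, count, quotaReached: count >= quota };
}
```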

If this doesn’t feel right to you, I totally get it. Aren’t we overthinking this a bit? Surely simple queries like counting can’t be the problem here, right? Besides, it’s super fast on my local machine.

Unfortunately, we have little to no control over the network that sits between our application and the database. If we assume, for the sake of argument, that each database query incurs 5 ms of round-trip time (RTT), sending 100 queries sequentially adds roughly 100 × 5 ms = 500 ms of latency before any actual query processing even happens.
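
A back-of-the-envelope simulation makes this tangible. The snippet below is purely illustrative: a setTimeout of 5 ms stands in for the network round-trip, and query execution time is ignored entirely.

```typescript
// Simulates a query whose only cost is a ~5 ms network round-trip.
const RTT_MS = 5;
const fakeQuery = () => new Promise<void>((resolve) => setTimeout(resolve, RTT_MS));

async function main() {
  const start = Date.now();
  for (let i = 0; i < 100; i++) {
    await fakeQuery(); // one round-trip per query, paid sequentially
  }
  console.log(`100 sequential queries: ~${Date.now() - start} ms`); // roughly 500 ms

  const single = Date.now();
  await fakeQuery(); // fetching everything at once pays the round-trip only once
  console.log(`1 query: ~${Date.now() - single} ms`); // roughly 5 ms
}

main();
```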

If your application relies on a substantial number of database requests, strategies like reusing what’s already available in the application layer, even if it means fetching more data ahead of time, can help reduce network delay.
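
One possible shape for this, sketched here as a small cache scoped to a single request or transaction and reusing the hypothetical types from the earlier example, is to pay the round-trip once and answer later calls from memory:

```typescript
// Request-scoped cache: the first caller per tenant pays the round-trip,
// later callers within the same request reuse the result from memory.
class RoleCache {
  private cache = new Map<string, Promise<CustomRole[]>>();

  constructor(protected db: Db) {}

  getCustomRoles(tenantId: string): Promise<CustomRole[]> {
    let roles = this.cache.get(tenantId);
    if (!roles) {
      roles = getCustomRoles(this.db, tenantId); // the free function from the first sketch
      this.cache.set(tenantId, roles);
    }
    return roles;
  }

  async countCustomRoles(tenantId: string): Promise<number> {
    return (await this.getCustomRoles(tenantId)).length; // derived in memory
  }

  // Drops the cached list so the next read fetches fresh data.
  invalidate(tenantId: string): void {
    this.cache.delete(tenantId);
  }
}
```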

Of course, this approach is not without drawbacks: The more data you fetch, the larger the memory footprint grows. Even when data is kept just for the duration of a transaction, changes that affect cached data must be propagated properly to avoid inconsistencies, so you have to handle cache invalidation.
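
Staying with the sketch above, the simplest form of invalidation is to drop the cached entry whenever a write could change the role list, so that the next read fetches fresh data. Anything more shared or longer-lived than a per-request cache needs a more careful strategy.

```typescript
// Any write that can change the role list invalidates the cached entry,
// so the next read pays one round-trip instead of serving stale data.
class RoleStore extends RoleCache {
  async createCustomRole(tenantId: string, name: string): Promise<void> {
    await this.db.query(
      "INSERT INTO custom_roles (tenant_id, name) VALUES ($1, $2)",
      [tenantId, name]
    );
    this.invalidate(tenantId); // cached list no longer reflects the database
  }
}
```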

Beyond performance, caching data in the application layer adds complexity, so you should critically evaluate whether you’ve reached the point where shaving off milliseconds justifies complexity that will slow you down later. On the flip side, there are usually many small improvements like this that go a long way in terms of performance gains.