Nov 09, 2024

Understanding When to Use Redis

Before joining Inngest earlier this year, I had never used Redis (or its open-source fork, Valkey) or any comparable system. Whenever I planned data persistence for a new service architecture, I would choose a reliable data store like Postgres. I would argue that ACID guarantees and a track record of stability outweigh any shiny technologies promising unlimited scalability or complete flexibility.

It’s awkward to admit, but I actually believed Redis was just a key/value store. More specifically, I never imagined the values stored under those keys could be anything other than plain bytes, let alone powerful data structures. The second I started to read up on Redis, I changed my mind. After spending most of my working hours over the past six months with Redis, I want to present cases for and against using Redis, Valkey, or any related technology.

Before we get into this, let’s consider why we need to persist data, why it sometimes makes sense to choose a data store optimized for specific access patterns, and why Redis is a great match if you’re looking for a high-performance data structure store, both persistent and ephemeral.

Applications are rarely stateless

This probably won’t come as a shock, but it’s worth spelling out: when you’re building a SaaS product, or really any piece of modern software, you’ll need to persist state. This may be anything from data required for operations and user data all the way to bookkeeping data like logs and history.

Not long ago, this state could have lived on a single machine, but nowadays you have to consider fault tolerance, and chances are you need to serve customers in multiple regions around the world. Simply put, one server storing all state in memory doesn’t cut it anymore.

There are many data stores, offering different durability guarantees, data modeling paradigms, scaling options, and other properties. And this is where it gets interesting: How do you choose a data store suitable for your application?

Data structures vs. relations

I grew up in a world of relational database management systems. Tables became my hammer for every problem. This is obviously silly: Relational databases have their use cases, but forcing data into a model it wasn’t made for will cause problems down the road.

Most notably, this means poor ergonomics and performance issues: Traditional relational databases are optimized for index-heavy access patterns, carefully scanning as few rows as possible. Complex query planners optimize data access based on historical statistics. You can select index types, sure, but a lot of performance characteristics are predetermined and leave you little control.

Redis isn’t opinionated on how you structure your data. Instead, it provides a wide range of in-memory data structures: Strings, Hashes (maps, dictionaries), Lists, Sets, Sorted Sets (my favorite), and more.
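
To make this concrete, here’s a minimal sketch using the redis-py client (any client library exposes the same commands); the key names and values are invented for illustration:

    import redis

    # Assumes a local Redis instance; all keys and values here are made up.
    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # String: a plain value under a key
    r.set("greeting", "hello")

    # Hash: a map/dictionary stored under a single key
    r.hset("user:42", mapping={"name": "Ada", "plan": "pro"})

    # List: ordered, push/pop from either end
    r.rpush("recent:signups", "user:42")

    # Set: unordered, unique members
    r.sadd("beta:users", "user:42")

    # Sorted Set: members ordered by a numeric score
    r.zadd("leaderboard", {"user:42": 1337})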

You might wonder why that’s exciting: aren’t those just the basic building blocks of any programming language? You’d be right and wrong: when you’re building applications for a single node, these data structures are omnipresent. Redis lets you keep this simple data layout while allowing you to access and modify the same state from any client, written in any language.

As simple as these data structures sound, they cover almost any use case: Want to implement a queue? Use a list or, if you’re fancy, a sorted set. Want to index data for exact matches? Use a hash for O(1) access. Locks? You’re covered.
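
Here is a sketch of those three use cases with redis-py; the key names, payloads, and the 30-second lock TTL are assumptions for illustration (the Lua example further down shows a fully atomic lock release):

    import uuid

    import redis

    r = redis.Redis(decode_responses=True)

    # Queue: producers push onto a list, workers block-pop from the other end
    r.lpush("jobs", "send-email:42")
    job = r.brpop("jobs", timeout=5)  # ('jobs', 'send-email:42') or None

    # Exact-match index: O(1) field lookup inside a hash
    r.hset("email:lookup", "ada@example.com", "user:42")
    user_id = r.hget("email:lookup", "ada@example.com")

    # Lock: set only if the key doesn't exist yet, with an expiry as a safety net
    token = str(uuid.uuid4())
    if r.set("lock:invoice:42", token, nx=True, ex=30):
        try:
            ...  # do the work while holding the lock
        finally:
            # naive release; a Lua script makes this check-and-delete atomic
            if r.get("lock:invoice:42") == token:
                r.delete("lock:invoice:42")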

If this sounds exciting, you may wonder if there are any downsides. That depends on your requirements. Like most decisions in engineering, there’s rarely one universally good or bad solution.

Choosing the right data store

Engineering is all about tradeoffs, and this applies to selecting the right data store. If we’re building a latency-sensitive production system, we can lay out some hard requirements:

  • Availability: The system must withstand production workloads without downtime. This may require failover instances, multi-zone or multi-region replication, and most definitely automatic failover.
  • Durability: Data must be persisted on disk. If the machine unexpectedly goes down, every stored record must be present when it becomes available again. To avoid losing writes during downtime, see the Availability point above for measures like automatic failover.
  • Horizontal Scaling: Capacity planning is hard, and relying on a single instance can only buy so much time. Zero-downtime instance resizing is often impossible. Adding new nodes to the system must be easy and downtime during potential rebalancing operations must be kept to a minimum.

There are other requirements, including reliable disaster recovery and the remaining ACID properties (atomicity, consistency, isolation).

Redis is interesting in that it offers atomic operations on a global keyspace, supports durability with snapshots (RDB) and an append-only file (AOF), and allows horizontal scaling by sharding the global keyspace across a distributed cluster. You can even write Lua scripts to run multiple Redis commands as a single atomic operation.
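
As an example of the Lua angle, releasing a lock safely means deleting the key only if it still holds our token, and the GET and DEL have to happen as one atomic step. A sketch with redis-py and invented key and token names; the script is the well-known compare-and-delete idiom:

    import uuid

    import redis

    r = redis.Redis(decode_responses=True)

    # The whole script executes atomically on the server: no other command can
    # run between the GET and the DEL.
    RELEASE_LOCK = """
    if redis.call('GET', KEYS[1]) == ARGV[1] then
        return redis.call('DEL', KEYS[1])
    end
    return 0
    """

    token = str(uuid.uuid4())
    r.set("lock:invoice:42", token, nx=True, ex=30)

    release = r.register_script(RELEASE_LOCK)
    released = release(keys=["lock:invoice:42"], args=[token])  # 1 if deleted, 0 otherwise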

Yet, there are downsides: Redis is, at its core, single-threaded. And even though individual operations are really fast, you’ll inevitably run into CPU bottlenecks if you’re unable to distribute your data across shards. Horizontal scaling only works through sharding, but once you distribute your keys across different slots, you lose atomicity: Multi-slot operations are prohibited, and that includes Lua scripts.
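
To illustrate the limitation: every key maps to one of 16,384 hash slots, and a multi-key command (or Lua script) that spans slots is rejected with a CROSSSLOT error. Hash tags, where only the part of the key inside {} is hashed, are the usual workaround for co-locating related keys, at the cost of pinning them to one shard. A sketch against a hypothetical cluster-enabled node:

    import redis

    # Assumes a cluster-enabled Redis node; the port and key names are made up.
    r = redis.Redis(port=7000, decode_responses=True)

    # Different keys usually land in different slots, so commands or Lua scripts
    # touching both would fail with CROSSSLOT on a sharded cluster.
    r.execute_command("CLUSTER", "KEYSLOT", "user:1:profile")
    r.execute_command("CLUSTER", "KEYSLOT", "user:2:profile")

    # With hash tags, only "user:1" is hashed, so these keys share a slot and
    # can be used together in one atomic operation.
    r.execute_command("CLUSTER", "KEYSLOT", "{user:1}:profile")
    r.execute_command("CLUSTER", "KEYSLOT", "{user:1}:settings")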

I’m really excited about the release of Valkey 8: The Valkey maintainers have put in lots of effort to implement I/O multithreading while keeping execution single-threaded and sequential in nature, which boosts throughput and reduces latency across the board.


Redis and its derivatives are incredibly versatile data stores, powering applications both small and large in scale. If you’re aware of the pitfalls of scaling with clustering and sharding, you can get really far with this single component.