
Postgres + Redis: The Most Common Combo Explained

How to add Redis to an existing PostgreSQL application, when it makes sense, and the cache-aside pattern that most production apps use.

8 min read · 23 April 2026

If you’re going to take one step into polyglot persistence, this is the one. Postgres and Redis together is the most common database combination in production — not because it’s trendy, but because the two databases solve fundamentally different problems that almost every serious application eventually encounters.

PostgreSQL is your source of truth. Redis is your speed layer. Together they cover the vast majority of what most applications need.

Why PostgreSQL alone eventually isn’t enough

PostgreSQL is extraordinary. It handles complex queries, enforces data integrity, supports JSON, full-text search, and can scale further than most teams will ever need. For a huge number of applications, Postgres alone is the right answer for a long time.

But two specific problems appear as applications grow:

Repeated expensive reads. Your homepage loads user profiles, product listings, and configuration data. Each page load fires the same queries against the same data that hasn’t changed in hours. PostgreSQL executes them faithfully every time — but there’s no reason to. The data is the same. You’re paying the round-trip and query execution cost for no gain.

Session storage. Storing sessions in PostgreSQL works, but it creates a high-volume, low-value write workload. Every request reads the session. Sessions expire and need cleaning up. The data is temporary by nature. PostgreSQL’s durability guarantees are overkill for something designed to be thrown away.

Redis solves both problems elegantly.

What Redis actually is

Redis is an in-memory data store. Everything lives in RAM, which makes reads and writes orders of magnitude faster than a disk-based database like PostgreSQL.

It supports expiry natively — every key can have a time-to-live (TTL) after which it disappears automatically. This makes it perfect for anything temporary: sessions, rate limiting counters, cached responses, queues.

The tradeoff is that RAM is limited and expensive. You don’t put everything in Redis — only the data that benefits from being fast and temporary.
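Because memory is finite, Redis lets you cap usage and choose what happens when the cap is hit. A minimal redis.conf sketch for a pure-cache deployment might look like this (the 256mb figure is an arbitrary example — size it to your workload):

```conf
# Cap memory usage; without a limit Redis will keep growing until the OS steps in
maxmemory 256mb

# When full, evict the least-recently-used keys (any key, not just ones with a TTL)
maxmemory-policy allkeys-lru
```

With `allkeys-lru`, a full cache quietly drops the data you haven't touched in the longest time — exactly the behaviour you want for a cache, and exactly the behaviour you don't want for a source of truth.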

The cache-aside pattern

The most common way to use Redis alongside PostgreSQL is called the cache-aside pattern. The application code manages the cache explicitly:

async function getUserById(id) {
  const cacheKey = `user:${id}`;

  // 1. Check Redis first
  const cached = await redis.get(cacheKey);
  if (cached) {
    return JSON.parse(cached);
  }

  // 2. Cache miss — fetch from PostgreSQL
  const { rows } = await db.query(
    'SELECT * FROM users WHERE id = $1',
    [id]
  );
  const user = rows[0];
  if (!user) return null; // don't cache a row that doesn't exist

  // 3. Store in Redis with a 1 hour TTL
  // (ioredis-style arguments; node-redis v4 takes an options object: { EX: 3600 })
  await redis.set(cacheKey, JSON.stringify(user), 'EX', 3600);

  return user;
}

The logic is simple: check Redis first. If the data is there, return it immediately. If not, fetch from PostgreSQL, store the result in Redis, and return it. The next request for the same data hits Redis.

This pattern reduces PostgreSQL load dramatically for read-heavy workloads. Profile pages, product listings, configuration data, public API responses — anything that is read frequently but changes infrequently is a good candidate.

Session storage with Redis

Sessions are an even simpler use case. Store the session token as the key and the session data as the value, with a TTL matching your session expiry:

// Store session
await redis.set(
  `session:${token}`,
  JSON.stringify({ userId: 123, role: 'admin' }),
  'EX',
  86400 // 24 hours
);

// Read session
const session = await redis.get(`session:${token}`);

// Delete on logout
await redis.del(`session:${token}`);

When the TTL expires, Redis cleans it up automatically. No scheduled jobs, no DELETE queries on an aging sessions table.

When to add Redis

You don’t need Redis from day one. Add it when one of these becomes true:

  • Your database CPU is high and your query logs show the same queries running repeatedly with the same results
  • You’re storing sessions in PostgreSQL and it’s causing write load
  • Your API response times are inconsistent and profiling points to repeated database reads
  • You’re implementing rate limiting and need fast atomic increment operations

If none of these apply, Postgres alone is fine. Don’t add complexity before it earns its place.
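The rate-limiting case from the list above is worth a quick sketch. With Redis you would lean on the atomic INCR command plus a TTL (fixed-window limiting); in the sketch below the two Redis calls are replaced by a minimal in-memory stand-in so the logic runs without a server — in production you would issue `await redis.incr(key)` and `await redis.expire(key, seconds)` instead:

```javascript
// Minimal in-memory stand-in for the two Redis commands the limiter needs.
// In production: `await redis.incr(key)` and `await redis.expire(key, seconds)`.
const store = new Map();

async function incr(key) {
  const next = (store.get(key) ?? 0) + 1;
  store.set(key, next);
  return next;
}

async function expire(key, seconds) {
  // Real Redis deletes the key when the TTL lapses; a timer approximates that here.
  setTimeout(() => store.delete(key), seconds * 1000).unref?.();
}

// Fixed-window rate limiter: at most `limit` requests per `windowSeconds` per user.
async function allowRequest(userId, limit = 5, windowSeconds = 60) {
  const window = Math.floor(Date.now() / (windowSeconds * 1000));
  const key = `ratelimit:${userId}:${window}`;

  const count = await incr(key);
  if (count === 1) {
    // First hit in this window — set the TTL so the counter cleans itself up
    await expire(key, windowSeconds);
  }
  return count <= limit;
}
```

Because INCR is atomic in Redis, two concurrent requests can never read the same count — which is exactly the property the last bullet asks for, and the reason this belongs in Redis rather than application memory.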

Cache invalidation — the hard part

The classic joke in computer science is that there are only two hard things: cache invalidation and naming things. With Redis and PostgreSQL, you need a strategy for keeping them in sync.

The simplest approach: invalidate on write. When you update a user in PostgreSQL, delete their Redis key:

async function updateUser(id, data) {
  // Update PostgreSQL
  await db.query(
    'UPDATE users SET name = $1 WHERE id = $2',
    [data.name, id]
  );

  // Invalidate cache
  await redis.del(`user:${id}`);
}

The next read will be a cache miss, fetch fresh data from PostgreSQL, and repopulate Redis. Simple, reliable, and easy to reason about.

Setting it up locally

The postgres-redis-starter template on our GitHub gives you a complete local setup with Docker Compose. One command starts both databases:

docker compose up

Everything — PostgreSQL, Redis, connection boilerplate, and the cache-aside pattern — is pre-configured and ready to use.
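If you would rather wire it up by hand, a docker-compose.yml along these lines covers the same ground (image tags, ports, and the password here are illustrative assumptions, not necessarily what the starter template ships):

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # illustrative only — use a real secret
    ports:
      - "5432:5432"

  redis:
    image: redis:7
    ports:
      - "6379:6379"
```

Run `docker compose up` in the same directory and both databases are listening on their default ports.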

What’s next

Postgres and Redis together cover the majority of production use cases. The next step for most applications is search — which is where Elasticsearch comes in.