elasticsearch · search · postgres

When to Add Elasticsearch to Your Stack

Full-text search is a solved problem — but not by PostgreSQL. Here's when Elasticsearch becomes necessary and how to wire it up alongside your existing database.

9 min read · 22 April 2026

At some point, almost every application needs search. Users want to find products, articles, people, or records by typing words — and they expect it to be fast, forgiving of typos, and ranked by relevance.

PostgreSQL can do search. But it wasn’t built for it. Elasticsearch was.

Here’s how to know when you’ve outgrown PostgreSQL search, and how to add Elasticsearch to your stack when that moment comes.

What PostgreSQL search looks like

PostgreSQL has two main approaches to full-text search.

The first is LIKE queries:

SELECT * FROM products
WHERE name LIKE '%wireless headphones%';

This works for exact substring matches but has serious limitations. It can’t use indexes efficiently, it doesn’t handle typos, it doesn’t rank results by relevance, and it becomes slow on large tables.
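You can see the problem with EXPLAIN. With only a plain B-tree index on name, a leading-wildcard pattern can't use the index at all, so PostgreSQL falls back to checking every row (a trigram index via pg_trgm can help, but that's a workaround, not a fix):

EXPLAIN SELECT * FROM products WHERE name LIKE '%wireless headphones%';
--  Seq Scan on products
--    Filter: (name ~~ '%wireless headphones%'::text)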

The second is PostgreSQL’s built-in full-text search with tsvector and tsquery:

SELECT *, ts_rank(search_vector, query) AS rank
FROM products,
     to_tsquery('english', 'wireless & headphones') query
WHERE search_vector @@ query
ORDER BY rank DESC;

This is genuinely good. It supports stemming, ranking, and can use GIN indexes for performance. For many applications, this is enough.
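For the query above to work, you need a tsvector column and a GIN index. A minimal setup, assuming a products table with name and description columns (generated columns require PostgreSQL 12+):

-- Maintain the search vector automatically as a generated column
ALTER TABLE products
  ADD COLUMN search_vector tsvector
  GENERATED ALWAYS AS (
    to_tsvector('english', coalesce(name, '') || ' ' || coalesce(description, ''))
  ) STORED;

-- GIN index so @@ matches are index lookups, not sequential scans
CREATE INDEX products_search_idx ON products USING GIN (search_vector);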

The limitations appear when you need:

  • Typo tolerance (“wireles headphones” should still return results)
  • Fuzzy matching and phonetic similarity
  • Autocomplete and search-as-you-type
  • Complex relevance tuning (boosting certain fields, recent results, popularity signals)
  • Faceted search (filter by category, price range, rating simultaneously)
  • Search across multiple data types and sources

When those requirements appear, it’s time for Elasticsearch.

What Elasticsearch actually does differently

Elasticsearch is built entirely around one problem: finding relevant documents quickly. Everything about its architecture is optimised for this.

It uses an inverted index — a data structure that maps every word in your corpus to the documents containing it. Finding all documents containing “wireless” is a single index lookup, not a table scan.
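The idea fits in a few lines of JavaScript. This is a toy sketch (real Lucene indexes are far more sophisticated on disk), but it shows why lookup beats scanning:

```javascript
// Toy inverted index: token -> Set of document ids
function buildInvertedIndex(docs) {
  const index = new Map();
  for (const [id, text] of Object.entries(docs)) {
    for (const token of text.toLowerCase().split(/\W+/).filter(Boolean)) {
      if (!index.has(token)) index.set(token, new Set());
      index.get(token).add(id);
    }
  }
  return index;
}

const docs = {
  1: 'Wireless headphones with noise cancelling',
  2: 'Wired headphones',
  3: 'Wireless keyboard',
};
const index = buildInvertedIndex(docs);

// Finding every document containing "wireless" is one Map lookup
console.log([...index.get('wireless')]); // → ['1', '3']
```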

It handles typos natively through fuzzy matching. A search for “wireles headphones” returns results for “wireless headphones” automatically, within a configurable edit distance.
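In the query DSL, that's a fuzziness parameter on a match query. Sketched as the request body you'd pass to the client (field names here are illustrative):

```javascript
// Typo-tolerant search: fuzziness 'AUTO' picks an edit distance from
// term length (exact for very short terms, up to 2 edits for long ones)
const fuzzySearch = {
  query: {
    match: {
      name: {
        query: 'wireles headphones', // note the typo
        fuzziness: 'AUTO',
      },
    },
  },
};

// e.g. await esClient.search({ index: 'products', ...fuzzySearch });
```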

It supports relevance scoring out of the box. Results are ranked by how well they match the query, taking into account term frequency, field weighting, and recency.

It handles autocomplete elegantly with completion suggesters and edge n-gram tokenizers — the kind of instant search-as-you-type experience users expect from modern applications.
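Edge n-grams are simply the leading prefixes of each token; indexing them is what makes prefix matching a cheap lookup at query time. A toy version of what an edge_ngram tokenizer emits (the real one is configured declaratively in the index mapping):

```javascript
// Emit the leading substrings of a token between minGram and maxGram
// characters, mirroring what an edge_ngram tokenizer stores at index time
function edgeNgrams(token, minGram = 2, maxGram = 10) {
  const grams = [];
  for (let n = minGram; n <= Math.min(maxGram, token.length); n++) {
    grams.push(token.slice(0, n));
  }
  return grams;
}

console.log(edgeNgrams('wireless', 2, 5));
// → ['wi', 'wir', 'wire', 'wirel']
```

Because every prefix is indexed up front, typing "wir" matches "wireless" without any wildcard scanning.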

The architecture: Elasticsearch alongside PostgreSQL

You don’t replace PostgreSQL with Elasticsearch. PostgreSQL remains your source of truth — it stores your data with full ACID guarantees. Elasticsearch is a search index — a read-optimised copy of your data, structured for fast retrieval.

The flow looks like this:

Write path:
  Application → PostgreSQL (source of truth) → sync process (event / CDC) → Elasticsearch (search index)

Read path:
  Search query → Elasticsearch (ranked results) → document IDs → PostgreSQL (full records)

When a user searches, Elasticsearch returns the IDs of matching documents. You can either return the Elasticsearch documents directly (if they contain all needed fields) or use those IDs to fetch full records from PostgreSQL.
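The two-step variant, sketched with injected clients so the shape is clear. The field names are assumptions; the clients follow the Elasticsearch JS client v8 and node-postgres styles:

```javascript
// Search ES for ids, then hydrate full rows from PostgreSQL,
// preserving Elasticsearch's relevance order
async function searchProducts(es, db, term) {
  const result = await es.search({
    index: 'products',
    query: { match: { name: term } },
    _source: false, // ids only; the full records live in PostgreSQL
  });
  const ids = result.hits.hits.map((hit) => hit._id);
  if (ids.length === 0) return [];

  const { rows } = await db.query(
    'SELECT * FROM products WHERE id = ANY($1)',
    [ids]
  );

  // SQL gives no ordering guarantee; re-sort rows to match the ES ranking
  const byId = new Map(rows.map((row) => [String(row.id), row]));
  return ids.map((id) => byId.get(id)).filter(Boolean);
}
```

The re-sort at the end matters: without it, the relevance ranking Elasticsearch computed is thrown away by the database fetch.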

Keeping them in sync

The critical question is: how do you keep Elasticsearch in sync when PostgreSQL data changes?

Option 1 — Synchronous dual write. When you write to PostgreSQL, also write to Elasticsearch in the same request:

async function createProduct(data) {
  // Assuming a node-postgres style client: query() resolves to { rows }
  const { rows: [product] } = await db.query(
    'INSERT INTO products (name, description, price) VALUES ($1, $2, $3) RETURNING *',
    [data.name, data.description, data.price]
  );

  await esClient.index({
    index: 'products',
    id: String(product.id),
    document: {
      name: product.name,
      description: product.description,
      price: product.price,
      created_at: product.created_at
    }
  });

  return product;
}

Simple but fragile — if the Elasticsearch write fails, your index is out of sync.

Option 2 — Event-based sync. Write to PostgreSQL, emit an event, and have a background worker index into Elasticsearch asynchronously. More resilient but adds infrastructure.
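A minimal sketch of the worker side, with an in-memory array standing in for whatever broker you use. The queue, event shape, and retry policy here are all assumptions:

```javascript
// In-memory stand-in for a real queue (SQS, RabbitMQ, Kafka, ...)
const queue = [];
function emit(event) {
  queue.push(event);
}

// Background worker: drain events and index each one into Elasticsearch,
// re-queueing on failure so a transient ES outage doesn't lose updates
async function drain(indexFn) {
  while (queue.length > 0) {
    const event = queue.shift();
    try {
      // e.g. await esClient.index({ index: 'products', id: event.id, document: event.doc })
      await indexFn(event);
    } catch (err) {
      queue.push(event); // naive retry; production wants backoff + a dead-letter queue
      break;
    }
  }
}

// After the PostgreSQL write succeeds:
// emit({ id: product.id, doc: { name: product.name, price: product.price } });
```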

Option 3 — Change Data Capture (CDC). Tools like Debezium watch PostgreSQL’s write-ahead log and stream changes to Elasticsearch automatically. The most robust approach for production systems.
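For orientation, registering a Debezium PostgreSQL connector with Kafka Connect looks roughly like this; the values are illustrative, and the full option list lives in the Debezium docs. A sink connector (or a small consumer) then writes the change events into Elasticsearch:

```javascript
// Connector config posted to Kafka Connect's REST API
const connectorConfig = {
  name: 'products-connector',
  config: {
    'connector.class': 'io.debezium.connector.postgresql.PostgresConnector',
    'plugin.name': 'pgoutput',            // logical decoding plugin built into PG 10+
    'database.hostname': 'localhost',
    'database.port': '5432',
    'database.user': 'replicator',
    'database.password': 'secret',
    'database.dbname': 'app',
    'topic.prefix': 'app',
    'table.include.list': 'public.products',
  },
};
```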

For most teams starting out, Option 1 or 2 is fine. Move to CDC when you need guaranteed consistency at scale.

When to add Elasticsearch — the checklist

Add Elasticsearch when at least one of these is true:

  • Users are complaining that search doesn’t find what they’re looking for
  • You need typo tolerance and your PostgreSQL pg_trgm solution feels like a hack
  • You need autocomplete / search-as-you-type
  • You need faceted filtering (filter results by multiple attributes simultaneously)
  • Your search queries are causing high CPU load on PostgreSQL
  • You’re searching across multiple tables and the query complexity is getting unmanageable

Don’t add it speculatively. Elasticsearch adds operational complexity — another system to run, monitor, and keep in sync. Earn it by hitting one of the above.

A simpler alternative: Meilisearch

If Elasticsearch feels like too much for your current scale, consider Meilisearch. It’s a simpler search engine with excellent typo tolerance and relevance ranking, much easier to operate, and open source. It covers 80% of use cases with 20% of the operational complexity.

The tradeoff: less power and flexibility than Elasticsearch for advanced use cases.

What’s next

Once you have Postgres, Redis, and Elasticsearch, you have the foundation that covers most production applications. The next frontier — especially relevant in 2026 — is adding AI-powered search and recommendations with vector databases.