Supabase Postgres config tuning often boils down to understanding that shared_buffers and max_connections aren’t independent knobs, but rather a delicate dance where one’s optimal setting directly impacts the other’s feasibility.

Let’s see this in action. Imagine a Supabase project with a Postgres instance that’s starting to feel sluggish under load. We’re seeing slow query times and occasional connection timeouts.

Here’s a typical postgresql.conf snippet from a moderately sized instance, focusing on memory and connections:

shared_buffers = 1GB
max_connections = 150
work_mem = 4MB
maintenance_work_mem = 64MB
effective_cache_size = 3GB

And here’s what the SHOW command in psql or the Supabase SQL Editor would reveal for these parameters:

SHOW shared_buffers;
-- Output: 1GB

SHOW max_connections;
-- Output: 150

SHOW work_mem;
-- Output: 4MB

SHOW maintenance_work_mem;
-- Output: 64MB

SHOW effective_cache_size;
-- Output: 3GB

The core problem Supabase users often face is that they see max_connections limited by their instance’s RAM, and shared_buffers consuming a significant chunk of that RAM. When shared_buffers is set too high, it leaves insufficient memory for the operating system and other Postgres processes, leading to swapping and poor performance. Conversely, setting max_connections too high, even if shared_buffers is reasonable, can exhaust available RAM as each connection requires its own memory allocation.
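You can inspect the whole memory-related configuration in one query; pg_settings is a standard system view, so this works in psql or the Supabase SQL Editor:

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'max_connections', 'work_mem',
               'maintenance_work_mem', 'effective_cache_size');
-- Note: the setting column is reported in the raw unit shown in the
-- unit column (e.g. shared_buffers is counted in 8kB blocks).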

Here are the common culprits and their fixes:

  1. shared_buffers too high for available RAM.

    • Diagnosis: Check your instance’s RAM. For example, a Micro compute instance has 1GB of RAM. If shared_buffers is set to 1GB, that leaves nothing for the OS or other processes.
    • Check: Run SHOW shared_buffers; and compare it to your instance’s total RAM.
    • Fix: Reduce shared_buffers. A common recommendation is 25% of total RAM. For a 1GB instance, try shared_buffers = 256MB.
    • Why it works: This frees up RAM for the operating system’s file system cache and for individual connection memory, preventing swapping and improving overall system responsiveness.
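    • Example: on self-managed Postgres you could apply this with ALTER SYSTEM (on a managed Supabase instance, prefer the dashboard or CLI; this sketch shows the underlying mechanism and assumes superuser-level privileges):

      ALTER SYSTEM SET shared_buffers = '256MB';
      -- shared_buffers only takes effect after a full server restart;
      -- SELECT pg_reload_conf(); is not enough for this parameter.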
  2. max_connections too high for available RAM.

    • Diagnosis: Each connection, even idle ones, consumes memory. If you have many connections, this can quickly add up, especially if work_mem is also generous.
    • Check: Run SHOW max_connections; and SHOW work_mem;. Estimate total memory usage: max_connections * (approx_connection_overhead + work_mem). A rough overhead per connection is around 1-2MB.
    • Fix: Lower max_connections. If you’re not actively using hundreds of connections, try max_connections = 100. For very small instances, even 50 might be appropriate.
    • Why it works: Reduces the aggregate memory footprint of all active and idle connections, preventing the system from running out of RAM.
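    • Example: before lowering the limit, measure how many connections you actually use; pg_stat_activity is a standard view:

      SELECT count(*) AS total_connections,
             count(*) FILTER (WHERE state = 'idle') AS idle_connections
      FROM pg_stat_activity;
      -- If the total stays far below 150 even at peak, a lower
      -- max_connections costs you nothing and frees RAM.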
  3. work_mem too high, especially with many connections.

    • Diagnosis: work_mem is allocated per sort or hash operation, not per query or per connection. A complex query with multiple sorts and joins can therefore consume several multiples of work_mem at once. If max_connections is high and work_mem is also high, memory can be exhausted very quickly.
    • Check: Run SHOW work_mem;. If it’s set to 16MB or higher and you have hundreds of connections, this is a likely culprit.
    • Fix: Lower work_mem. Start conservatively, e.g., work_mem = 4MB. Tune upwards only if specific queries show they are spilling to disk (identified by EXPLAIN ANALYZE).
    • Why it works: Limits the memory each individual operation within a query can consume, preventing a cascade of memory exhaustion when multiple connections execute complex operations concurrently.
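    • Example: to confirm a sort is actually spilling to disk before touching work_mem, look at EXPLAIN ANALYZE output (the table and column here are hypothetical):

      EXPLAIN (ANALYZE, BUFFERS)
      SELECT * FROM orders ORDER BY created_at;
      -- A spilling sort reports something like:
      --   Sort Method: external merge  Disk: 10240kB
      -- while an in-memory sort reports:
      --   Sort Method: quicksort  Memory: 3201kB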
  4. effective_cache_size not reflecting actual available cache.

    • Diagnosis: effective_cache_size tells the query planner how much memory is likely available for caching disk pages (the OS file system cache plus shared_buffers). It doesn’t allocate anything; it only influences plan choices. If it’s set too high, the planner may assume data is cached when it isn’t, producing index-heavy plans that cause excessive random I/O. If it’s too low, the planner may avoid index scans that would actually be cheap.
    • Check: Run SHOW effective_cache_size;. Compare this to (Total RAM - shared_buffers - connection overhead).
    • Fix: Set effective_cache_size to a value that represents roughly 50-75% of your total RAM. For a 1GB instance with shared_buffers = 256MB, effective_cache_size = 768MB is a good starting point.
    • Why it works: Provides a more accurate hint to the query planner about available caching, enabling it to generate better execution plans that leverage both shared_buffers and the OS’s file system cache.
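    • Example: because effective_cache_size is only a planner hint, it does not require a restart; a configuration reload is enough:

      ALTER SYSTEM SET effective_cache_size = '768MB';
      SELECT pg_reload_conf();
      SHOW effective_cache_size;
      -- Output: 768MB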
  5. Insufficient maintenance_work_mem for vacuuming/indexing.

    • Diagnosis: While not directly related to connection limits, insufficient maintenance_work_mem can lead to slow maintenance operations, which indirectly impacts performance and can cause bloat, requiring more resources overall.
    • Check: Run SHOW maintenance_work_mem;. If it’s very low (e.g., 1MB), large VACUUM operations or index builds will be slow.
    • Fix: Increase maintenance_work_mem. A common recommendation is 64MB or 128MB for most instances.
    • Why it works: Allows VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY operations to use more memory for sorting and hashing, making them significantly faster and more efficient.
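    • Example: you can also raise it for a single session just before a heavy operation, without changing the global default (the index and table names here are hypothetical):

      SET maintenance_work_mem = '256MB';
      CREATE INDEX CONCURRENTLY idx_orders_created_at ON orders (created_at);
      RESET maintenance_work_mem;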
  6. Not using Supabase’s default connection pooler (PgBouncer).

    • Diagnosis: If you’re manually managing connections or not leveraging Supabase’s built-in PgBouncer, you might be opening direct connections to Postgres for every client request. This is inefficient and quickly hits max_connections.
    • Check: Verify your application’s connection string. If it connects directly to the Postgres port (e.g., 5432), you’re likely not using the pooler.
    • Fix: Ensure your application connects to the pooler port shown in your Supabase connection details (typically 6543, versus 5432 for direct connections). Because PgBouncer maintains a small pool of persistent connections to Postgres and multiplexes many client requests over them, max_connections on the Postgres side can stay modest (for example, 100) while the pooler serves far more concurrent clients.
    • Why it works: PgBouncer maintains a smaller number of actual connections to the database, reusing them for many client requests. This drastically reduces the load on Postgres and allows for higher application-level concurrency without exhausting max_connections.
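    • Example: the difference is visible in the connection string; the hostnames below are placeholders, and the real values live in your project’s connection settings:

      -- Direct connection: one Postgres backend per client,
      -- counted against max_connections:
      postgresql://postgres:[YOUR-PASSWORD]@db.[PROJECT-REF].supabase.co:5432/postgres
      -- Pooled connection via PgBouncer (note port 6543):
      postgresql://postgres:[YOUR-PASSWORD]@db.[PROJECT-REF].supabase.co:6543/postgres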

After adjusting these parameters, note that shared_buffers and max_connections only take effect after a Postgres restart, while work_mem, maintenance_work_mem, and effective_cache_size only require a configuration reload. On Supabase, you can restart the database from the project dashboard.

Want structured learning?

Take the full Supabase course →