Supabase Postgres config tuning often boils down to understanding that shared_buffers and max_connections aren’t independent knobs: each one’s optimal setting directly constrains the other’s feasible range.
Let’s see this in action. Imagine a Supabase project with a Postgres instance that’s starting to feel sluggish under load. We’re seeing slow query times and occasional connection timeouts.
Here’s a typical postgresql.conf snippet from a moderately sized instance, focusing on memory and connections:
```ini
shared_buffers = 1GB
max_connections = 150
work_mem = 4MB
maintenance_work_mem = 64MB
effective_cache_size = 3GB
```
And here’s what the SHOW command in psql or the Supabase SQL Editor would reveal for these parameters:
```sql
SHOW shared_buffers;
-- Output: 1GB
SHOW max_connections;
-- Output: 150
SHOW work_mem;
-- Output: 4MB
SHOW maintenance_work_mem;
-- Output: 64MB
SHOW effective_cache_size;
-- Output: 3GB
```
The core problem Supabase users often face is that max_connections is effectively capped by their instance’s RAM, while shared_buffers consumes a significant chunk of that same RAM. When shared_buffers is set too high, it leaves insufficient memory for the operating system and other Postgres processes, leading to swapping and poor performance. Conversely, setting max_connections too high, even when shared_buffers is reasonable, can exhaust available RAM, because each connection requires its own memory allocation.
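To make that trade-off concrete, you can estimate a worst-case memory budget directly from the running instance. This is a rough sketch, not a precise accounting: the ~2MB per-connection overhead is an assumption, and real usage depends on your workload.

```sql
-- Rough worst-case budget: every connection uses its overhead plus one full
-- work_mem allocation. The 2MB overhead figure is an assumption.
SELECT
  current_setting('shared_buffers') AS shared_buffers,
  current_setting('max_connections')::int AS max_connections,
  pg_size_pretty(
    current_setting('max_connections')::int
    * (2 * 1024 * 1024 + pg_size_bytes(current_setting('work_mem')))
  ) AS worst_case_connection_memory;
```

If `shared_buffers` plus the worst-case connection memory approaches your instance’s total RAM, one of the knobs below needs to come down.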
Here are the common culprits and their fixes:
- `shared_buffers` too high for available RAM.
  - Diagnosis: Check your instance’s RAM. For example, a `db-nano` instance has 1GB of RAM. If `shared_buffers` is set to `1GB`, that leaves nothing for the OS or other processes.
  - Check: Run `SHOW shared_buffers;` and compare it to your instance’s total RAM.
  - Fix: Reduce `shared_buffers`. A common recommendation is 25% of total RAM; for a 1GB instance, try `shared_buffers = 256MB`.
  - Why it works: This frees up RAM for the operating system’s file system cache and for per-connection memory, preventing swapping and improving overall system responsiveness.
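If your role is allowed to change server settings, the fix can be applied with `ALTER SYSTEM` (a sketch; on managed Supabase instances some settings may instead be exposed through the dashboard or CLI):

```sql
-- Lower shared_buffers to ~25% of a 1GB instance. Assumes your role may run
-- ALTER SYSTEM; on managed platforms this may be restricted.
ALTER SYSTEM SET shared_buffers = '256MB';
-- shared_buffers cannot be changed by a config reload; restart the instance
-- afterwards for the new value to take effect.
```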
- `max_connections` too high for available RAM.
  - Diagnosis: Each connection, even an idle one, consumes memory. With many connections this adds up quickly, especially if `work_mem` is also generous.
  - Check: Run `SHOW max_connections;` and `SHOW work_mem;`. Estimate total memory usage as `max_connections * (approx_connection_overhead + work_mem)`. A rough overhead per connection is around 1-2MB.
  - Fix: Lower `max_connections`. If you’re not actively using hundreds of connections, try `max_connections = 100`. For very small instances, even `50` might be appropriate.
  - Why it works: Reduces the aggregate memory footprint of all active and idle connections, preventing the system from running out of RAM.
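Before lowering the limit, it helps to see how many slots you actually use. A quick check against `pg_stat_activity` (a sketch; run it during peak load for a meaningful number):

```sql
-- How close are we to the connection ceiling right now?
SELECT
  count(*) AS current_connections,
  current_setting('max_connections')::int AS max_connections,
  round(100.0 * count(*) / current_setting('max_connections')::int, 1)
    AS pct_used
FROM pg_stat_activity;
```

If you consistently sit far below the limit, lowering `max_connections` costs you nothing and frees RAM.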
- `work_mem` too high, especially with many connections.
  - Diagnosis: `work_mem` is allocated per operation, for sorts, hashes, and similar steps. A complex query with multiple sorts and joins can consume `work_mem` several times over within a single query. If `max_connections` and `work_mem` are both high, memory can be exhausted very quickly.
  - Check: Run `SHOW work_mem;`. If it’s set to `16MB` or higher and you have hundreds of connections, this is a likely culprit.
  - Fix: Lower `work_mem`. Start conservatively, e.g. `work_mem = 4MB`, and tune upwards only if specific queries are spilling to disk (identified by `EXPLAIN ANALYZE`).
  - Why it works: Limits the memory each individual operation within a query can consume, preventing a cascade of memory exhaustion when multiple connections execute complex operations concurrently.
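Two hedged ways to confirm this diagnosis: cumulative temp-file statistics for the whole database, and a per-query plan. The table and column names in the second statement are placeholders for your own suspect query.

```sql
-- Queries that exceed work_mem spill sorts/hashes to temp files; steadily
-- growing counters here suggest work_mem is too small for some workloads.
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_bytes
FROM pg_stat_database
WHERE datname = current_database();

-- For a single suspect query, look for "Sort Method: external merge  Disk: ..."
-- in the output. some_table/some_column are hypothetical placeholders.
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM some_table ORDER BY some_column;
```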
- `effective_cache_size` not reflecting actual available cache.
  - Diagnosis: `effective_cache_size` tells the query planner how much memory is available for caching disk pages (OS cache + `shared_buffers`). It is only a planner hint; no memory is actually allocated. If it’s set too high, the planner might assume data is cached when it’s not, leading to inefficient plans. If it’s too low, the planner may avoid plans that would benefit from cached data.
  - Check: Run `SHOW effective_cache_size;`. Compare it to `shared_buffers` plus the RAM realistically left for the OS file system cache (total RAM minus `shared_buffers` and connection overhead).
  - Fix: Set `effective_cache_size` to roughly 50-75% of your total RAM. For a 1GB instance with `shared_buffers = 256MB`, `effective_cache_size = 768MB` is a good starting point.
  - Why it works: Provides a more accurate hint to the query planner about available caching, enabling it to generate better execution plans that leverage both `shared_buffers` and the OS’s file system cache.
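Because it is only a planner hint, this setting can be changed without a restart. A minimal sketch, assuming `ALTER SYSTEM` is permitted on your instance:

```sql
-- Raise the planner's cache estimate to ~75% of a 1GB instance. No memory
-- is allocated by this setting, so a config reload is enough.
ALTER SYSTEM SET effective_cache_size = '768MB';
SELECT pg_reload_conf();
```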
- Insufficient `maintenance_work_mem` for vacuuming/indexing.
  - Diagnosis: While not directly related to connection limits, insufficient `maintenance_work_mem` can lead to slow maintenance operations, which indirectly impacts performance and can cause bloat, requiring more resources overall.
  - Check: Run `SHOW maintenance_work_mem;`. If it’s very low (e.g. `1MB`), large `VACUUM` operations or index builds will be slow.
  - Fix: Increase `maintenance_work_mem`. A common recommendation is `64MB` or `128MB` for most instances.
  - Why it works: Allows `VACUUM`, `CREATE INDEX`, and `ALTER TABLE ... ADD FOREIGN KEY` operations to use more memory for sorting and hashing, making them significantly faster and more efficient.
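For a one-off heavy operation, you can also raise the value for a single session instead of server-wide. A sketch, with hypothetical table and index names:

```sql
-- Give a one-off index build extra memory for this session only.
-- orders/idx_orders_created_at are hypothetical names.
SET maintenance_work_mem = '256MB';
CREATE INDEX CONCURRENTLY idx_orders_created_at ON orders (created_at);
RESET maintenance_work_mem;
```

This keeps the server-wide default conservative while still letting occasional maintenance run fast.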
- Not using Supabase’s default connection pooler (PgBouncer).
  - Diagnosis: If you’re manually managing connections or not leveraging Supabase’s built-in PgBouncer, you might be opening a direct connection to Postgres for every client request. This is inefficient and quickly hits `max_connections`.
  - Check: Verify your application’s connection string. If it connects directly to the Postgres port (`5432`), you’re likely not using the pooler.
  - Fix: Ensure your application connects to the pooler port provided in your Supabase connection details (typically `6543`). Because PgBouncer funnels many client requests through a small pool of persistent Postgres connections, `max_connections` in Postgres can stay modest; you don’t need to raise it just because your application has hundreds of clients.
  - Why it works: PgBouncer maintains a smaller number of actual connections to the database, reusing them for many client requests. This drastically reduces the load on Postgres and allows for higher application-level concurrency without exhausting `max_connections`.
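One way to spot pooler bypass from inside the database is to group current sessions by client and application. This is a heuristic sketch: many distinct, short-lived sessions from your app servers suggest direct connections rather than a pooled connection string.

```sql
-- Group current client sessions; a pooler typically shows up as a small,
-- stable set of connections rather than one per application request.
SELECT client_addr, application_name, state, count(*)
FROM pg_stat_activity
WHERE backend_type = 'client backend'
GROUP BY client_addr, application_name, state
ORDER BY count(*) DESC;
```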
After adjusting these parameters, you’ll likely need to restart your Postgres instance in Supabase for the changes to take effect; `shared_buffers` and `max_connections` in particular cannot be changed by a reload alone. And if the next error you hit is a relation "..." does not exist, check whether you mistyped a table name during your tuning session.