Supabase Postgres performance is less about discovering hidden knobs and more about understanding how Postgres itself is designed to share resources, and what limits those resources.

Let’s watch a query play out. Imagine this SELECT statement against a users table with a million rows.

SELECT * FROM users WHERE email = 'test@example.com';

Without an index, Postgres scans the entire table, row by row, to find that one user. That’s a Seq Scan (sequential scan), and it’s slow. If email has an index, Postgres can jump straight to the right pages using an Index Scan. This is much faster, but indexes have their own costs.
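You can watch Postgres make this choice with EXPLAIN. A sketch, assuming the users table from above (the index name idx_users_email is illustrative):

```sql
-- Without an index: expect a Seq Scan over all million rows.
EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'test@example.com';

-- Add an index on email.
CREATE INDEX idx_users_email ON users (email);

-- Re-run: the plan should now show an Index Scan using idx_users_email.
EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'test@example.com';
```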

The most impactful settings control how Postgres uses RAM and disk.

shared_buffers: This is Postgres’s main RAM cache. It’s where Postgres keeps frequently accessed table and index data.

  • Diagnosis: SHOW shared_buffers;
  • Common Cause: Too small. If shared_buffers is tiny (e.g., 128MB), Postgres will constantly fetch data from disk, even if it’s been read recently.
  • Fix: Increase it. A common starting point is 25% of system RAM, but don’t exceed 40% for production systems. For a 16GB RAM server, try ALTER SYSTEM SET shared_buffers = '4GB'; (note that shared_buffers only takes effect after a restart, not a config reload).
  • Why it works: Larger shared_buffers means more data stays in RAM, drastically reducing disk I/O for repeated reads.
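One way to judge whether shared_buffers is big enough is the buffer cache hit ratio from the pg_stat_database view. A sketch; as a rough rule of thumb, a ratio persistently below ~0.99 for a frequently queried database suggests reads are falling through to disk:

```sql
-- Fraction of block reads served from shared_buffers rather than disk.
SELECT datname,
       blks_hit,
       blks_read,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 4) AS hit_ratio
FROM pg_stat_database
WHERE datname = current_database();
```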

effective_cache_size: This tells Postgres how much memory is available for caching, including OS file system cache. It doesn’t allocate memory, but influences the query planner’s decisions.

  • Diagnosis: SHOW effective_cache_size;
  • Common Cause: Too low. If Postgres thinks there’s little cache available, it might favor sequential scans over index scans, even when an index would be better.
  • Fix: Set it higher than shared_buffers. For a 16GB server with shared_buffers at 4GB, try ALTER SYSTEM SET effective_cache_size = '12GB';
  • Why it works: A higher effective_cache_size makes the planner more confident that data read into cache will stay there, leading it to choose more efficient index-based plans.
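Because effective_cache_size is settable per session, you can preview its effect on the planner before changing it system-wide. A sketch against the users table above; actual plans will vary with your data:

```sql
-- Pretend little cache is available: the planner may favor a Seq Scan.
SET effective_cache_size = '512MB';
EXPLAIN SELECT * FROM users WHERE email = 'test@example.com';

-- With a realistic value, index-based plans look cheaper to the planner.
SET effective_cache_size = '12GB';
EXPLAIN SELECT * FROM users WHERE email = 'test@example.com';
```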

maintenance_work_mem: This is used for maintenance operations like VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY.

  • Diagnosis: SHOW maintenance_work_mem;
  • Common Cause: Too small for large tables. If maintenance_work_mem is too low, CREATE INDEX can take ages and consume more I/O overall because it has to spill intermediate results to disk.
  • Fix: Increase it significantly for index creation. For a large table, you might temporarily set ALTER SYSTEM SET maintenance_work_mem = '1GB'; (and remember to SELECT pg_reload_conf(); or restart Postgres).
  • Why it works: More RAM for maintenance tasks means fewer disk writes and faster completion of these heavy operations.
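Since maintenance_work_mem is also settable per session, you can raise it only for the index build rather than system-wide. A sketch (the index name is illustrative):

```sql
-- Raise the limit for this session only, build the index, then reset.
SET maintenance_work_mem = '1GB';
CREATE INDEX CONCURRENTLY idx_users_email ON users (email);
RESET maintenance_work_mem;
```

CONCURRENTLY avoids blocking writes during the build, at the cost of a slower build; it cannot run inside a transaction block.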

work_mem: This is for individual operations within a query, like sorting (ORDER BY, DISTINCT) or hashing.

  • Diagnosis: SHOW work_mem;
  • Common Cause: Too low. If work_mem is too small, complex sorts or hash joins will spill to disk, turning fast in-memory operations into slow disk operations.
  • Fix: Increase it. For complex queries, ALTER SYSTEM SET work_mem = '16MB'; is a common starting point. Be cautious: if set too high and many queries run concurrently, you can exhaust RAM. It’s often better to tune work_mem per session or for specific problematic queries.
  • Why it works: Allows more intermediate sort/hash results to stay in RAM, avoiding slow disk spills.
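The per-session approach might look like this. EXPLAIN ANALYZE reveals spills (look for "Sort Method: external merge" with a Disk figure in the plan), and SET LOCAL scopes the raised limit to a single transaction:

```sql
-- Diagnose first: a disk-based sort method in the plan means work_mem is too small.
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM users ORDER BY email;

-- Raise work_mem for just this transaction, not the whole server.
BEGIN;
SET LOCAL work_mem = '256MB';
SELECT * FROM users ORDER BY email;
COMMIT;
```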

max_worker_processes and max_parallel_workers: These control how many background processes and parallel worker processes Postgres can use.

  • Diagnosis: SHOW max_worker_processes; and SHOW max_parallel_workers;
  • Common Cause: Too low on multi-core systems. If these are set low, Postgres won’t leverage your CPU cores effectively for parallel queries or background tasks.
  • Fix: Align them with your CPU cores. For a 16-core server, you might set ALTER SYSTEM SET max_worker_processes = 16; and ALTER SYSTEM SET max_parallel_workers = 8; (leaving some cores for OS and other processes).
  • Why it works: Enables Postgres to distribute query execution or background tasks across multiple CPU cores, speeding up large operations.
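Whether an individual query actually runs in parallel also depends on max_parallel_workers_per_gather, the per-query worker limit. A sketch of checking; note that changing max_worker_processes requires a restart, while max_parallel_workers only needs a reload:

```sql
SHOW max_parallel_workers_per_gather;  -- per-query limit, default 2

-- Look for "Gather" and "Workers Planned" nodes in the plan output.
EXPLAIN SELECT count(*) FROM users;
```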

random_page_cost: This setting influences the planner’s cost estimate for fetching random pages from disk.

  • Diagnosis: SHOW random_page_cost;
  • Common Cause: Default is 4.0, which assumes traditional spinning disks. On modern SSDs, random I/O is much faster, so this cost is overestimated.
  • Fix: Lower it. For SSDs, try ALTER SYSTEM SET random_page_cost = 1.1;
  • Why it works: Lowering random_page_cost makes the planner more likely to choose index scans (which often involve random page fetches) over sequential scans, as it now estimates them to be cheaper.
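random_page_cost is another setting you can trial per session before persisting. A sketch:

```sql
-- Test the effect in one session first.
SET random_page_cost = 1.1;
EXPLAIN SELECT * FROM users WHERE email = 'test@example.com';

-- Persist it once you're satisfied; a reload applies it, no restart needed.
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();
```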

The most common misconception is that shared_buffers is the only memory setting that matters. In reality, work_mem and effective_cache_size are often just as critical, if not more so, for query performance, especially when dealing with sorts and joins and when ensuring the query planner makes optimal choices based on the available cache.

The next challenge is understanding how VACUUM and ANALYZE keep your statistics fresh and prevent bloat, which directly impacts query performance.

Want structured learning?

Take the full Supabase course →