The pg_stat_statements extension in PostgreSQL is your go-to for uncovering performance bottlenecks, and Supabase leverages it beautifully. This isn’t just about "slow queries"; it’s about identifying which queries are consuming disproportionate resources and how they’re doing it.

Let’s see pg_stat_statements in action. Connect to your Supabase project via psql (or your preferred client) and run this query:

SELECT
  calls,
  total_exec_time,
  rows,
  mean_exec_time,
  stddev_exec_time,
  query
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

This gives you a snapshot of your top 10 queries by average execution time. You’ll see the number of times each query has been executed (calls), the total time spent on it across all executions (total_exec_time, in milliseconds), the number of rows returned (rows), the average time per execution (mean_exec_time), the standard deviation of execution times (stddev_exec_time), and the query text itself. (On PostgreSQL 12 and earlier, these columns were named total_time, mean_time, and stddev_time.)

The real power comes from understanding what these metrics tell you. A high calls count combined with a moderate mean_exec_time might indicate a query that’s executed very frequently and, while individually fast, collectively drains resources. Conversely, a low calls count but an astronomically high mean_exec_time points to a single, very expensive query. stddev_exec_time is crucial too; a high standard deviation suggests inconsistent performance, often pointing to caching effects, skewed data distribution, or lock contention.
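To catch the first category — the cumulative drains — sort by total time instead of mean time. A sketch, using the PostgreSQL 13+ column names (use total_time/mean_time on older versions):

```sql
-- Top 10 queries by total time consumed across all executions
SELECT
  calls,
  total_exec_time,
  mean_exec_time,
  query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```

A query that tops this list but ranks low by mean_exec_time is exactly the "individually fast, collectively expensive" case described above.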

The problem pg_stat_statements solves is the "black box" nature of database performance. Without it, you’re guessing which queries are problematic. This extension provides concrete data, allowing you to move from "my app feels slow" to "this specific SELECT * FROM users WHERE email = $1 query is taking 500ms on average and is called 1000 times per minute."

Before you can query it, pg_stat_statements needs to be enabled on your Supabase project. It’s usually enabled by default, but if not, you can enable it via the Supabase dashboard under "Database" -> "Extensions".
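If you prefer SQL, a single statement does the same thing. (Supabase already preloads the extension’s shared library; on self-hosted PostgreSQL you would also need pg_stat_statements in shared_preload_libraries and a server restart.)

```sql
-- Enable the extension in the current database
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```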

Internally, pg_stat_statements works by hooking into PostgreSQL’s executor. It tracks the execution of every SQL statement, aggregating statistics in shared memory and exposing them through the pg_stat_statements view. This tracking happens at a low level, capturing metrics like execution time, rows processed, and block I/O (shared buffer hits and reads) for each unique query signature.
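Because block I/O is tracked per query signature, you can compute a per-query buffer-cache hit rate from the shared_blks_hit and shared_blks_read columns. A sketch:

```sql
-- Queries doing the most disk reads, with their cache hit rate
SELECT
  query,
  shared_blks_hit,
  shared_blks_read,
  round(100.0 * shared_blks_hit
        / NULLIF(shared_blks_hit + shared_blks_read, 0), 2) AS hit_rate_pct
FROM pg_stat_statements
ORDER BY shared_blks_read DESC
LIMIT 10;
```

A low hit rate on a hot query often means its working set doesn’t fit in cache — frequently a symptom of the missing-index and table-bloat problems discussed below.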

The exact levers you control are primarily which queries you focus on and how you optimize them. Once pg_stat_statements points you to a problematic query, your optimization strategies can include:

  • Adding or refining indexes: This is the most common fix. If pg_stat_statements shows a query scanning large tables without an index, creating one on the relevant columns (WHERE clauses, JOIN conditions) is paramount. For example, if you see SELECT * FROM products WHERE category_id = $1 with high total_time, and category_id isn’t indexed, run CREATE INDEX idx_products_category_id ON products (category_id);. This dramatically reduces the number of rows the database needs to examine.
  • Rewriting the query: Sometimes, a query is logically inefficient. Perhaps it’s fetching too much data, performing unnecessary joins, or using suboptimal functions. Analyze the query text from pg_stat_statements and consider alternative approaches. For instance, selecting only the columns you need instead of SELECT * reduces I/O and can enable index-only scans, though the primary issue is usually a missing index or a full table scan.
  • Optimizing data types: Using appropriate data types can improve index efficiency and reduce storage. If a column used in a WHERE clause is TEXT but should be an INT or UUID, migrating it can speed up comparisons.
  • Analyzing table bloat: Over time, UPDATE and DELETE operations can leave dead tuples in your tables, increasing scan times. Running VACUUM ANALYZE (or VACUUM FULL for more aggressive cleanup, though this locks the table) can help. Supabase often handles this automatically, but manual intervention might be needed for specific high-churn tables.
  • Adjusting PostgreSQL configuration: While less common for individual query tuning, parameters like shared_buffers or work_mem can impact overall performance and how efficiently queries are executed. This is more of a system-wide tuning knob.
  • Materialized Views: For complex, frequently run queries that don’t require real-time data, a materialized view can pre-compute the results, offering significant speedups.
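Whichever lever you pull, verify the effect with EXPLAIN before and after. Using the hypothetical products query from the indexing bullet (table and index names are illustrative):

```sql
-- Hypothetical example: confirm the new index replaces a sequential scan
CREATE INDEX IF NOT EXISTS idx_products_category_id
  ON products (category_id);

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM products WHERE category_id = 42;
-- Look for "Index Scan using idx_products_category_id" instead of "Seq Scan"
```

EXPLAIN (ANALYZE, BUFFERS) shows both the actual runtime and the block I/O, so you can confirm the fix against the same metrics pg_stat_statements reports.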

The one thing most people don’t realize is that pg_stat_statements tracks queries based on their structure, not their literal values. Constants are replaced with placeholders, so SELECT * FROM users WHERE id = 1 and SELECT * FROM users WHERE id = 2 are grouped together under a single normalized entry (SELECT * FROM users WHERE id = $1). This is incredibly useful for identifying patterns of slow queries, but it also means you can’t directly see the performance impact of a specific parameter value without more advanced profiling or logging.

Once you’ve optimized your slowest queries, you’ll likely start noticing queries that are less expensive individually but are called an astronomical number of times, leading you to investigate caching strategies or batching operations.
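After each round of optimization, it helps to reset the counters so the next measurement window reflects only the new behavior (note this discards all accumulated statistics):

```sql
-- Clear all statistics and start a fresh measurement window
SELECT pg_stat_statements_reset();
```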

Want structured learning?

Take the full Supabase course →