Your project on Supabase’s free tier can get paused or throttled, and it’s not just about hitting arbitrary limits; it’s how Supabase engineers the shared infrastructure to keep things running for everyone.
Here’s what that looks like in practice:
Let’s say you have a Supabase project. You’ve built a neat little app, and it’s starting to get some traction. Suddenly, users report it’s slow, or worse, completely unresponsive. Your database queries are timing out, your auth calls are failing, and the Supabase dashboard shows your project is "paused."
This isn’t a bug; it’s a feature of the free tier. Supabase uses a multi-tenant architecture. This means many projects share the same underlying PostgreSQL instances, API servers, and other resources. To prevent one "noisy neighbor" project from impacting everyone else, Supabase implements automatic pausing and throttling mechanisms.
Database Pausing
The most dramatic is database pausing. When your project’s PostgreSQL instance exceeds its allocated resources (CPU, memory, IOPS) for a sustained period, Supabase pauses it.
Diagnosis: You’ll see a prominent "Project Paused" banner in your Supabase dashboard. The Supabase CLI might also report connection errors.
Common Causes & Fixes:
- Unindexed Large Queries: A common culprit is a `SELECT` or `UPDATE` query that scans a massive table without an appropriate index.
  - Diagnosis Command: Connect to your database via `psql` or the Supabase SQL Editor and run `EXPLAIN ANALYZE YOUR_SLOW_QUERY;`. Look for `Seq Scan` on large tables.
  - Fix: Add an index to the columns used in the `WHERE` or `ORDER BY` clause of your slow query. For example, if you have `SELECT * FROM users WHERE email = 'test@example.com';` and the `users` table is large, you’d run: `CREATE INDEX idx_users_email ON users (email);`
  - Why it works: Indexes create a sorted data structure that allows the database to quickly find specific rows without scanning the entire table, drastically reducing CPU and I/O.
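If you fetch plan text programmatically (for example via the `pg` client), the sequential-scan check can be automated. A minimal sketch; the plan fragment below is illustrative, not real output from your database:

```javascript
// Sketch: flag sequential scans in EXPLAIN ANALYZE output.
// Postgres names these plan nodes "Seq Scan on <table>".
function hasSeqScan(planText) {
  return /Seq Scan on \w+/.test(planText);
}

// Illustrative plan fragment for an unindexed email lookup:
const plan = `Seq Scan on users  (cost=0.00..25000.00 rows=1 width=72)
  Filter: (email = 'test@example.com'::text)`;

console.log(hasSeqScan(plan)); // true -> this query needs an index
```

After adding the index, the same query should show an `Index Scan` node instead, and this check returns false.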
- Excessive Concurrent Connections: If your application opens many database connections and doesn’t close them properly, you can exhaust the connection pool.
  - Diagnosis Command: In `psql`, run `SELECT count(*) FROM pg_stat_activity;`. If this number is consistently high and approaching the free tier limit (often around 10-20 concurrent connections), this is an issue.
  - Fix: Implement connection pooling in your application. Libraries like `pg-pool` for Node.js or `SQLAlchemy` for Python manage a pool of connections, reusing them instead of opening new ones for each request. Ensure you call `client.release()` on connections when using most pooling libraries.
  - Why it works: Connection pooling reduces the overhead of establishing new connections and prevents the database from being overwhelmed by too many simultaneous, active connections.
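Forgetting `client.release()` on an error path is the classic leak. One way to make release unforgettable is a small wrapper; a sketch assuming a `pg`-style pool object exposing `connect()` and `client.release()`:

```javascript
// Sketch: run a callback with a pooled client, guaranteeing the
// connection is returned to the pool even if the callback throws.
async function withClient(pool, fn) {
  const client = await pool.connect();
  try {
    return await fn(client);
  } finally {
    client.release(); // always runs, on success or error
  }
}

// Usage: const rows = await withClient(pool, c => c.query('SELECT 1'));
```

Every query path now releases its connection exactly once, so leaked connections can’t accumulate toward the free tier limit.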
- Large Data Ingestion/Processing: Running a batch job that inserts or updates millions of rows can temporarily spike resource usage.
  - Diagnosis: Monitor your database’s CPU and IOPS usage in the Supabase dashboard before and during your batch operations.
  - Fix: If possible, break down large operations into smaller batches. Insert or update data in chunks of a few thousand rows at a time, with small delays between batches.

```javascript
// Example in Node.js with pg-pool.
// Assumes: const pg = require('pg'); and that connectionString and
// buildInsertQuery(chunk) are defined elsewhere in your app.
async function batchInsert(data) {
  const pool = new pg.Pool({ connectionString });
  for (let i = 0; i < data.length; i += 1000) {
    const chunk = data.slice(i, i + 1000);
    const client = await pool.connect();
    try {
      await client.query('BEGIN');
      // Build and execute the INSERT statement for this chunk
      await client.query(buildInsertQuery(chunk));
      await client.query('COMMIT');
    } catch (e) {
      await client.query('ROLLBACK');
      throw e;
    } finally {
      client.release();
    }
    await new Promise(resolve => setTimeout(resolve, 50)); // small delay between batches
  }
  await pool.end();
}
```

  - Why it works: Spreading the load over time prevents a single, massive spike that would trigger the auto-pausing mechanism.
- Resource-Intensive Functions: Supabase Functions, especially those that perform heavy computation or I/O, can consume significant CPU and memory.
  - Diagnosis: Check the logs and resource usage for your Supabase Functions within the Supabase dashboard.
  - Fix: Optimize your function code for efficiency. Offload heavy processing to background jobs if possible. If the function genuinely needs more resources, consider upgrading your Supabase plan.
  - Why it works: More efficient code uses fewer CPU cycles and less memory, staying within the free tier’s resource envelopes.
- Unoptimized Realtime Subscriptions: A large number of active realtime subscriptions, especially if they are broadcasting frequently, can strain the backend.
  - Diagnosis: Monitor the "Realtime" tab in your Supabase dashboard for the number of active subscriptions and message rates.
  - Fix: Ensure your application only subscribes to the data it absolutely needs and unsubscribes when components are unmounted or no longer in use. Implement debouncing or throttling for rapid data changes that trigger broadcasts.
  - Why it works: Reduces the load on the realtime server by processing and broadcasting fewer messages and managing fewer persistent connections.
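Debouncing broadcasts is a few lines of plain JavaScript. A sketch; `channel.send` in the usage comment stands in for whatever broadcast call your app makes and is an assumption, not a specific Supabase API:

```javascript
// Sketch: collapse bursts of rapid changes into a single broadcast.
// Only the last call within the wait window actually fires.
function debounce(fn, waitMs) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// e.g. const pushUpdate = debounce(payload => channel.send(payload), 250);
// Ten cursor moves inside 250 ms now produce one broadcast instead of ten.
```

Throttling (firing at most once per interval, rather than only after a quiet period) is the right variant when consumers need steady intermediate updates.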
API Throttling
Even if your database isn’t paused, your API requests (REST and GraphQL) can be throttled. This usually manifests as 429 Too Many Requests errors.
Diagnosis:
Your API calls start failing with 429 status codes. The Supabase dashboard might show high API request rates.
Common Causes & Fixes:
- High Request Volume: Simply making too many API calls in a short period.
  - Diagnosis: Observe the "API" tab in your Supabase dashboard for request counts and error rates.
  - Fix: Implement caching on the client or add a dedicated caching layer. Batching requests (if your API supports it) can also help. Optimize your frontend to fetch only the data it needs.
  - Why it works: Caching avoids redundant API calls, and batching consolidates multiple requests into one, reducing the overall load on the API servers.
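A client-side cache can be as simple as a TTL map in front of your fetch function. A minimal sketch; `fetcher` is any async function (a supabase-js query, a `fetch` call), and the names here are assumptions, not Supabase APIs:

```javascript
// Sketch: in-memory cache with a time-to-live, keyed by request key.
// Repeated calls within ttlMs reuse the stored result instead of
// hitting the API again.
function cached(fetcher, ttlMs) {
  const store = new Map(); // key -> { value, expires }
  return async (key) => {
    const hit = store.get(key);
    if (hit && hit.expires > Date.now()) return hit.value;
    const value = await fetcher(key);
    store.set(key, { value, expires: Date.now() + ttlMs });
    return value;
  };
}

// Usage: const getUser = cached(id => fetchUserFromApi(id), 30_000);
```

For data that changes rarely (profiles, reference tables), even a short TTL of a few seconds can absorb most of a burst.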
- Inefficient Data Fetching: Fetching entire tables or large objects when you only need a few fields.
  - Diagnosis: Analyze your API calls. Are you using `?select=*` when you only need `?select=id,name`?
  - Fix: Be specific with your `select` parameters in Supabase API calls.

```javascript
// Instead of:
// fetch(`${url}/rest/v1/users?select=*`, options)
// Use:
fetch(`${url}/rest/v1/users?select=id,username,created_at`, options)
```

  - Why it works: Transferring less data over the network and processing less data on the server reduces the strain per request.
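While you fix the root cause, it also helps to handle 429s gracefully on the client rather than failing outright. A sketch of retry with exponential backoff; `doRequest` is any async function returning an object with a `status` field (like a fetch Response), an assumption rather than a Supabase client API:

```javascript
// Sketch: retry a request with exponential backoff on 429 responses.
// Delays double each attempt: baseDelayMs, 2x, 4x, ...
async function withBackoff(doRequest, maxRetries = 3, baseDelayMs = 200) {
  for (let attempt = 0; ; attempt++) {
    const res = await doRequest();
    if (res.status !== 429 || attempt >= maxRetries) return res;
    const delay = baseDelayMs * 2 ** attempt;
    await new Promise(r => setTimeout(r, delay));
  }
}
```

Backoff spreads retries out instead of hammering an already-throttled endpoint, which is exactly what the throttling is asking clients to do. If the server sends a `Retry-After` header, prefer honoring it over a computed delay.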
Auth Rate Limiting
Supabase Auth also has its own rate limits to prevent abuse.
Diagnosis: Users might experience errors when signing up, logging in, or resetting passwords, often with messages indicating rate limiting.
Common Causes & Fixes:
- Brute-Force Attacks or Bots: High volumes of failed login attempts.
  - Diagnosis: Check Supabase Auth logs for patterns of repeated failed sign-in attempts from the same IP addresses.
  - Fix: Implement CAPTCHA on your login and signup forms. Supabase offers built-in abuse protection that may need tuning; for higher traffic, consider third-party bot detection services.
  - Why it works: CAPTCHAs and bot detection differentiate human users from automated scripts, preventing abuse.
- High User Onboarding Volume: A sudden surge of legitimate new users signing up simultaneously.
  - Diagnosis: Correlate high auth error rates with marketing campaigns or viral growth events.
  - Fix: While the free tier has limits, if this is a sustained need, upgrading your Supabase plan will increase these rate limits. For immediate relief, you could temporarily throttle your own signup flow.
  - Why it works: Upgrading provides more capacity; self-throttling paces user acquisition to stay within the current limits.
After fixing your database pausing issues, the next thing you’ll likely encounter is API throttling on specific endpoints if your application logic still generates a high volume of requests.