Supabase doesn’t actually have a built-in, user-configurable rate limiter for its auto-generated REST API.
Let’s see what happens when you hammer a Supabase endpoint. Imagine you’ve got a simple todos table and you want to fetch all of them. Your frontend code might look something like this:
```javascript
async function fetchTodos() {
  const response = await fetch('https://your-project-ref.supabase.co/rest/v1/todos', {
    headers: {
      'apikey': 'your-anon-key',
      'Authorization': 'Bearer your-anon-key' // Often the same as apikey for anon
    }
  });
  const data = await response.json();
  console.log(data);
}

// Rapidly call fetchTodos multiple times
for (let i = 0; i < 100; i++) {
  fetchTodos();
}
```
If you run this, you won’t immediately see a "429 Too Many Requests" error from Supabase itself. Instead, what you’ll observe is that some requests will succeed, returning your todos data, while others will start failing with generic HTTP error codes, often a 500 Internal Server Error or a 502 Bad Gateway. This is because the underlying infrastructure that Supabase runs on is getting overloaded, not because Supabase has a specific rate-limiting policy you’ve hit.
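On the client side, the least you can do is treat those failures as transient and back off instead of retrying immediately. Here is a minimal sketch of exponential backoff; `fetchWithBackoff` is an illustrative helper, and `doFetch` stands in for any request function (such as the `fetchTodos` call above) that resolves to an object with `ok` and `status`:

```javascript
// Retry a request with exponential backoff when the server signals overload.
// `doFetch` is any function returning a Promise of a { ok, status } response.
async function fetchWithBackoff(doFetch, maxRetries = 3, baseDelayMs = 200) {
  let response;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    response = await doFetch();
    // Success, or a client error that retrying will not fix
    if (response.ok || (response.status < 500 && response.status !== 429)) {
      return response;
    }
    if (attempt === maxRetries) break;
    // Exponential backoff: 200ms, 400ms, 800ms, ...
    await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt));
  }
  return response; // give up and surface the last failure
}
```

You would wrap the earlier call as `fetchWithBackoff(() => fetch(url, { headers }))`. This doesn’t fix the overload, but it keeps a retry storm from amplifying it.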
The problem Supabase’s architecture solves here is providing a unified API for your database, authentication, storage, and real-time needs, abstracting away the complexity of managing these distributed services. What it doesn’t inherently provide out-of-the-box is a granular, per-user or per-endpoint rate-limiting mechanism that you can configure in your Supabase dashboard.
Internally, Supabase relies on services like PostgreSQL, GoTrue (for auth), PostgREST (for the REST API), and Realtime. These services are themselves hosted on infrastructure that has its own capacity limits. When you send a flood of requests, you’re not hitting a Supabase-specific X-RateLimit-Limit header; you’re overwhelming the connection pool to PostgreSQL, or overwhelming the PostgREST instance’s ability to process requests, or hitting limits on the load balancer. The observed errors are symptoms of this underlying infrastructure strain.
So, how do you actually protect your endpoints and prevent abuse or accidental overload? You implement rate limiting before the request even reaches Supabase. The most common and effective way to do this is at the edge, using a Content Delivery Network (CDN) or an API Gateway.
Let’s consider Cloudflare, a popular choice. You can configure rate limiting rules in your Cloudflare dashboard. For example, to protect your Supabase REST API endpoint (your-project-ref.supabase.co/rest/v1/*), you’d set up a rule like this:
Scenario: Protect the Supabase REST API from excessive requests.
Rule Configuration (Cloudflare):
- Field: URI Path
- Operator: starts with
- Value: /rest/v1/
- Action: Rate limit
- Requests: 100
- Period: 1 minute
- Block action: Show a custom block page
This configuration tells Cloudflare: "If any single IP address makes more than 100 requests to any path starting with /rest/v1/ within any 1-minute window, block that IP address for a period and show them a custom page."
Why this works: Cloudflare sits in front of your Supabase project. It inspects incoming requests before they are forwarded to Supabase. When the configured threshold is met, Cloudflare intercepts further requests from the offending IP, returning a rate-limited response directly, thus preventing the Supabase infrastructure from ever seeing those excessive requests. This is a mechanical "firewall" at the network edge.
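If you prefer to manage this rule as code rather than through the dashboard, Cloudflare’s rulesets API accepts rate-limiting rules in roughly the shape below. Treat the exact field names and the `starts_with` expression as a sketch from memory of the public API; verify them against Cloudflare’s current documentation before relying on them.

```json
{
  "description": "Limit Supabase REST API per client IP",
  "expression": "starts_with(http.request.uri.path, \"/rest/v1/\")",
  "action": "block",
  "ratelimit": {
    "characteristics": ["ip.src", "cf.colo.id"],
    "period": 60,
    "requests_per_period": 100,
    "mitigation_timeout": 600
  }
}
```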
Another common strategy is to use an API Gateway like AWS API Gateway or Apigee. These services are specifically designed for managing APIs and offer robust rate-limiting features. You would configure your Supabase API as a backend service within the gateway and set up usage plans and API keys, defining throttling limits per key or per IP.
Scenario: Implement more granular, authenticated rate limiting.
API Gateway Configuration (Conceptual):
- API Key: Assign unique API keys to different clients or services.
- Usage Plan: Define a plan with a Rate (e.g., 60 requests per minute) and a Burst capacity (e.g., 100 requests).
- Associate: Link the API key to the usage plan and configure the gateway to route requests for your Supabase endpoints through this plan.
Why this works: The API Gateway acts as a central point of control. It authenticates requests using API keys and enforces the throttling defined in the usage plan. If a client exceeds its allowance, the gateway rejects the request with a 429 Too Many Requests status code, again protecting the downstream Supabase services.
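The Rate-plus-Burst model these gateways use is essentially a token bucket: the bucket holds up to `burst` tokens, refills continuously at the sustained rate, and each request spends one token. A minimal sketch of the mechanic (illustrative only, not AWS’s actual implementation):

```javascript
// Token bucket: capacity = burst, refilled at the sustained rate.
// E.g. new TokenBucket(1, 100) models "60 requests/minute, burst 100".
class TokenBucket {
  constructor(ratePerSec, burst) {
    this.ratePerSec = ratePerSec;
    this.burst = burst;
    this.tokens = burst;           // start full: an idle client can burst immediately
    this.lastRefill = Date.now();
  }

  tryRemoveToken(now = Date.now()) {
    // Refill proportionally to elapsed time, capped at the burst capacity
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.burst, this.tokens + elapsedSec * this.ratePerSec);
    this.lastRefill = now;

    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;   // request allowed
    }
    return false;    // over the limit: respond with 429
  }
}
```

The burst capacity absorbs short spikes, while the refill rate caps sustained throughput, which is exactly the pair of numbers a usage plan asks for.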
For real-time subscriptions, Supabase uses WebSockets. These aren’t rate-limited by Supabase in the same way as HTTP requests, but an excessive number of subscriptions or messages from a single client can still overload the system. The same edge solutions (Cloudflare, API gateways) can often be configured to manage WebSocket traffic, though the specifics differ. For instance, Cloudflare Tunnel (formerly Argo Tunnel) and Spectrum can help manage high-traffic WebSocket connections.
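You can also throttle on the client before messages ever reach the socket. The sketch below wraps any send function so that excess calls within a one-second window are simply dropped; wiring it to a supabase-js channel’s `send` method is an assumption about your setup, and you could queue instead of drop.

```javascript
// Wrap a send function so at most `maxPerSec` calls go through per second.
// Extra calls are dropped and the wrapper returns false for them.
function throttlePerSecond(sendFn, maxPerSec) {
  let windowStart = Date.now();
  let count = 0;
  return (...args) => {
    const now = Date.now();
    if (now - windowStart >= 1000) {
      windowStart = now;   // start a new one-second window
      count = 0;
    }
    if (count >= maxPerSec) return false; // dropped
    count++;
    sendFn(...args);
    return true;
  };
}
```

Usage might look like `const send = throttlePerSecond(msg => channel.send(msg), 10);`, so a tight loop in your UI code can’t flood the realtime connection.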
Many Supabase users also implement custom rate limiting within their own backend services that sit in front of Supabase. If you’re using a serverless function (like Supabase Functions or AWS Lambda) to proxy requests to Supabase, you can implement logic there.
Scenario: Rate limiting within a backend function.
Supabase Function Code (Conceptual - Node.js):
```javascript
// Node-style handler (e.g. an AWS Lambda or Next.js API route).
// Supabase Edge Functions run on Deno, so the handler shape there differs.
import { createClient } from '@supabase/supabase-js';

const supabaseUrl = 'https://your-project-ref.supabase.co';
const supabaseKey = process.env.SUPABASE_SERVICE_KEY; // Use service key for backend access
const supabase = createClient(supabaseUrl, supabaseKey);

// In-memory store for rate limiting (for demonstration, not production-ready:
// it resets on cold starts and isn't shared across instances)
const requestTimestamps = {};
const RATE_LIMIT_COUNT = 50;
const RATE_LIMIT_PERIOD = 60 * 1000; // 1 minute

export default async function handler(req, res) {
  // x-forwarded-for may be a comma-separated chain; the first entry is the client
  const forwarded = req.headers['x-forwarded-for'];
  const ip = forwarded ? forwarded.split(',')[0].trim() : req.socket.remoteAddress;
  const now = Date.now();

  if (!requestTimestamps[ip]) {
    requestTimestamps[ip] = [];
  }

  // Keep only timestamps within the window (sliding-window rate limit)
  requestTimestamps[ip] = requestTimestamps[ip].filter(ts => now - ts < RATE_LIMIT_PERIOD);

  if (requestTimestamps[ip].length >= RATE_LIMIT_COUNT) {
    return res.status(429).json({ error: 'Too Many Requests' });
  }
  requestTimestamps[ip].push(now);

  // Forward request to Supabase
  try {
    // ... logic to process req and call supabase.from(...) ...
    const { data, error } = await supabase.from('your_table').select('*');
    if (error) throw error;
    return res.status(200).json(data);
  } catch (error) {
    console.error(error);
    return res.status(500).json({ error: 'Internal Server Error' });
  }
}
```
Why this works: Your function acts as an intermediary. It tracks requests per IP (or per user ID if authenticated) in memory or a distributed cache like Redis. Before forwarding the request to Supabase, it checks if the limit has been reached. This gives you fine-grained control but requires you to manage the rate-limiting logic and potentially scale your functions.
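The in-memory store above breaks down as soon as more than one function instance is running, which is why a distributed cache like Redis matters. Here is a fixed-window variant (simpler than the sliding window above) against any Redis-style client; the `incr`/`pexpire` calls match ioredis’s API, and the key naming is my own convention:

```javascript
// Fixed-window rate limit backed by a shared Redis-style store, so every
// function instance enforces the same limit. `store` needs atomic `incr`
// and `pexpire` (ioredis provides both).
async function isRateLimited(store, key, limit, windowMs) {
  const count = await store.incr(key);       // atomically count this request
  if (count === 1) {
    await store.pexpire(key, windowMs);      // first hit starts the window
  }
  return count > limit;                      // true => respond with 429
}
```

Inside the handler you would replace the in-memory check with something like `if (await isRateLimited(redis, 'rl:' + ip, RATE_LIMIT_COUNT, RATE_LIMIT_PERIOD)) return res.status(429).json({ error: 'Too Many Requests' });`.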
The one thing most people don’t realize about Supabase’s architecture is that the "API" you interact with (PostgREST) is a stateless, horizontally scalable service. Its primary bottleneck isn’t usually CPU or memory on a single instance, but rather the connection limits to the PostgreSQL database it’s querying. When you hit what looks like rate limiting, you’re often hitting the maximum number of concurrent connections PostgreSQL can handle for your database, or the network bandwidth between PostgREST and PostgreSQL.
The next problem you’ll likely encounter is managing authentication and authorization effectively across multiple services when implementing edge-based rate limiting.