The Supabase Pro plan’s "database size" isn’t just about how much data you store; it’s a proxy for the underlying compute resources allocated to your PostgreSQL instance.
Let’s see how this plays out with a real-world example. Imagine you’re building a social media app. Users upload profile pictures, post short videos, and send messages.
Here’s a simplified look at the users and posts tables in your Supabase project:
```sql
-- Table for user information
CREATE TABLE users (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  username VARCHAR(50) UNIQUE NOT NULL,
  email VARCHAR(255) UNIQUE NOT NULL,
  avatar_url TEXT,
  created_at TIMESTAMP WITH TIME ZONE DEFAULT timezone('utc', now())
);

-- Table for user posts
CREATE TABLE posts (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id UUID REFERENCES users(id) ON DELETE CASCADE,
  content TEXT NOT NULL,
  media_url TEXT,
  created_at TIMESTAMP WITH TIME ZONE DEFAULT timezone('utc', now())
);
```
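With the schema in place, you can see exactly how much of your plan's database size these tables actually consume using PostgreSQL's built-in size functions (these run in any Postgres client, including Supabase's SQL editor):

```sql
-- Total size of the whole database (this is what counts toward your plan's limit)
SELECT pg_size_pretty(pg_database_size(current_database())) AS database_size;

-- Per-table size, including indexes and TOAST storage
SELECT relname AS table_name,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC;
```

Running this periodically is the simplest way to know how close you are to your tier's storage limit before Supabase warns you.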
When you choose a database size on the Pro plan, you’re essentially selecting a compute tier. For instance, selecting "2 GB Database Size" on the Pro plan might actually provision an instance with 2 vCPUs and 4 GB of RAM, but with a 2 GB storage limit for the actual PostgreSQL data files. If you go up to "8 GB Database Size," you might get 4 vCPUs and 8 GB RAM, with an 8 GB storage limit. The storage limit dictates how much data you can physically store in your database, but the compute resources determine how fast your queries run, how many connections your database can handle, and how well it performs under load.
The critical insight here is that as your data grows, you’re not just paying for more disk space. You’re also implicitly increasing the compute resources available to your database to handle that larger dataset effectively. Supabase abstracts this, but understanding the underlying PostgreSQL behavior is key. A database with 10 million rows and complex indexing will require significantly more RAM and CPU to query efficiently than a database with 10 thousand rows, even if both are stored on the same physical hardware.
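You can see how a compute tier translates into PostgreSQL's own configuration by inspecting a few key settings. The values you get back will vary by tier; these are the settings to watch, not guaranteed numbers:

```sql
-- Memory PostgreSQL uses to cache table and index pages
SHOW shared_buffers;

-- Per-operation memory for sorts and hash joins before they spill to disk
SHOW work_mem;

-- How many simultaneous connections the instance accepts
SHOW max_connections;
```

On a larger tier you'll typically see higher values for all three, which is precisely why the same query can run dramatically faster after an upgrade.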
When you upgrade your "database size" on Supabase Pro, you are effectively upgrading the underlying PostgreSQL instance’s capabilities. Let’s break down the compute options and how they impact performance:
- Small (e.g., 2 GB Database Size): This tier typically provides a baseline compute instance, often with around 1 vCPU and 2-4 GB of RAM. It’s suitable for projects in early development, small-scale applications, or those with minimal read/write operations. Think of a personal portfolio site or a small internal tool.
- Medium (e.g., 8 GB Database Size): This tier offers a more robust compute instance, likely with 2 vCPUs and 4-8 GB of RAM. This is a good step up for applications experiencing moderate user traffic, handling more complex queries, or requiring faster response times. A growing SaaS product with a few thousand active users might fit here.
- Large (e.g., 32 GB Database Size): This tier provides significant compute power, often featuring 4 vCPUs and 16-32 GB of RAM. It’s designed for applications with high traffic, demanding analytical queries, or a large number of concurrent users. A popular e-commerce platform or a high-traffic API backend would leverage this.
- Extra Large (e.g., 64 GB Database Size and beyond): These tiers offer substantial compute resources, scaled to handle enterprise-level workloads, massive datasets, and extreme concurrency.
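Before jumping tiers, it's worth measuring whether your current one is actually the bottleneck. Two quick signals are connection pressure and the buffer cache hit ratio; a ratio persistently below roughly 99% suggests your working set no longer fits in RAM (that threshold is a common rule of thumb, not official Supabase guidance):

```sql
-- Active connections versus the configured maximum
SELECT count(*) AS active_connections,
       current_setting('max_connections') AS max_connections
FROM pg_stat_activity;

-- Buffer cache hit ratio across all user tables
SELECT round(sum(heap_blks_hit) * 100.0
             / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0), 2)
       AS cache_hit_pct
FROM pg_statio_user_tables;
```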
The "database size" you select directly correlates to the provisioned compute resources (vCPUs and RAM) allocated to your PostgreSQL instance. It’s not just about the disk space occupied by your tables and indexes.
Consider a scenario where you have a products table with millions of entries and an orders table that’s also massive. You’re running queries like:
```sql
SELECT p.name, COUNT(o.id) AS order_count
FROM products p
JOIN orders o ON p.id = o.product_id
WHERE p.category = 'Electronics'
GROUP BY p.name
ORDER BY order_count DESC
LIMIT 10;
```
If your database is undersized in terms of compute (RAM and vCPUs) for the amount of data and query complexity, PostgreSQL will spill sorts and hash joins to temporary files on disk, leading to slow query execution. Upgrading the "database size" on Supabase Pro provisions a more powerful instance, allowing PostgreSQL to keep more data and indexes in RAM, significantly speeding up such operations.
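You can confirm whether a query is spilling to disk with EXPLAIN: look for "external merge Disk" in sort nodes, or high `read` counts in the buffers output. And before paying for a bigger tier, check that the join and filter columns are indexed (the index names below are just illustrative):

```sql
-- Show the actual plan, timings, and buffer/disk usage for the query above
EXPLAIN (ANALYZE, BUFFERS)
SELECT p.name, COUNT(o.id) AS order_count
FROM products p
JOIN orders o ON p.id = o.product_id
WHERE p.category = 'Electronics'
GROUP BY p.name
ORDER BY order_count DESC
LIMIT 10;

-- Indexes that let PostgreSQL avoid full scans for this query shape
CREATE INDEX IF NOT EXISTS idx_products_category ON products (category);
CREATE INDEX IF NOT EXISTS idx_orders_product_id ON orders (product_id);
```

Often an index turns a seconds-long sequential scan into a millisecond lookup, deferring the need for a compute upgrade entirely.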
The most surprising truth about Supabase Pro’s database size is that it’s not a hard cap on total data storage if you’re using features like Supabase Storage. Supabase Storage is a separate service that handles file uploads (images, videos, etc.) and stores them on object storage (like S3). Your PostgreSQL database primarily stores metadata about these files (e.g., the avatar_url or media_url in the examples above), not the files themselves. This means you can have a massive amount of user-uploaded content in Supabase Storage without directly impacting your PostgreSQL "database size" limit. However, the performance of querying that metadata will still be tied to your chosen PostgreSQL compute tier.
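You can see this split directly in your database: Supabase exposes file metadata through the `storage.objects` table, so only these lightweight rows, not the file bytes themselves, count toward your database size. A query like the following illustrates the idea (column names follow Supabase's storage schema; verify against your own project before relying on them):

```sql
-- File metadata lives in Postgres; the actual bytes live in object storage
SELECT name,
       bucket_id,
       (metadata ->> 'size')::bigint AS size_bytes,
       created_at
FROM storage.objects
ORDER BY created_at DESC
LIMIT 10;
```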
The next logical step after optimizing your database size and compute is to understand how to leverage read replicas for further performance gains.