Supabase logical replication slots aren’t just for streaming changes out; they’re the system’s conscience, remembering every change made since the slot was created until a consumer explicitly acknowledges it.

Let’s watch a logical replication slot in action. Imagine we have a users table:

CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(50) UNIQUE NOT NULL,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

INSERT INTO users (username) VALUES ('alice');
INSERT INTO users (username) VALUES ('bob');

Now, we’ll create a logical replication slot. This is the "producer" side, telling PostgreSQL to start tracking changes for this specific consumer.

-- On your PostgreSQL instance (e.g., via psql or the Supabase SQL Editor)
SELECT pg_create_logical_replication_slot('my_app_slot', 'test_decoding');

test_decoding is an output plugin: the component that formats decoded changes so a consumer can understand them. (pgoutput is the standard plugin used by PostgreSQL’s built-in logical replication, but it speaks a binary protocol and requires a publication; test_decoding emits human-readable text, which makes it ideal for a demo.) Once this slot is created, PostgreSQL retains the WAL covering every subsequent INSERT, UPDATE, and DELETE in this database until the slot’s consumer confirms it has processed them. The changes are decoded from that retained WAL on demand, not buffered separately.
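You can confirm the slot exists and see which plugin it uses by querying the standard `pg_replication_slots` view:

```sql
-- Inspect the slot we just created
SELECT slot_name, plugin, slot_type, active, restart_lsn
FROM pg_replication_slots
WHERE slot_name = 'my_app_slot';
```

The `restart_lsn` column is the oldest WAL position the slot still needs; it will come up again when we talk about disk pressure.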

To consume these changes, we need a "consumer." This could be another PostgreSQL database, a Kafka producer, or a custom application. For simplicity, let’s simulate a consumer using pg_recvlogical:

# On a machine that can connect to your Supabase Postgres instance.
# The slot already exists, so we use --start rather than --create-slot,
# and stream the decoded changes to stdout with -f -.
pg_recvlogical -d <your_supabase_db_name> -U <your_supabase_user> -h <your_supabase_host> -p <your_supabase_port> --slot my_app_slot --start -f -

If we now make a change in our users table:

-- On your Supabase database
INSERT INTO users (username) VALUES ('charlie');
UPDATE users SET username = 'alice_updated' WHERE username = 'alice';
DELETE FROM users WHERE username = 'bob';

The pg_recvlogical session would start printing messages like this (timestamps abbreviated for clarity):

BEGIN 135
table public.users: INSERT: id[integer]:3 username[character varying]:'charlie' created_at[timestamp with time zone]:'2023-10-27 10:30:00+00'
COMMIT 135
BEGIN 136
table public.users: UPDATE: id[integer]:1 username[character varying]:'alice_updated' created_at[timestamp with time zone]:'2023-10-27 10:29:00+00'
COMMIT 136
BEGIN 137
table public.users: DELETE: id[integer]:2
COMMIT 137

Each message represents a transaction (BEGIN/COMMIT) and the data changes within it. Notice the DELETE reports only the primary key: with the table’s default REPLICA IDENTITY, PostgreSQL logs just the key columns for deleted rows. As pg_recvlogical receives these messages, it periodically reports back how far it has flushed them, and that feedback is the acknowledgment: it tells PostgreSQL that changes up to that point are safely processed, so the corresponding WAL (Write-Ahead Log) no longer needs to be retained for this slot.
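You can see the acknowledgment mechanic directly from SQL, without pg_recvlogical. `pg_logical_slot_peek_changes` reads the pending stream without advancing the slot, while `pg_logical_slot_get_changes` consumes (acknowledges) it. Note these functions only work with text-output plugins such as test_decoding; a sketch, assuming the `my_app_slot` slot from above:

```sql
-- Peek: read pending changes without consuming them.
-- Running this twice returns the same rows both times.
SELECT lsn, xid, data
FROM pg_logical_slot_peek_changes('my_app_slot', NULL, NULL);

-- Get: consume the changes. The slot's pointer advances, and a
-- second call returns nothing until new transactions commit.
SELECT lsn, xid, data
FROM pg_logical_slot_get_changes('my_app_slot', NULL, NULL);
```

This pair is handy for debugging: peek to inspect what a stuck consumer would see, get to manually drain a slot you no longer care about.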

The mental model is this: PostgreSQL acts as a ledger. When you create a logical replication slot, you’re asking it to maintain a separate, queryable stream of ledger entries for you. The slot itself is a pointer in that ledger. As your consumer reads entries, it tells PostgreSQL, "I’ve read up to this point." PostgreSQL then advances the slot’s pointer, freeing up space in its internal logs. If the consumer stops reading, the slot’s pointer stays put, and PostgreSQL holds onto the WAL data indefinitely, waiting for the consumer to catch up. This is why unconsumed slots can lead to disk space exhaustion – they’re a persistent record of all changes that haven’t been "signed off" by a consumer.
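You can watch that ledger pointer directly. Comparing each slot’s `restart_lsn` against the current WAL position shows how much WAL the slot is forcing PostgreSQL to keep; a monitoring sketch using the standard `pg_replication_slots` view:

```sql
-- Bytes of WAL each slot is pinning on disk
SELECT slot_name,
       active,
       pg_size_pretty(
           pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)
       ) AS retained_wal
FROM pg_replication_slots;
```

An inactive slot whose `retained_wal` keeps growing is exactly the disk-exhaustion scenario described above.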

The most surprising thing about logical replication slots is their persistence. Unlike physical replication, which streams raw WAL segments, logical replication streams decoded changes: the output plugin transforms WAL records into a structured change stream, and the slot tracks a position in that stream. Because the slot’s state is stored durably on the server, it survives WAL rollovers and even server restarts; the slot simply prevents PostgreSQL from recycling any WAL segment it still needs, for as long as the slot exists and isn’t dropped.

The next concept you’ll likely grapple with is how to handle consumers that go offline for extended periods, and the resulting disk pressure from unacknowledged changes.
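Two safety valves are worth knowing about now. On PostgreSQL 13+, max_slot_wal_keep_size caps how much WAL a lagging slot may retain (slots that fall further behind are invalidated rather than filling the disk), and a slot whose consumer is gone for good should simply be dropped. A sketch; note that on managed Supabase this server setting may not be directly changeable:

```sql
-- Cap WAL retained for lagging slots (requires superuser-level access)
ALTER SYSTEM SET max_slot_wal_keep_size = '4GB';
SELECT pg_reload_conf();

-- Permanently remove a slot whose consumer will never return
SELECT pg_drop_replication_slot('my_app_slot');
```

Dropping the slot releases all WAL it was pinning, but also discards any changes the consumer never acknowledged, so it is a deliberate, irreversible step.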
