Supabase migrations can deadlock each other if multiple instances try to apply changes simultaneously, leading to lock_not_available errors and halted deployments.
The core issue is that Supabase uses PostgreSQL's advisory locks to prevent concurrent schema modifications. When a migration starts, it acquires a lock. If another migration attempts to acquire the same lock before the first one releases it (either by completing or failing), the second migration blocks; if the lock is not released within the tool's timeout, the second migration fails.
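You can reproduce this blocking behavior directly in two psql sessions. The lock key 123456 below is arbitrary, standing in for whatever key the migration tool derives:

```sql
-- Session 1: take a session-level advisory lock, as a migration runner would
SELECT pg_advisory_lock(123456);

-- Session 2: pg_advisory_lock(123456) would now block until session 1 releases;
-- the try variant returns false immediately instead of waiting
SELECT pg_try_advisory_lock(123456);  -- false while session 1 holds the lock

-- Session 1: release the lock (it is also released when the session ends)
SELECT pg_advisory_unlock(123456);
```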
Here’s a breakdown of common causes and how to fix them:
Cause 1: Multiple Supabase Projects/Environments Pointing to the Same Database
This is the most frequent culprit. You might have a staging environment and a production environment that, through a misconfiguration, are both attempting to run migrations against the same underlying Supabase project database. This isn’t a Supabase bug, but a fundamental database concurrency problem.
Diagnosis:
Check your Supabase project settings. Navigate to Project Settings -> Database and confirm which database each environment actually connects to. The real clue, though, is in your deployment logs. Look for messages like:
ERROR: canceling statement due to lock timeout
or a lock_not_available failure (SQLSTATE 55P03):
ERROR: could not obtain lock on relation "schema_migrations"
Fix: Ensure each distinct deployment environment (e.g., development, staging, production) has its own dedicated Supabase project with its own dedicated database. Never point multiple independent deployment pipelines to the same Supabase database.
Why it works: Each Supabase project provides an isolated database instance. By separating them, you ensure that migration locks acquired in one environment have no bearing on another.
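A quick way to verify the isolation is to run an identity query from each pipeline and compare the results. If two environments print the same values, they are sharing a database and their migrations can contend for locks:

```sql
-- Run from each environment's connection. system_identifier is unique per
-- PostgreSQL cluster, so matching values mean a shared database.
SELECT current_database(),
       inet_server_addr() AS server_addr,
       system_identifier
FROM pg_control_system();
```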
Cause 2: Manual Migration Runs Triggered on Multiple Instances
Even within a single Supabase project, if you have multiple developers or CI/CD pipelines that can manually trigger supabase migration up commands, and they do so concurrently without coordination, you’ll hit lock conflicts.
Diagnosis:
Review your CI/CD pipeline configurations and any manual deployment scripts. Look for instances where the supabase migration up command might be executed in parallel branches of a workflow or by different users at nearly the same time. Your deployment logs will show the lock_not_available error.
Fix: Implement a strict policy and mechanism for running migrations. This typically means:
- Centralized CI/CD: Designate a single CI/CD pipeline job responsible for running all migrations.
- Sequential Execution: Ensure migration steps are executed sequentially within that job, not in parallel.
- Locking within CI/CD: If your CI/CD platform supports it, use its built-in concurrency controls to ensure only one migration job runs at a time for a given environment. For example, in GitHub Actions, you can use the concurrency key in your workflow.
Example GitHub Actions snippet:
name: Deploy to Production
on:
  push:
    branches:
      - main
jobs:
  deploy:
    runs-on: ubuntu-latest
    concurrency: production-deploy  # only one run of this job at a time
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Setup Supabase CLI
        uses: supabase/setup-cli@v1
        with:
          version: latest  # or pin a specific CLI version
      - name: Run Migrations
        run: supabase migration up --db-url "$SUPABASE_DB_URL"
        env:
          SUPABASE_ACCESS_TOKEN: ${{ secrets.SUPABASE_ACCESS_TOKEN }}
          SUPABASE_DB_URL: ${{ secrets.SUPABASE_DB_URL }}
Why it works: By serializing migration execution through a single, controlled point (like a specific CI/CD job with concurrency limits), you guarantee that only one migration command is active at any given moment, preventing lock contention.
Cause 3: Failed Migrations Not Properly Cleaned Up
If a migration fails mid-execution, the session that acquired the advisory lock may be left hanging, for example when the client process was killed but its database connection was never closed. Session-level advisory locks are released only when the holding session ends, so an orphaned or stuck backend will keep the lock until that backend terminates.
Diagnosis:
Connect to your Supabase database using psql or a GUI tool. Run the following query to see active advisory locks:
SELECT
  l.pid,
  l.locktype,
  l.classid,
  l.objid,
  l.objsubid,
  l.mode,
  l.granted,
  a.state,
  a.query,
  pg_blocking_pids(l.pid) AS blocking_pids
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.locktype = 'advisory';
Rows with granted = true show which backend currently holds each advisory lock; rows with granted = false show who is waiting on it. The query column (from pg_stat_activity) may show the ALTER TABLE or CREATE INDEX statement that's stuck.
Fix:
Identify the pid (process ID) of the process holding the problematic advisory lock. You can then manually release it using pg_terminate_backend():
SELECT pg_terminate_backend(<pid_of_stuck_process>);
After terminating the backend, you’ll need to re-run your migrations.
Why it works: pg_terminate_backend forcefully kills the PostgreSQL process holding the lock, which typically causes PostgreSQL to roll back the current transaction and release any associated locks.
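A gentler first step, before terminating the backend outright, is pg_cancel_backend, which cancels only the current query and lets the session clean up normally:

```sql
-- Try cancelling the stuck query first; returns true if the signal was sent
SELECT pg_cancel_backend(<pid_of_stuck_process>);

-- Only if the backend ignores the cancel, fall back to terminating it
SELECT pg_terminate_backend(<pid_of_stuck_process>);
```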
Cause 4: Long-Running "Migration" Tasks Mistaken for Schema Changes
Sometimes, tasks that look like migrations (e.g., data seeding, complex UPDATE statements that take a long time) are placed directly within migration files. If these tasks are very long-running, they can hold advisory locks for an extended period, blocking subsequent, legitimate schema migrations.
Diagnosis:
Examine the SQL content of your recent migration files. Look for large INSERT, UPDATE, or DELETE statements, especially those without explicit transaction control or batching. Check the pg_locks table as described in Cause 3. The query column is crucial here.
Fix: Refactor long-running data manipulation tasks out of your schema migration files.
- Batching: For large updates/inserts, process data in smaller batches (e.g., 1000 rows at a time) within a loop, committing after each batch.
- Separate Scripts: Create separate scripts for data seeding or complex data transformations. Run these after schema migrations are complete, and ideally, orchestrate them through a dedicated data management tool or a separate, carefully controlled deployment step.
- Transactional Integrity: Ensure data updates are within explicit transactions that are committed promptly.
Why it works: By breaking down long operations into smaller, manageable, and transactional chunks, or by running them separately, you minimize the duration for which advisory locks are held, allowing schema changes to proceed unimpeded.
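The batching idea can be sketched as a PL/pgSQL block. The orders table and its stale/archived statuses are hypothetical, and the in-block COMMIT requires PostgreSQL 11+ with the DO block run outside an explicit transaction:

```sql
DO $$
DECLARE
  rows_updated integer;
BEGIN
  LOOP
    -- process at most 1000 rows per iteration
    UPDATE orders
    SET status = 'archived'
    WHERE id IN (
      SELECT id FROM orders
      WHERE status = 'stale'
      LIMIT 1000
      FOR UPDATE SKIP LOCKED
    );
    GET DIAGNOSTICS rows_updated = ROW_COUNT;
    EXIT WHEN rows_updated = 0;
    COMMIT;  -- release row locks before starting the next batch
  END LOOP;
END $$;
```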
Cause 5: Supabase CLI Version Inconsistencies or Bugs
While less common, older or specific versions of the Supabase CLI might have bugs related to lock handling or might not be perfectly synchronized with the latest PostgreSQL features or best practices.
Diagnosis: Check the version of the Supabase CLI being used across all your development and CI/CD environments.
supabase --version
Compare this to the latest stable release available. Review the Supabase CLI release notes for any known issues related to migrations or locking.
Fix: Update the Supabase CLI to the latest stable version in all environments.
# Example for updating on macOS/Linux
brew upgrade supabase/tap/supabase
# Or, if installed via npm (global installs are not supported; use a dev dependency)
npm install supabase@latest --save-dev
Why it works: Newer versions of the CLI often include bug fixes and performance improvements that can resolve underlying issues with how migrations are managed and how locks are acquired and released.
Cause 6: Network Latency/Intermittent Connectivity During Lock Acquisition
In rare cases, significant network latency or intermittent connectivity between your migration runner and the Supabase database during the advisory lock acquisition phase can make the command appear to hang or time out from the client's perspective, even if the lock is eventually acquired or is held by another process.
Diagnosis:
Monitor network performance between your deployment environment and the Supabase project’s region. Look for high latency or packet loss in your deployment logs or using network diagnostic tools (ping, traceroute).
Fix:
- Optimize Deployment Location: Ensure your CI/CD runners or local development machines are geographically close to your Supabase project’s region to minimize latency.
- Increase Timeouts (with caution): In your migration tool or CI/CD configuration, you might be able to increase the timeout for acquiring locks. However, this is a workaround, not a fix, and can mask underlying issues.
- Stable Network: Ensure a stable and robust network connection for your deployment processes.
Why it works: Reducing network latency and ensuring stable connectivity improves the reliability of the initial lock acquisition handshake, making it less prone to timeouts or race conditions.
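If you do adjust timeouts, lock_timeout is the relevant PostgreSQL setting; a bounded value makes the migration fail fast with a clear lock-timeout error instead of hanging:

```sql
-- Fail after 10 seconds of waiting for any lock instead of waiting indefinitely;
-- SET applies only to the current session, so it is safe inside a migration file
SET lock_timeout = '10s';

-- Optionally also bound total statement runtime
SET statement_timeout = '5min';
```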
After resolving these issues, the next problem you’re likely to encounter is relation "your_table_name" does not exist if your migration order is incorrect or if a previous migration failed to create the necessary table.