SQLite’s ability to handle multiple processes accessing the same database file simultaneously is surprisingly robust, but it’s not magic; it relies on a sophisticated locking mechanism that can be a real gotcha if you don’t understand it.

Let’s see it in action. Imagine two Python scripts, writer.py and reader.py, both trying to interact with my_database.db.

# writer.py
import sqlite3
import time

# isolation_level=None puts the connection in autocommit mode, so the
# explicit BEGIN/COMMIT statements below control the transaction
conn = sqlite3.connect('my_database.db', isolation_level=None)
cursor = conn.cursor()
cursor.execute("CREATE TABLE IF NOT EXISTS logs (message TEXT)")

try:
    print("Writer: Attempting to acquire lock...")
    cursor.execute("BEGIN EXCLUSIVE TRANSACTION;") # Acquire exclusive lock
    print("Writer: Lock acquired. Writing...")
    cursor.execute("INSERT INTO logs (message) VALUES (?)", ('Hello from writer!',))
    time.sleep(5) # Hold the lock for a bit
    conn.commit()
    print("Writer: Wrote and committed. Lock released.")
except sqlite3.OperationalError as e:
    print(f"Writer: Error - {e}")
finally:
    conn.close()
    print("Writer: Connection closed.")

# reader.py
import sqlite3
import time

conn = sqlite3.connect('my_database.db', isolation_level=None) # autocommit; explicit BEGIN below
cursor = conn.cursor()

try:
    print("Reader: Attempting to read...")
    # SQLite's isolation level is serializable; an explicit transaction
    # keeps every statement inside it on a single consistent view.
    cursor.execute("BEGIN TRANSACTION;")
    cursor.execute("SELECT COUNT(*) FROM logs;")
    count = cursor.fetchone()[0]
    print(f"Reader: Current log count: {count}")
    conn.commit() # Release read lock
    print("Reader: Read and committed.")
except sqlite3.OperationalError as e:
    print(f"Reader: Error - {e}")
finally:
    conn.close()
    print("Reader: Connection closed.")

If you run writer.py first, it acquires an exclusive lock, inserts a row, holds the lock for 5 seconds, and then commits. While the lock is held, if you run reader.py, it will wait (up to its busy timeout) until the writer releases the lock. If you try to run another writer script, it will also wait. This waiting is SQLite’s way of preventing data corruption.

SQLite uses a file-level locking mechanism (described here for the default rollback-journal mode). When a process needs to commit a write, it must ultimately acquire an "exclusive" lock on the database file (my_database.db). This lock prevents any other process from reading or writing until it’s released. When a process needs to read, it acquires a "shared" lock. Multiple processes can hold shared locks simultaneously, allowing concurrent reads. However, a shared lock cannot be acquired while an exclusive lock is held, and an exclusive lock cannot be acquired while any shared locks are held.

The core problem SQLite solves here is maintaining data integrity across concurrent operations. Without a locking strategy, if two processes tried to write to the same table at the exact same moment, the database file could end up in an inconsistent, corrupted state. SQLite’s locking ensures that only one writer modifies the data at a time, and readers either see a consistent snapshot or wait for one.

SQLite’s transaction types (DEFERRED, IMMEDIATE, and EXCLUSIVE) map directly to how locks are acquired; regardless of type, the isolation level between connections is serializable. BEGIN EXCLUSIVE TRANSACTION; is the most restrictive: it acquires the exclusive lock immediately and holds it until commit or rollback. BEGIN IMMEDIATE TRANSACTION; acquires a RESERVED lock immediately, guaranteeing that no other connection can write, while still allowing concurrent readers; if the lock can’t be obtained, it fails with SQLITE_BUSY (subject to the busy timeout). A plain BEGIN TRANSACTION; is DEFERRED: it takes no locks at all until the first statement that actually reads or writes the database.
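The difference between IMMEDIATE and EXCLUSIVE is easy to demonstrate. This sketch (same assumptions as before: default rollback-journal mode, timeout=0 to fail fast, throwaway file and table) shows that an IMMEDIATE transaction blocks other writers but not readers:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
init = sqlite3.connect(path)
init.execute("CREATE TABLE logs (message TEXT)")
init.commit()
init.close()

a = sqlite3.connect(path, isolation_level=None, timeout=0)
b = sqlite3.connect(path, isolation_level=None, timeout=0)

a.execute("BEGIN IMMEDIATE")  # RESERVED lock: claims write access now
count = b.execute("SELECT COUNT(*) FROM logs").fetchone()[0]  # reads still allowed
print(f"Concurrent read succeeded: {count} rows")

writer_blocked = False
try:
    b.execute("BEGIN IMMEDIATE")  # a second would-be writer is refused
except sqlite3.OperationalError:
    writer_blocked = True
print(f"Second writer blocked: {writer_blocked}")
a.commit()
```

This is why BEGIN IMMEDIATE is a popular choice for write transactions: it claims the write slot up front, failing early instead of discovering the conflict at COMMIT time, without shutting out readers the way BEGIN EXCLUSIVE does.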

PRAGMA busy_timeout is crucial for managing the waiting. At the SQLite level the default timeout is zero, so a process that can’t get a lock fails immediately with OperationalError: database is locked. (Python’s sqlite3.connect() softens this by setting a 5-second busy timeout by default, via its timeout parameter.) With PRAGMA busy_timeout = 5000; (in milliseconds), the process retries for up to 5 seconds before giving up. This is often sufficient for transient lock contention.

Here’s how you’d set the busy timeout in your Python scripts:

# writer.py (and reader.py)
import sqlite3

conn = sqlite3.connect('my_database.db')
conn.execute("PRAGMA busy_timeout = 5000;") # Wait up to 5 seconds
cursor = conn.cursor()
# ... rest of your transaction logic

When reader.py runs while writer.py holds an exclusive lock, reader.py executes BEGIN TRANSACTION; and then its SELECT. Because the transaction is deferred, SQLite only tries to acquire the shared lock when the SELECT runs. Since an exclusive lock is held by the writer, the reader waits, up to the busy_timeout. Once the writer commits and releases its exclusive lock, the reader can acquire its shared lock, read the data, and commit.

It’s important to note that SQLite’s locking is file-based. This means that if you have multiple processes on different machines trying to access the same network file share where the SQLite database resides, you can run into serious issues. Network file systems often don’t implement the necessary file locking primitives correctly or reliably for SQLite. For multi-process access, especially in a networked environment, a client-server database like PostgreSQL or MySQL is a much better fit.

The one detail that trips many people up is that conn.commit() (or rollback) is what releases the locks. If you have a long-running read operation that needs to see a consistent state throughout its execution, you must keep the transaction open (i.e., don’t commit until you’re done reading). Similarly, for writes with BEGIN EXCLUSIVE, the exclusive lock is held from BEGIN until COMMIT or ROLLBACK.
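Here’s a sketch of that snapshot behavior (again assuming rollback-journal mode, with illustrative paths and timeout=0): the reader’s open transaction holds a shared lock from its first SELECT onward, so the writer’s COMMIT, which needs the exclusive lock, is refused until the reader finishes:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
init = sqlite3.connect(path)
init.execute("CREATE TABLE logs (message TEXT)")
init.execute("INSERT INTO logs VALUES ('first')")
init.commit()
init.close()

reader = sqlite3.connect(path, isolation_level=None, timeout=0)
writer = sqlite3.connect(path, isolation_level=None, timeout=0)

reader.execute("BEGIN")  # deferred: the shared lock is taken at the first SELECT
before = reader.execute("SELECT COUNT(*) FROM logs").fetchone()[0]

writer.execute("BEGIN IMMEDIATE")
writer.execute("INSERT INTO logs VALUES ('second')")
commit_blocked = False
try:
    writer.commit()  # needs the exclusive lock -> refused while the reader holds shared
except sqlite3.OperationalError:
    commit_blocked = True

after = reader.execute("SELECT COUNT(*) FROM logs").fetchone()[0]
print(f"Reader saw {before} row(s), still sees {after}; writer blocked: {commit_blocked}")
reader.commit()  # shared lock released; the writer's commit can now succeed
```

The reader’s count never changes while its transaction stays open, and the writer’s insert only becomes visible after both sides finish.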

If you’re seeing database is locked errors, the first thing to check is whether you’ve set busy_timeout. If you have, then you might have genuinely long-running transactions or a rate of contention that exceeds your timeout. PRAGMA lock_status; can sometimes provide insight, though it’s only available in builds compiled with SQLITE_DEBUG and is more for debugging than operational use.
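When contention is real, a common defensive pattern is a small bounded retry wrapper around the write. The helper below is purely illustrative (the function name, attempt count, and backoff are arbitrary choices, not a standard API):

```python
import sqlite3
import time

def execute_with_retry(conn, sql, params=(), attempts=5, backoff=0.1):
    """Run one write statement, retrying on 'database is locked'."""
    for attempt in range(attempts):
        try:
            with conn:  # commits on success, rolls back on exception
                conn.execute(sql, params)
            return
        except sqlite3.OperationalError as e:
            if "locked" not in str(e) or attempt == attempts - 1:
                raise  # not a lock error, or out of retries
            time.sleep(backoff * (attempt + 1))  # back off before retrying
```

Combined with a sensible busy_timeout, something along these lines keeps occasional lock collisions from surfacing as user-visible errors, while still failing loudly when contention is persistent.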

The next hurdle you’ll likely encounter is managing the lifecycle of database connections and ensuring they are properly closed, especially in the face of errors, to avoid leaving locks held indefinitely.
