Supabase Storage is a thin wrapper around a cloud object store, typically AWS S3, though it can also be backed by S3-compatible storage like MinIO. This means you can interact with your Supabase Storage buckets using the AWS SDK, treating them as if they were ordinary S3 buckets.

Let’s see how this works by uploading a file.

import boto3
from botocore.exceptions import NoCredentialsError

# Replace with your actual Supabase project details
SUPABASE_URL = "YOUR_SUPABASE_URL"  # e.g., "https://your-project.supabase.co"

# Supabase exposes its S3-compatible API under the /storage/v1/s3 path.
# For self-hosted MinIO, this would simply be your MinIO endpoint instead.
S3_ENDPOINT = f"{SUPABASE_URL}/storage/v1/s3"
S3_REGION = "us-east-1"  # Use your project's region, shown in the Supabase dashboard.

# Supabase issues dedicated S3 access keys for this endpoint. Generate them in
# the dashboard under your project's Storage settings ("S3 access keys").
# They are separate from your anon and service role API keys, which are JWTs
# used by Supabase's own SDKs rather than S3 credentials.
S3_ACCESS_KEY_ID = "YOUR_S3_ACCESS_KEY_ID"
S3_SECRET_ACCESS_KEY = "YOUR_S3_SECRET_ACCESS_KEY"

# IMPORTANT SECURITY NOTE:
# S3 access keys grant full access to every bucket in your project and bypass
# Row Level Security, so treat them like a service role key: use them only in
# trusted server-side code, never in a browser or mobile app. For user-scoped
# access, use Supabase's official SDK or signed URLs instead.

try:
    s3_client = boto3.client(
        "s3",
        endpoint_url=S3_ENDPOINT,
        aws_access_key_id=S3_ACCESS_KEY_ID,
        aws_secret_access_key=S3_SECRET_ACCESS_KEY,
        region_name=S3_REGION,
        # For S3-compatible storage that doesn't use HTTPS or has self-signed certs,
        # you might need:
        # use_ssl=False,
        # verify=False
    )

    bucket_name = "public" # Or any other bucket you have created in Supabase Storage
    file_path = "my_example_file.txt"
    file_content = "This is the content of my example file."

    response = s3_client.put_object(
        Bucket=bucket_name,
        Key=file_path,
        Body=file_content.encode('utf-8'),
        ContentType="text/plain"
    )

    print(f"File uploaded successfully! ETag: {response['ETag']}")

    # To download:
    # response = s3_client.get_object(Bucket=bucket_name, Key=file_path)
    # file_content_downloaded = response['Body'].read().decode('utf-8')
    # print(f"File content: {file_content_downloaded}")

except NoCredentialsError:
    print("Credentials not found. Check that S3_ACCESS_KEY_ID and S3_SECRET_ACCESS_KEY are set correctly.")
except Exception as e:
    print(f"An error occurred: {e}")

The core idea is that Supabase Storage presents an S3-compatible API. When you configure your AWS SDK client, you point it at your project’s /storage/v1/s3 endpoint and authenticate with S3 access keys generated in the Supabase dashboard. Supabase validates those credentials and proxies the calls to the underlying storage provider (like AWS S3 or MinIO), so the client behaves just as it would against S3 itself. (Supabase’s own SDKs use a different mechanism, authenticating with JWT-based API keys.)

This means you can leverage the full power of the AWS SDK—list buckets, upload files, download files, generate pre-signed URLs, manage permissions (if the underlying S3 provider supports it and Supabase exposes it)—all through the familiar SDK interface, without needing to know the specifics of Supabase’s internal implementation.

The surprising part is that you can often bypass Supabase’s dedicated SDKs entirely for storage operations if you’re comfortable with the AWS SDK and can correctly configure the endpoint and credentials. The boto3 library, for example, is flexible enough to be pointed at any S3-compatible API.

Here’s how it looks internally when you interact with Supabase Storage via the AWS SDK:

  1. SDK Initialization: You create an s3_client instance from boto3. Crucially, you provide endpoint_url pointing to your project’s S3-compatible endpoint (e.g., https://your-project.supabase.co/storage/v1/s3). The aws_access_key_id and aws_secret_access_key are S3 access keys generated in the Supabase dashboard, not your anon or service_role API keys.
  2. API Call: When you call a method like s3_client.put_object(), boto3 constructs an HTTP request.
  3. Request Routing: This HTTP request is sent to the endpoint_url you configured. For Supabase, this endpoint is an API gateway that routes requests to the actual object storage service (e.g., AWS S3, Google Cloud Storage, or a self-hosted MinIO instance).
  4. Authentication & Authorization: Supabase’s storage service validates the request’s SigV4 signature against your S3 access keys. Keys authenticated this way have full access to the project’s buckets; Row Level Security (RLS) policies come into play when you instead authenticate as a specific user, for example through Supabase’s own SDKs or signed URLs.
  5. Storage Provider Interaction: If authentication and authorization pass, Supabase forwards the request (or a transformed version of it) to the underlying object storage service. For example, if your Supabase project uses AWS S3, the request is made directly to your project’s S3 bucket.
  6. Response: The storage provider executes the operation and returns a response. This response is then processed by Supabase and sent back to your boto3 client.

The endpoint_url is the key to making this work. For Supabase’s hosted storage, append /storage/v1/s3 to your project URL. For example, if your Supabase project URL is https://abcdefghij.supabase.co, the storage endpoint is https://abcdefghij.supabase.co/storage/v1/s3. The region_name should match your project’s region (shown in the dashboard); S3-compatible stores that ignore the region will generally accept a common value like us-east-1.
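The derivation rule can be captured in a tiny helper. The project URL below is a placeholder; the function simply appends the /storage/v1/s3 path.

```python
def storage_s3_endpoint(project_url: str) -> str:
    """Build the S3-compatible endpoint for a Supabase project URL."""
    # rstrip guards against a trailing slash in the configured project URL.
    return project_url.rstrip("/") + "/storage/v1/s3"

endpoint = storage_s3_endpoint("https://abcdefghij.supabase.co")
print(endpoint)  # https://abcdefghij.supabase.co/storage/v1/s3
```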

What most people don’t realize is that Supabase’s storage layer is designed to be highly abstract. While their SDKs handle JWT generation and signing for you, the underlying operations are standard S3 API calls. This means if you’re building a tool that needs to interact with multiple object storage providers, you can write your core logic using the AWS SDK and simply change the endpoint_url and credentials to target Supabase, MinIO, or even a direct AWS S3 bucket, all with the same code.

The next thing you’ll likely encounter is managing permissions: S3 access keys bypass RLS entirely, so once you need per-user access control you’ll want to switch to Supabase’s own SDK or signed URLs for those operations.

Want structured learning?

Take the full Supabase course →