Tempo’s multi-tenancy isn’t about running separate storage infrastructure per tenant; it’s about isolating ingestion, retrieval, and querying behind a tenant ID that every request carries in the X-Scope-OrgID header.

Let’s see it in action. Imagine two teams, "Frontend" and "Backend," each wanting their own view of traces. We’ll enable multi-tenancy in Tempo and use the tenant IDs frontend and backend.

Here’s all it takes in tempo.yaml:

# When this is enabled, every read and write request to Tempo must carry
# an X-Scope-OrgID header identifying the tenant; requests without one
# are rejected. Tempo performs no authentication itself, so in production
# you'd put a gateway or reverse proxy in front that validates credentials
# and sets this header.
multitenancy_enabled: true

With that in place, traces sent with X-Scope-OrgID: frontend and X-Scope-OrgID: backend are ingested, stored, and queried as two fully separate tenants.

Now, every request a service sends to Tempo needs to carry its tenant ID in the X-Scope-OrgID header. Here’s how you might do it with the OpenTelemetry Go SDK:

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	semconv "go.opentelemetry.io/otel/semconv/v1.12.0"
)

func initTracerProvider(ctx context.Context, tenantID string) (*sdktrace.TracerProvider, error) {
	// Replace with your Tempo OTLP endpoint
	endpoint := "localhost:4317"

	// Create an OTLP gRPC client.
	// We're using an insecure connection for simplicity; use TLS in production.
	traceClient := otlptracegrpc.NewClient(
		otlptracegrpc.WithEndpoint(endpoint),
		otlptracegrpc.WithInsecure(),
		// This is the key: every batch this exporter sends carries the
		// tenant ID that Tempo uses to route and isolate it.
		otlptracegrpc.WithHeaders(map[string]string{
			"X-Scope-OrgID": tenantID,
		}),
	)

	exporter, err := otlptrace.New(ctx, traceClient)
	if err != nil {
		return nil, err
	}

	// Describe the service in the resource as usual. Note that the tenant
	// ID doesn't belong here: it travels with the request, not the trace.
	res, err := resource.Merge(
		resource.Default(),
		resource.NewWithAttributes(
			semconv.SchemaURL,
			semconv.ServiceNameKey.String("my-service"),
		),
	)
	if err != nil {
		return nil, err
	}

	// Create a tracer provider with the exporter and resource
	tp := sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(exporter),
		sdktrace.WithResource(res),
	)
	return tp, nil
}

func main() {
	ctx := context.Background()

	// For the frontend team
	frontendTP, err := initTracerProvider(ctx, "frontend")
	if err != nil {
		log.Fatalf("failed to init tracer provider: %v", err)
	}
	defer frontendTP.Shutdown(ctx)
	otel.SetTracerProvider(frontendTP)
	// ... start tracing ...

	// The backend team's services do the same in their own processes:
	// initTracerProvider(ctx, "backend")
}

To query per tenant from Grafana, give the Tempo data source the same header: in the data source settings, add a custom HTTP header named X-Scope-OrgID with the value frontend (typically one data source per tenant). That data source will then only ever see traces that were ingested as the frontend tenant.
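If you provision Grafana from files, the same header can be baked into the data source definition. A sketch using Grafana’s standard httpHeaderName/httpHeaderValue provisioning keys (the URL and names here are illustrative):

```yaml
apiVersion: 1
datasources:
  - name: Tempo (frontend)
    type: tempo
    url: http://tempo:3200
    jsonData:
      # Name of the custom header to attach to every request.
      httpHeaderName1: X-Scope-OrgID
    secureJsonData:
      # Header values are treated as secrets by Grafana provisioning.
      httpHeaderValue1: frontend
```

A second data source with httpHeaderValue1: backend gives the backend team their own isolated view.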

The system works by treating the tenant ID as a namespace. When Tempo receives a batch of spans, it reads the X-Scope-OrgID header and files the resulting blocks under that tenant in the shared backend; during querying, the same header scopes the search to that tenant’s blocks. Tempo doesn’t run separate databases or clusters per tenant; all tenants share one storage backend and one set of components. This is why it’s efficient and doesn’t require complex infrastructure management for each tenant.

The most surprising aspect is that Tempo doesn’t authenticate tenants at all. Enabling multi-tenancy only makes the X-Scope-OrgID header mandatory; Tempo trusts whatever value it receives. A client that sends the wrong tenant ID will happily write into, or read from, another tenant’s view. This means whatever sits in front of Tempo (your instrumentation configuration and, in any serious deployment, an authenticating reverse proxy) is the real gatekeeper for tenant isolation.

Once you have multi-tenancy set up, the next logical step is to explore how Grafana’s authentication and authorization can then be mapped to these Tempo tenant IDs, ensuring users only see the tenants they’re supposed to.
