Vector’s prometheus_remote_write sink lets you push metrics from anywhere into a Prometheus Remote Write-compatible endpoint, such as VictoriaMetrics, Thanos, or Cortex.
Here’s a Vector config to send Prometheus metrics to a remote_write endpoint:
[sources.my_metrics]
type = "prometheus_scrape"
endpoints = ["http://127.0.0.1:9000/metrics"]
[sinks.my_remote_write]
type = "prometheus_remote_write"
inputs = ["my_metrics"]
endpoint = "http://victoriametrics:8428/api/v1/write"
This setup starts with a prometheus_scrape source, which scrapes Prometheus metrics from http://127.0.0.1:9000/metrics (note that prometheus_exporter is a sink in Vector, not a source). The scraped metrics are then routed to the prometheus_remote_write sink, which dispatches them to the specified endpoint.
The prometheus_remote_write sink is incredibly versatile. It’s not just for Prometheus servers; it’s designed to act as a generic exporter for any Prometheus-compatible time-series database. This means you can pull metrics from applications that expose them in the Prometheus format, or even generate custom metrics within Vector itself, and send them to your chosen monitoring backend.
Internally, the sink serializes Vector’s internal metric representation into the Prometheus Remote Write protobuf format, snappy-compresses the payload as the remote write protocol requires, and sends batches over HTTP to the configured endpoint. The sink handles retries and backpressure automatically, so metrics are reliably delivered even under load or during temporary network issues.
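If the defaults don’t fit your environment, the retry and concurrency behavior can be tuned through the sink’s shared request options. A sketch, with illustrative values (tune them for your backend):

```toml
[sinks.my_remote_write]
type = "prometheus_remote_write"
inputs = ["my_metrics"]
endpoint = "http://victoriametrics:8428/api/v1/write"
# Illustrative values, not recommendations:
request.retry_attempts = 10              # give up after 10 tries
request.retry_initial_backoff_secs = 1   # exponential backoff starts at 1s
request.concurrency = "adaptive"         # let Vector adjust in-flight requests
```

Adaptive concurrency is usually the safest choice, since it backs off automatically when the remote endpoint slows down.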
You can fine-tune its behavior with several options. For instance, batch.max_events caps how many metrics are sent in a single request, request.timeout_secs sets the maximum time to wait for a response from the endpoint, and request.headers lets you add custom HTTP headers, which is often necessary for authentication or multi-tenancy.
[sinks.my_remote_write]
type = "prometheus_remote_write"
inputs = ["my_metrics"]
endpoint = "http://cortex:9009/api/v1/push"
batch.max_events = 10000
request.timeout_secs = 60
request.headers = { "X-Scope-OrgID" = "my-tenant-id" }
This configuration pushes up to 10,000 metrics per batch, waits up to 60 seconds for a response, and includes an X-Scope-OrgID header for multi-tenancy in Cortex.
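For backends that use HTTP basic authentication instead of a tenant header, the sink also accepts an auth block. A minimal sketch (the credentials are placeholders):

```toml
[sinks.my_remote_write]
type = "prometheus_remote_write"
inputs = ["my_metrics"]
endpoint = "https://cortex:9009/api/v1/push"
# Placeholder credentials - supply your own
auth.strategy = "basic"
auth.user = "vector"
auth.password = "s3cr3t"
```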
The real power comes when you combine this sink with Vector’s transformation capabilities. You can use transforms such as remap or filter to enrich, filter, or modify metrics before they are sent to the remote write endpoint. For example, you might add tags based on environment variables or drop high-cardinality metrics you don’t want to store.
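As a sketch of the filtering case, a filter transform can drop metrics by name before they reach the sink. Here the assumption is that Go-runtime metrics (prefixed with go_) are noise you want to discard:

```toml
[transforms.drop_go_metrics]
type = "filter"
inputs = ["my_metrics"]
# Keep every metric except those whose name starts with "go_"
condition = '!starts_with(string!(.name), "go_")'
```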
Consider a scenario where you have metrics from various sources and you want to add a consistent job label to all of them before sending them to your Prometheus backend.
[sources.app_metrics]
type = "prometheus_scrape"
endpoints = ["http://127.0.0.1:9001/metrics"]
[sources.db_metrics]
type = "prometheus_scrape"
endpoints = ["http://127.0.0.1:9002/metrics"]
[transforms.add_job_label]
type = "remap"
inputs = ["app_metrics", "db_metrics"]
source = '''
.tags.job = "my-application"
'''
[sinks.prometheus_out]
type = "prometheus_remote_write"
inputs = ["add_job_label"]
endpoint = "http://prometheus:9090/api/v1/write"
Here, metrics from both app_metrics and db_metrics are first processed by the add_job_label transform, which injects a job = "my-application" tag into each metric’s tag set (in VRL, metric tags live under .tags, not .labels). These enriched metrics are then sent to the prometheus_out sink. Note that a plain Prometheus server only accepts data on /api/v1/write when started with the --web.enable-remote-write-receiver flag.
What many users overlook is that the prometheus_remote_write sink doesn’t just accept metrics scraped from Prometheus-style endpoints. You can feed it any metric data that Vector can parse or generate, as long as it ends up as a Vector metric event. This includes metrics coming from other Vector sources like statsd, or counters and histograms derived from logs via the log_to_metric transform. Vector’s internal metric representation is rich enough to map cleanly onto Prometheus labels and values: the sink takes each event’s name, tags, and value and translates them to the protobuf format.
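As a sketch of that flexibility, metrics arriving over StatsD can be forwarded to the same backend with no transform at all (the listen address and endpoint are illustrative):

```toml
[sources.my_statsd]
type = "statsd"
address = "0.0.0.0:8125"
mode = "udp"
[sinks.statsd_to_prometheus]
type = "prometheus_remote_write"
inputs = ["my_statsd"]
endpoint = "http://victoriametrics:8428/api/v1/write"
```

Vector converts the incoming StatsD counters, gauges, and timers into its internal metric events, so the remote write sink treats them exactly like scraped Prometheus metrics.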
The next logical step after reliably pushing metrics is to explore how to ingest logs alongside your metrics using Vector’s other sinks, such as loki.