You can send the same event to multiple destinations simultaneously without duplicating the event in your pipeline.

Let’s say you’ve got a single stream of log events coming in, and you want to send each event to both a JSON file and a syslog server. Instead of defining two separate log sources that read the same file (which would ingest every event twice), you can use a route transform to fan it out.

Here’s a simple example:

[[sources]]
  id = "my_log_source"
  type = "log"
  paths = ["/var/log/myapp.log"]

[[transforms]]
  id = "split_to_json_and_syslog"
  type = "route"
  inputs = ["my_log_source"]
  # This is the core of the routing: a list of conditions and their corresponding sinks.
  # If an event matches a condition, it's sent to that sink.
  # If an event matches multiple conditions, it's sent to ALL matching sinks.
  routes = [
    { condition = "true", sinks = ["my_json_sink"] },
    { condition = "true", sinks = ["my_syslog_sink"] }
  ]

[[sinks]]
  id = "my_json_sink"
  type = "file"
  path = "/var/log/processed/myapp.json"
  encoding = "json"

[[sinks]]
  id = "my_syslog_sink"
  type = "syslog"
  address = "192.168.1.100:514"
  protocol = "tcp"

In this setup, my_log_source produces events. The split_to_json_and_syslog transform takes each event and, because both conditions are always true, delivers it to both my_json_sink and my_syslog_sink. The source event is not duplicated in the pipeline; it’s simply fanned out at the transform.
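Because each route entry carries a list of sinks, the same fan-out can also be written as a single route entry. A sketch, reusing the sink ids from the example above:

```toml
  routes = [
    # One condition, two sinks: each matching event is delivered to both.
    { condition = "true", sinks = ["my_json_sink", "my_syslog_sink"] }
  ]
```

Whether you prefer one entry or two is mostly a readability choice; the delivery behavior is the same.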

The power here lies in the condition field. You can use Squirrelly templating to make sophisticated routing decisions: for instance, sending only error-level logs to a dedicated error-monitoring sink, while sending info- and debug-level logs to a general-purpose file.
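A sketch of that idea — the level field name and both sink ids here are assumptions for illustration, not part of the example above:

```toml
[[transforms]]
  id = "level_router"
  type = "route"
  inputs = ["my_log_source"]
  routes = [
    # Only error-level events reach the monitoring sink.
    { condition = '{{ .Message.level }} == "error"', sinks = ["error_monitoring_sink"] },
    # "true" matches every event, so the general file receives
    # all levels, errors included.
    { condition = "true", sinks = ["general_file_sink"] }
  ]
```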

Consider this scenario: you have logs from multiple applications, and you want to route app1 logs to one place, app2 logs to another, and anything else to a general archive.

[[sources]]
  id = "all_app_logs"
  type = "journald" # Example: reading from systemd journal

[[transforms]]
  id = "app_router"
  type = "route"
  inputs = ["all_app_logs"]
  routes = [
    { condition = '{{ .Message.APP_NAME }} == "app1"', sinks = ["app1_sink"] },
    { condition = '{{ .Message.APP_NAME }} == "app2"', sinks = ["app2_sink"] },
    { condition = "true", sinks = ["general_archive_sink"] } # Catch-all
  ]

[[sinks]]
  id = "app1_sink"
  type = "file"
  path = "/var/log/app1/output.log"

[[sinks]]
  id = "app2_sink"
  type = "file"
  path = "/var/log/app2/output.log"

[[sinks]]
  id = "general_archive_sink"
  type = "file"
  path = "/var/log/archive/all_other_app_logs.log"

Here, each event from all_app_logs is evaluated against every route independently. If APP_NAME is "app1", it goes to app1_sink; if it’s "app2", it goes to app2_sink. Note, though, that the final condition = "true" route matches every event, so general_archive_sink receives a copy of everything, including the app1 and app2 logs. If you want the archive to hold only events that matched neither app, replace the catch-all with a condition that explicitly excludes app1 and app2. Crucially, an event only goes to the sinks whose conditions it satisfies; nothing is routed anywhere by default.

When you define multiple routes entries, the transform evaluates each condition independently and sends the event to the sinks of every route whose condition evaluates to true. To fan a single event out to several destinations, you can either list all of them in one route’s sinks list, or use separate route entries whose conditions the event satisfies. For example, to send an event to both sink_A and sink_B whenever field == "value_a", you can write a single route with condition '{{ .Message.field }} == "value_a"' and sinks = ["sink_A", "sink_B"], or two entries with that same condition, one per sink. Either way, the route transform doesn’t fan out implicitly; you define the fan-out by how you structure the routes list.
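Concretely, both of the following fragments (the field name and sink ids are illustrative) deliver a matching event to both sinks:

```toml
# Option 1: one route entry listing both sinks.
routes = [
  { condition = '{{ .Message.field }} == "value_a"', sinks = ["sink_A", "sink_B"] }
]

# Option 2: two route entries sharing the same condition.
# (An alternative fragment; a single transform would contain
# only one routes key, not both.)
routes = [
  { condition = '{{ .Message.field }} == "value_a"', sinks = ["sink_A"] },
  { condition = '{{ .Message.field }} == "value_a"', sinks = ["sink_B"] }
]
```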

The condition expression is evaluated against the entire event object. For structured data, you’ll typically access fields using dot notation, like {{ .Message.fieldName }}. If your data is unstructured (e.g., plain-text logs), you might use functions provided by the templating engine to parse or search the raw string. A literal "true" condition matches every event, which makes it useful for an archive-everything route; for a genuine "anything else" route, write a condition that excludes the cases handled by the other routes.
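A couple of condition shapes, sketched with hypothetical field and sink names (nested dot-notation access is an assumption about the condition language):

```toml
routes = [
  # Nested structured field via dot notation.
  { condition = '{{ .Message.http.status }} == "500"', sinks = ["error_sink"] },
  # Literal "true": matches every event (archive-everything pattern).
  { condition = "true", sinks = ["archive_sink"] }
]
```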

This routing capability is extremely useful for implementing complex observability strategies. You might send all logs to a cheap, high-volume storage sink for long-term retention, while selectively sending critical error and alert events to a more expensive, real-time alerting system. It allows you to decouple the source of your data from its ultimate destinations, enabling flexible and efficient data flow management.
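A sketch of that tiering, with hypothetical sink ids and a hypothetical severity field:

```toml
[[transforms]]
  id = "tiered_router"
  type = "route"
  inputs = ["all_app_logs"]
  routes = [
    # Every event goes to cheap long-term storage...
    { condition = "true", sinks = ["cold_storage_sink"] },
    # ...and critical events additionally go to real-time alerting.
    { condition = '{{ .Message.severity }} == "critical"', sinks = ["alerting_sink"] }
  ]
```

Because each route is evaluated independently, a critical event lands in both sinks without being duplicated at the source.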

The route transform itself does not perform any modification on the event data; it solely dictates where the event goes. If you need to change the event before routing it, you would use a transform component (like modify, template, etc.) before the route transform in your pipeline. The output of that transform would then be the input to your route transform.
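For example — the modify transform’s set key and the field it adds are assumptions for illustration:

```toml
[[transforms]]
  id = "tag_environment"
  type = "modify"
  inputs = ["my_log_source"]
  # Hypothetical setting: add a field to every event.
  set = { environment = "production" }

[[transforms]]
  id = "env_router"
  type = "route"
  inputs = ["tag_environment"]  # consumes the modified events
  routes = [
    { condition = '{{ .Message.environment }} == "production"', sinks = ["prod_sink"] }
  ]
```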

The most important thing to internalize about the route transform is that it never sends an event anywhere you haven’t declared. Each route entry is evaluated independently, and an event reaches a sink only if some route whose condition it satisfies lists that sink. If you want an event to go to sink_X and sink_Y because field == "value", the fan-out must be explicit: either one route entry with the condition '{{ .Message.field }} == "value"' and both sinks in its sinks list, or two entries with that condition, each naming one sink.

This mechanism is fundamental to building pipelines that can intelligently distribute data based on its content without needing to duplicate data sources or rely on external routing logic.

Next, you’ll likely want to explore how to enrich events before routing them to add context that can then be used in your routing conditions.
