Tekton’s structured logging means your pipeline events are JSON, not just plain text, making them machine-readable and queryable.
Let’s see Tekton logging in action. Imagine a simple pipeline that clones a Git repository and then runs an echo command.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: logging-demo
spec:
  tasks:
    - name: clone-repo
      taskSpec:
        steps:
          - name: clone
            image: alpine/git
            script: |
              echo "Cloning repository..."
              git clone https://github.com/tektoncd/pipeline.git /workspace/source
              echo "Clone complete."
    - name: run-command
      taskSpec:
        params:
          - name: message
            type: string
            default: "Hello from Tekton!"
        steps:
          - name: echo-message
            image: alpine
            script: |
              echo "$(params.message)"
When this pipeline runs, each step generates log entries. Instead of scattered echo statements, you get structured JSON. Here’s a snippet of what a clone-repo step’s logs might look like:
{
  "level": "info",
  "ts": "2023-10-27T10:30:00Z",
  "caller": "steps/steps.go:165",
  "msg": "Cloning repository...",
  "pod_name": "logging-demo-clone-repo-abcdef-xyz-123456",
  "task_run_name": "logging-demo-clone-repo-abcdef",
  "pipeline_run_name": "logging-demo-run-ghijkl",
  "namespace": "default",
  "step_name": "clone",
  "container_name": "step-clone"
}
{
  "level": "info",
  "ts": "2023-10-27T10:30:05Z",
  "caller": "steps/steps.go:165",
  "msg": "Clone complete.",
  "pod_name": "logging-demo-clone-repo-abcdef-xyz-123456",
  "task_run_name": "logging-demo-clone-repo-abcdef",
  "pipeline_run_name": "logging-demo-run-ghijkl",
  "namespace": "default",
  "step_name": "clone",
  "container_name": "step-clone"
}
Notice the fields: level, ts (timestamp), msg (the actual log message), and crucially, context like pod_name, task_run_name, pipeline_run_name, step_name, and container_name. This context is gold for observability.
The problem Tekton’s structured logging solves is sifting through the output of complex, multi-step pipelines. In traditional CI/CD, logs are often a single monolithic stream of text: finding specific events, correlating logs across stages, or filtering by a particular task run becomes a manual, error-prone process. With JSON, you can use standard log aggregation tools (such as Elasticsearch, Splunk, or Loki) or even simple command-line tools like jq to query and analyze these logs effectively.
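To make the jq approach concrete, here is a minimal sketch. It builds a small sample file of JSON log lines in the shape shown above (the file name pipeline-logs.jsonl is hypothetical; substitute whatever your aggregator or kubectl export produces) and filters it down to a single pipeline run:

```shell
# Create a small sample of JSON log lines, one object per line, as a log
# aggregator might store them. The run names mirror the snippet above.
cat > pipeline-logs.jsonl <<'EOF'
{"level":"info","ts":"2023-10-27T10:30:00Z","msg":"Cloning repository...","pipeline_run_name":"logging-demo-run-ghijkl","step_name":"clone"}
{"level":"info","ts":"2023-10-27T10:30:05Z","msg":"Clone complete.","pipeline_run_name":"logging-demo-run-ghijkl","step_name":"clone"}
{"level":"info","ts":"2023-10-27T10:31:00Z","msg":"Unrelated run","pipeline_run_name":"other-run","step_name":"echo-message"}
EOF

# Keep only lines from one pipeline run; print timestamp, step, and message.
jq -r 'select(.pipeline_run_name == "logging-demo-run-ghijkl")
       | "\(.ts) [\(.step_name)] \(.msg)"' pipeline-logs.jsonl
```

The same `select` filter works on any of the context fields (task_run_name, namespace, step_name), which is exactly what makes the standardized keys valuable.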
Here’s how it works internally: the kubelet (through the container runtime) captures stdout and stderr from each container that Tekton runs as a step, and Tekton’s controllers process these streams. When a Tekton component logs, it isn’t just writing a string; it formats the message into a JSON object with predefined keys. Those keys are standardized across Tekton components, which keeps field names consistent and queryable.
The primary lever you control is what gets logged by your pipeline steps. While Tekton provides the structured format, the content within the msg field is up to your script in each step. You can add echo statements in your shell scripts, and they will be captured and formatted. For more advanced scenarios, you might integrate specific libraries in custom-built container images that emit JSON directly to stdout or stderr, leveraging Tekton’s structured output.
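One lightweight way to do this without a custom image is a small helper function inside the step script itself. This is a sketch, not a Tekton API: log_json is a hypothetical name, and the fields emitted are only the ones a generic JSON log consumer would expect.

```shell
# Hypothetical helper for a step script: emit a JSON log line to stdout so
# a log aggregator can index these fields alongside Tekton's own context.
log_json() {
  level="$1"
  msg="$2"
  # Note: msg is interpolated verbatim; quotes in msg would need escaping.
  printf '{"level":"%s","ts":"%s","msg":"%s"}\n' \
    "$level" "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$msg"
}

log_json info "Starting build step"
```

Because the line is already valid JSON on stdout, it flows through the same capture path as any other step output.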
You can see the effect of structured logging most clearly when you query it. For instance, if you have a log aggregation system, you could filter for all logs from a specific pipeline_run_name like this (using a hypothetical query language):
pipeline_run_name="my-specific-pipeline-run-12345"
Or, to find all errors within a particular task run:
task_run_name="my-task-run-abcde" AND level="error"
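If your logs are exported as JSON lines rather than sitting in an aggregator, jq expresses the same query. The file name logs.jsonl and the sample entries below are placeholders for illustration:

```shell
# Sample exported log lines (placeholder data, one JSON object per line).
cat > logs.jsonl <<'EOF'
{"level":"error","msg":"clone failed","task_run_name":"my-task-run-abcde"}
{"level":"info","msg":"retrying","task_run_name":"my-task-run-abcde"}
{"level":"error","msg":"other run","task_run_name":"another-task-run"}
EOF

# jq equivalent of: task_run_name="my-task-run-abcde" AND level="error"
jq -r 'select(.task_run_name == "my-task-run-abcde" and .level == "error")
       | .msg' logs.jsonl
```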
This ability to filter and aggregate based on specific fields is what transforms raw output into actionable insights.
The most surprising thing about Tekton’s structured logging is how it allows you to debug distributed systems by treating pipeline execution itself as a distributed system with observable events. You’re not just looking at the output of a single script; you’re observing the state transitions and interactions of multiple containers and processes, all tagged with rich context.
When you’re debugging a failing pipeline, you often want to see the exact command that was executed by a step, not just the echo output. Tekton doesn’t automatically log the full command line of every shell command run within a step’s script. You’ll need to explicitly add set -x to your shell scripts or echo the commands yourself to see them in the logs if you need that level of detail during debugging.
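A minimal illustration of what `set -x` adds: the shell echoes each command to stderr (prefixed with "+") before executing it, so the exact invocation appears in the step's log stream next to its output. In a Tekton step you would put these lines at the top of the `script:` block.

```shell
# Enable command tracing: each command is printed before it runs.
set -x
echo "Cloning repository..."
# (a real step would run git clone here; traced the same way)
set +x
echo "Tracing disabled again."
```

Running this produces both the traced command lines (on stderr) and the normal echo output (on stdout), all captured into the step's logs.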