Tekton is a Kubernetes-native CI/CD framework that offers a more declarative and Kubernetes-idiomatic way to build and deploy applications compared to traditional CI servers like Jenkins.
Here’s a quick look at Tekton in action. Imagine we want to build a Docker image and push it to a registry. In Tekton, this is broken down into reusable Tasks and Pipelines.
First, a Task to build a Docker image:
```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: buildah-build
spec:
  params:
    - name: IMAGE
      description: The name and tag of the image to build
      type: string
  steps:
    - name: build
      image: quay.io/buildah/stable
      script: |
        buildah bud -t $(params.IMAGE) .
        buildah push $(params.IMAGE) docker://docker.io/myregistry/$(params.IMAGE)
```
This Task uses buildah to build an image from the current directory and push it to docker.io/myregistry/$(params.IMAGE).
Now, a Pipeline that uses this Task:
```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: docker-build-pipeline
spec:
  tasks:
    - name: build-and-push
      taskRef:
        name: buildah-build
      params:
        - name: IMAGE
          value: "my-app:latest"
```
This Pipeline defines a single task, build-and-push, which references our buildah-build Task and passes the IMAGE parameter.
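Hardcoding the image name works for a demo, but in practice you would usually expose it as a pipeline-level parameter so each run can override it. A sketch of that variant (the `image-name` parameter is illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: docker-build-pipeline
spec:
  params:
    - name: image-name        # pipeline-level parameter, settable per run
      type: string
      default: "my-app:latest"
  tasks:
    - name: build-and-push
      taskRef:
        name: buildah-build
      params:
        - name: IMAGE
          value: "$(params.image-name)"   # forwarded to the Task's IMAGE param
```

A PipelineRun can then supply `params` in its spec to build a different image without touching the Pipeline definition.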
To run this, you’d create a PipelineRun:
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: docker-build-run-1
spec:
  pipelineRef:
    name: docker-build-pipeline
```
When this PipelineRun is created, Tekton’s controllers spin up Kubernetes Pods to execute the steps defined in the Task, respecting Kubernetes scheduling and resource management.
The core problem Tekton solves is the monolithic, often imperative nature of traditional CI servers like Jenkins. Jenkins, with its plugin-driven architecture and job-based execution, can become a complex, difficult-to-manage system, especially at scale. Its configuration is often stored in XML or managed via imperative scripts within jobs. This makes versioning, testing, and replicating Jenkins configurations challenging.
Tekton, by contrast, treats CI/CD as code, leveraging Kubernetes Custom Resource Definitions (CRDs). Tasks, Pipelines, TaskRuns, and PipelineRuns are all Kubernetes objects. This means you can store them in Git, apply them with kubectl, and manage them like any other Kubernetes resource. This declarative approach provides a clear separation of concerns: Tasks are reusable, granular units of work, and Pipelines orchestrate these Tasks into a complete workflow.
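In practice that workflow looks like ordinary Kubernetes resource management. A sketch, assuming the manifests above were saved to illustrative filenames:

```shell
# Store the definitions in Git alongside your application code,
# then apply them like any other Kubernetes manifests:
kubectl apply -f task.yaml -f pipeline.yaml

# Kick off a run and inspect it with standard tooling:
kubectl create -f pipelinerun.yaml
kubectl get pipelineruns
kubectl describe pipelinerun docker-build-run-1
```

Because every run is itself an object with a recorded status, auditing and replaying CI/CD history reduces to querying the cluster (or a Git log), rather than digging through a CI server's internal database.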
Internally, Tekton relies on a set of controllers running in the cluster. The pipeline controller watches PipelineRun objects and creates TaskRun objects according to the Pipeline definition; the TaskRun reconciler in turn creates one Pod per TaskRun, with each step executing as a container in that Pod. Common functionality such as cloning a repository or building an image typically comes from catalog Tasks and their step images (e.g. git-init, buildah, kaniko); Tasks can also declare sidecar containers for supporting services. Credentials are supplied via Kubernetes Secrets bound to ServiceAccounts, and configuration via ConfigMaps.
A significant advantage is Tekton’s flexibility in defining dependencies. Unlike Jenkins’ often linear or gated job chains, Tekton lets you arrange tasks into complex DAGs (Directed Acyclic Graphs) using runAfter and result references, and `when` expressions let a task execute only if certain parameters or results match. This fine-grained control over workflow execution is crucial for optimizing build times and managing complex deployment strategies.
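As a sketch of both mechanisms (task and parameter names here are hypothetical), a Pipeline spec might fan tests out in parallel, join on the build, and gate deployment behind a parameter:

```yaml
spec:
  params:
    - name: deploy-env
      type: string
  tasks:
    - name: unit-tests
      taskRef:
        name: run-tests          # hypothetical Task
    - name: lint
      taskRef:
        name: run-lint           # hypothetical Task; runs in parallel with unit-tests
    - name: build-image
      runAfter: ["unit-tests", "lint"]   # DAG edge: wait for both to succeed
      taskRef:
        name: buildah-build
    - name: deploy-prod
      runAfter: ["build-image"]
      when:
        - input: "$(params.deploy-env)"  # gate: only run when targeting prod
          operator: in
          values: ["prod"]
      taskRef:
        name: deploy             # hypothetical Task
```

If a `when` expression evaluates false, the task is skipped rather than failed, and downstream tasks can still be scheduled according to the DAG.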
The one thing that often surprises people is how Tekton handles data passing between tasks. It’s not just parameter values flowing downward: a Task can declare results, small output values (an image digest, a commit SHA) that a step writes to a file path Tekton provides at $(results.&lt;name&gt;.path). Tekton reads that file back via the step container’s termination message, records the value in the TaskRun status, and substitutes it wherever a later task references $(tasks.&lt;task-name&gt;.results.&lt;name&gt;). Results are intentionally small, since the termination-message mechanism caps their size at a few kilobytes; larger artifacts between tasks are shared through workspaces (volumes mounted into each task’s Pod) instead.
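A minimal sketch of the mechanics (the task name, result name, and digest value are all illustrative):

```yaml
# A Task that declares and emits a result:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-and-digest
spec:
  results:
    - name: digest
      description: Digest of the built image
  steps:
    - name: report
      image: quay.io/buildah/stable
      script: |
        # after building, write the value to the path Tekton injects:
        echo -n "sha256:abc123" > $(results.digest.path)
---
# A downstream pipeline task consumes it by reference; Tekton substitutes
# the recorded value into the parameter before this task's Pod starts:
# - name: deploy
#   taskRef:
#     name: deploy-image
#   params:
#     - name: image-digest
#       value: "$(tasks.build-and-digest.results.digest)"
```

Referencing a result also creates an implicit ordering edge: the consuming task cannot start until the producing task has finished.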
The next hurdle you’ll likely encounter is managing complex workspaces and shared volumes across tasks in a pipeline.