OpenShift Pipelines, built on Tekton, isn’t just a CI/CD tool; it’s a Kubernetes-native framework that uses Custom Resources to define and execute build and deployment pipelines.

Let’s see it in action. Imagine you have a simple Node.js app. Here’s a Pipeline definition that clones the repo, runs npm install, and then npm test:

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: nodejs-build-test
spec:
  workspaces:
    - name: shared-data   # declared here, bound to real storage by the PipelineRun
  tasks:
    - name: clone-and-install
      taskRef:
        name: git-clone
      workspaces:
        - name: output            # git-clone writes the repo into this workspace
          workspace: shared-data
      params:
        - name: url
          value: "https://github.com/my-org/my-nodejs-app.git"
        - name: subdirectory
          value: "app"
    - name: npm-install
      runAfter:
        - clone-and-install
      taskRef:
        name: npm-install
      workspaces:
        - name: source
          workspace: shared-data
      params:
        - name: directory
          value: "app"
    - name: npm-test
      runAfter:
        - npm-install
      taskRef:
        name: npm-test
      workspaces:
        - name: source
          workspace: shared-data
      params:
        - name: directory
          value: "app"

This Pipeline is a blueprint. To actually run it, you create a PipelineRun. The PipelineRun references the Pipeline and supplies the runtime details it needs, such as the storage that backs its workspaces (and, when the Pipeline declares parameters, their concrete values).

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: nodejs-build-test-run-1
spec:
  pipelineRef:
    name: nodejs-build-test
  workspaces:
    - name: shared-data
      persistentVolumeClaim:
        claimName: my-pipeline-pvc

The magic here is in the Custom Resources: Pipeline, Task, PipelineRun, and TaskRun. Tekton’s controllers watch for these resources and orchestrate the execution of containers (the Steps within Tasks) in pods on your OpenShift cluster. The workspaces in the PipelineRun are crucial: they provide shared storage (such as PersistentVolumeClaims) that lets tasks pass artifacts between them. For example, the clone-and-install task clones the code into the workspace, and the npm-install task then reads from that same workspace location.
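Note that the claimName in the PipelineRun assumes the PVC already exists in the namespace. A minimal claim might look like this (the storage size and access mode are illustrative, not prescriptive):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pipeline-pvc
spec:
  accessModes:
    - ReadWriteOnce   # sufficient when all task pods land on one node; use ReadWriteMany otherwise
  resources:
    requests:
      storage: 1Gi    # illustrative size; size it to your build artifacts
```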

The core problem Tekton solves is providing a declarative, GitOps-friendly way to define complex CI/CD workflows directly within Kubernetes. Instead of managing Jenkinsfiles or external CI servers, you manage Kubernetes objects. This means your pipelines are versionable, reviewable, and deployable like any other application component. The Tasks themselves are reusable building blocks. You can define a generic npm-install task once and use it across many different Pipelines. This promotes a "pipeline-as-code" philosophy where reusable components are shared and maintained.

The runAfter field in the Pipeline definition is key to defining dependencies. It ensures that tasks execute in the correct order. Without it, Tekton would try to run tasks in parallel if possible, which is often not what you want for a sequential build process. The taskRef points to a defined Task, which is a collection of Steps that perform a specific unit of work. These Steps are simply container images with commands to execute.
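As a sketch of what such a Task might look like, here is a possible npm-install Task. The container image and workspace name are assumptions, since the Task definitions aren’t shown in the Pipeline above:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: npm-install
spec:
  params:
    - name: directory
      type: string
  workspaces:
    - name: source    # the shared workspace holding the cloned repo
  steps:
    - name: install
      image: registry.access.redhat.com/ubi8/nodejs-18   # illustrative image
      workingDir: $(workspaces.source.path)/$(params.directory)
      script: |
        npm ci
```

Because the Task only declares a workspace name, any Pipeline can bind it to whatever storage it likes; that is what makes Tasks reusable building blocks.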

OpenShift Pipelines, powered by Tekton, integrates tightly with OpenShift’s security context constraints (SCCs) and service accounts. This means your pipelines run with the least privilege necessary, enhancing security. You can define specific service accounts for PipelineRuns to grant granular access to OpenShift resources.
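For example, a PipelineRun can name a dedicated service account (the account name here is hypothetical; you would create it and grant it only the RBAC roles this pipeline needs):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: nodejs-build-test-run-2
spec:
  pipelineRef:
    name: nodejs-build-test
  serviceAccountName: pipeline-builder   # hypothetical SA scoped to this pipeline's needs
  workspaces:
    - name: shared-data
      persistentVolumeClaim:
        claimName: my-pipeline-pvc
```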

One common point of confusion is how data is passed between tasks. While params are for input values, actual file artifacts are passed via workspaces. A task writes to a path within a named workspace, and a subsequent task can read from the same path in that workspace. This is how, for instance, build artifacts from one task can be consumed by a deployment task.
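Inside a Task, steps address the workspace through the $(workspaces.<name>.path) variable. A sketch of a producing step, with an illustrative image and paths:

```yaml
# Fragment of a build Task: its step writes artifacts into the shared workspace
steps:
  - name: build
    image: registry.access.redhat.com/ubi8/nodejs-18   # illustrative image
    workingDir: $(workspaces.source.path)/app
    script: |
      npm run build
# A later Task bound to the same workspace reads the same location,
# e.g. $(workspaces.source.path)/app/dist, in its own steps.
```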

The PipelineRun also supports a podTemplate for advanced customization of the pods that execute the tasks, allowing you to specify node selectors, tolerations, affinity rules, or a custom security context for your build pods. (Per-step resource requests and limits, by contrast, belong in the Task definition itself.)
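A sketch of a PipelineRun pinning its build pods to dedicated nodes; the node label and taint key are hypothetical:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: nodejs-build-test-run-3
spec:
  pipelineRef:
    name: nodejs-build-test
  podTemplate:
    nodeSelector:
      node-role.kubernetes.io/builds: ""   # hypothetical label on dedicated build nodes
    tolerations:
      - key: "builds-only"                 # hypothetical taint on those nodes
        operator: "Exists"
        effect: "NoSchedule"
```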

The next concept you’ll likely explore is how to trigger PipelineRuns automatically, perhaps from Git commits or image pushes, using Tekton Triggers.
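As a preview, the entry point for Tekton Triggers is an EventListener, which wires incoming webhooks to a TriggerBinding (extracting fields from the payload) and a TriggerTemplate (stamping out a PipelineRun). The binding and template names below are placeholders:

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: nodejs-listener
spec:
  serviceAccountName: pipeline            # default SA installed by OpenShift Pipelines
  triggers:
    - name: on-git-push
      bindings:
        - ref: github-push-binding        # placeholder binding
      template:
        ref: nodejs-build-test-template   # placeholder template
```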
