Tekton’s result artifacts let you share data between pipeline tasks: one task writes a value that Tekton captures and hands to subsequent tasks.

Let’s see it in action. Imagine a pipeline that first builds a Docker image and then deploys it. The image digest produced by the build task needs to be passed to the deploy task.

Here’s a simplified Pipeline definition:

apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  params:
    - name: IMAGE
      type: string
      description: Fully qualified image reference to build and push
  tasks:
    - name: build-image
      taskRef:
        name: buildah # Or your preferred builder
      params:
        - name: IMAGE
          value: "$(params.IMAGE)"

    - name: deploy-image
      taskRef:
        name: deployer
      runAfter:
        - build-image
      params:
        - name: IMAGE_TO_DEPLOY
          value: "$(tasks.build-image.results.IMAGE_DIGEST)"

And here are the corresponding Tasks:

apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: buildah
spec:
  params:
    - name: IMAGE
      type: string
  results:
    - name: IMAGE_DIGEST
      type: string
  steps:
    - name: build
      image: quay.io/buildah/stable
      script: |
        #!/bin/bash
        # ... your buildah commands to build an image ...
        # Example:
        BUILD_OUTPUT=$(buildah from alpine)
        CONTAINER=$(echo "$BUILD_OUTPUT" | cut -f1 -d' ')
        buildah config --label BUILD_DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)" $CONTAINER
        buildah run $CONTAINER -- sh -c 'apk add --no-cache git'
        ROOTFS=$(buildah mount $CONTAINER)
        # ... copy application files into $ROOTFS ...
        buildah unmount $CONTAINER
        buildah commit $CONTAINER myimage:latest
        buildah push --digestfile /tmp/image-digest myimage:latest "$(params.IMAGE)"

        # --digestfile records the digest of the pushed image; write it to
        # the result file Tekton provides (printf avoids a trailing newline)
        printf '%s' "$(cat /tmp/image-digest)" > $(results.IMAGE_DIGEST.path)
---
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: deployer
spec:
  params:
    - name: IMAGE_TO_DEPLOY
      type: string
  steps:
    - name: deploy
      image: alpine
      script: |
        #!/bin/bash
        echo "Deploying image with digest: $(params.IMAGE_TO_DEPLOY)"
        # ... your deployment commands using the digest ...

The buildah task uses the results field to declare that it will output an IMAGE_DIGEST. Inside the build step, the script writes the computed digest to $(results.IMAGE_DIGEST.path), a file path that Tekton provides for this specific result.
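Once the step completes, the value also surfaces in the TaskRun’s status. An illustrative excerpt (the digest value here is a placeholder):

```yaml
# Illustrative excerpt of a completed TaskRun's status
status:
  results:
    - name: IMAGE_DIGEST
      type: string
      value: sha256:4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
```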

The deployer task, in turn, declares a parameter IMAGE_TO_DEPLOY whose value is set using $(tasks.build-image.results.IMAGE_DIGEST). This tells Tekton to fetch the IMAGE_DIGEST result from the build-image task (the buildah task in this case) and pass it as a parameter to the deploy-image task. Note that consuming a result this way already creates an ordering dependency between the two tasks, so the explicit runAfter above is technically redundant, though it does no harm.
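Results can also be surfaced on the PipelineRun itself by declaring them at the Pipeline level; a sketch that could be added to the build-and-deploy Pipeline spec (the result name here is illustrative):

```yaml
# Sketch: exposing a task result as a Pipeline-level result
results:
  - name: deployed-image-digest
    description: Digest of the image that was deployed
    value: "$(tasks.build-image.results.IMAGE_DIGEST)"
```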

This mechanism is fundamental for orchestrating multi-step processes where each step’s output feeds the next. And it’s not just about passing strings: a result can also be declared with type: array or type: object, and Tekton passes the structured value between tasks for you.
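A minimal sketch of an array-typed result (the task step and result names are illustrative):

```yaml
# Sketch: a Task declaring an array result (Tekton v1)
results:
  - name: BUILT_IMAGES
    type: array
steps:
  - name: report
    image: alpine
    script: |
      # Array results are written as a JSON array
      printf '%s' '["docker.io/myrepo/app:latest","docker.io/myrepo/app:v1"]' \
        > $(results.BUILT_IMAGES.path)
```

A consuming task can then reference the whole array with $(tasks.&lt;task-name&gt;.results.BUILT_IMAGES[*]) or a single element by index.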

The PipelineRun would then trigger this entire workflow:

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: build-and-deploy-run-1
spec:
  pipelineRef:
    name: build-and-deploy
  params:
    - name: IMAGE
      value: "docker.io/myrepo/myimage:latest" # This parameter is passed to the buildah task

A common misconception is that result passing works through a volume shared between pods; it does not. Each step’s container gets a local path for every declared result, exposed through the $(results.<result-name>.path) variable (a file under /tekton/results). When the step exits, the entrypoint binary Tekton injects into each container reads these files and serializes them into the container’s termination message. The controller picks that message up from the pod status and records the values in the TaskRun’s status. When a later task consumes a result as a parameter, the controller substitutes the recorded value directly into the new TaskRun’s spec, which is why it arrives through the ordinary $(params.<param-name>) variable. Because results travel through the termination message, they are size-limited (Kubernetes caps termination messages at 4 KB per container); larger data belongs in a workspace.
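Concretely, after a step exits you can see the serialized results in the pod’s container status. A simplified, illustrative excerpt (the exact wire format is an internal detail and may change between releases):

```yaml
# Illustrative containerStatus excerpt after a step completes
state:
  terminated:
    exitCode: 0
    message: '[{"key":"IMAGE_DIGEST","value":"sha256:4f53cda1...","type":1}]'
```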

When you define a Pipeline and have tasks depend on each other, whether through runAfter or through result references, Tekton ensures that the results from the preceding task are fully written and recorded before the dependent task begins execution. This is managed by the Tekton controller, which watches TaskRun status and only creates the dependent TaskRun once the producer has succeeded. The PipelineRun object is the primary driver: the controller interprets its spec to create TaskRun objects, managing their lifecycle and interdependencies.
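Result values can also gate execution via when expressions on the consuming task; a sketch (not part of the pipeline above):

```yaml
# Sketch: only run deploy-image when the build produced a non-empty digest
- name: deploy-image
  when:
    - input: "$(tasks.build-image.results.IMAGE_DIGEST)"
      operator: notin
      values: [""]
```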

The most surprising thing about Tekton artifacts of this kind is that they are not truly "artifacts" in the sense of build outputs like Docker images or compiled binaries being stored externally. They are small values that Tekton captures from the task’s pod and records in the TaskRun status, living only within the cluster for the duration of the PipelineRun; the image itself lives in your registry.

The next concept you’ll likely encounter is how to handle more complex data structures than simple strings, or how to pass larger files between tasks, which today typically means workspaces backed by a PersistentVolumeClaim or external object storage like S3. (The older PipelineResources mechanism that once filled this role was deprecated and has been removed from the v1 API.)
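For those larger files, the usual pattern is a workspace that both tasks mount; a minimal sketch (workspace names are illustrative, and each Task would also need to declare the workspace it mounts):

```yaml
# Sketch: sharing large files between tasks via a PVC-backed workspace
spec:
  workspaces:
    - name: shared-data
  tasks:
    - name: build-image
      workspaces:
        - name: output        # workspace name declared by the build Task
          workspace: shared-data
    - name: deploy-image
      runAfter: [build-image]
      workspaces:
        - name: input         # workspace name declared by the deploy Task
          workspace: shared-data
```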

Want structured learning?

Take the full Tekton course →