Tekton’s PersistentVolumeClaim (PVC) workspaces are the unsung heroes of reproducible and stateful CI/CD pipelines, allowing tasks to share data within a pipeline run and persist it across runs.

Let’s see this in action. Imagine a pipeline that needs to build a Go application, then upload the resulting binary to an artifact repository. The build step needs the source code, and the upload step needs the compiled binary.

Here’s a simplified Pipeline definition:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-upload
spec:
  workspaces:
    - name: shared-data # Bound to a PVC by the PipelineRun that executes this Pipeline
  tasks:
    - name: build-go-app
      taskRef:
        name: build-go
      workspaces:
        - name: data
          workspace: shared-data
      params:
        - name: GO_VERSION
          value: "1.19"

    - name: upload-binary
      taskRef:
        name: upload-artifact
      runAfter:
        - build-go-app
      workspaces:
        - name: data
          workspace: shared-data
      params:
        - name: ARTIFACT_NAME
          value: "my-app-binary"
```

Note that a Pipeline only *declares* its workspaces; the binding to a concrete PersistentVolumeClaim happens in the PipelineRun.

And the associated Task definitions:

Task: build-go

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-go
spec:
  workspaces:
    - name: data # This name matches the workspace name in the Pipeline task
  params:
    - name: GO_VERSION
      description: Go toolchain version, used to select the builder image
      default: "1.19"
  steps:
    - name: build
      # The Debian-based golang image includes both git and bash;
      # golang:*-alpine ships with neither by default.
      image: golang:$(params.GO_VERSION)
      workingDir: $(workspaces.data.path)
      script: |
        #!/usr/bin/env bash
        set -euo pipefail
        echo "Cloning repository..."
        git clone https://github.com/myuser/my-go-app.git .
        echo "Building Go application..."
        go build -o my-app-binary ./cmd/myapp/
        echo "Build complete. Binary saved to $(workspaces.data.path)/my-app-binary"
```

Task: upload-artifact

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: upload-artifact
spec:
  workspaces:
    - name: data
  params:
    - name: ARTIFACT_NAME
      description: Name of the artifact to upload
  steps:
    - name: upload
      image: alpine:latest
      workingDir: $(workspaces.data.path)
      script: |
        #!/bin/sh
        # alpine ships with BusyBox /bin/sh, not bash
        set -eu
        echo "Checking for artifact $(params.ARTIFACT_NAME)..."
        if [ -f "$(workspaces.data.path)/$(params.ARTIFACT_NAME)" ]; then
          echo "Artifact found! Uploading $(params.ARTIFACT_NAME)..."
          # In a real scenario, this would be an actual upload command,
          # e.g., curl -F "file=@$(workspaces.data.path)/$(params.ARTIFACT_NAME)" http://your-artifact-repo/upload
          echo "Simulating upload of $(params.ARTIFACT_NAME)"
        else
          echo "Error: Artifact $(params.ARTIFACT_NAME) not found in workspace!"
          exit 1
        fi
```

Before we can run this Pipeline, we need a PersistentVolumeClaim named my-pipeline-pvc.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pipeline-pvc
spec:
  accessModes:
    - ReadWriteOnce # Or ReadWriteMany if your storage supports it and multiple pods must write concurrently
  resources:
    requests:
      storage: 5Gi # Request 5 gigabytes of storage
```
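With the PVC in place, a PipelineRun binds the Pipeline’s shared-data workspace to it. The names below match the examples above; the PipelineRun name itself is arbitrary:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: build-and-upload-run
spec:
  pipelineRef:
    name: build-and-upload
  workspaces:
    - name: shared-data # Must match the workspace declared in the Pipeline
      persistentVolumeClaim:
        claimName: my-pipeline-pvc
```

Applying this manifest (e.g. with kubectl apply) starts the run; every TaskRun it creates gets the same volume mounted into its pod.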

When you run the Pipeline, Tekton creates a Pod for each TaskRun. The shared-data workspace is declared at the Pipeline level, meaning it’s available to all Tasks within that Pipeline. Tekton mounts the PersistentVolumeClaim (my-pipeline-pvc) into each task’s pod at the path exposed by the $(workspaces.data.path) variable.

In the build-go-app task, the Go source code is cloned directly into this mounted path. When the go build command runs, it outputs my-app-binary into the same directory. Because this directory is backed by the PersistentVolumeClaim, the my-app-binary file persists even after the build-go-app task’s pod terminates.

The upload-binary task, which runs after build-go-app and also has access to the shared-data workspace, can then find and access my-app-binary in its $(workspaces.data.path). This allows the second task to operate on data produced by the first.

The core problem PVC workspaces solve is managing state and data sharing between tasks in a Tekton pipeline. Without them, each task would start with a clean slate, and passing artifacts would require complex inter-task communication or relying on external, potentially less reproducible, mechanisms. PVC workspaces provide a Kubernetes-native way to achieve this persistence and sharing, treating the workspace as a shared filesystem volume.

The accessModes on your PersistentVolumeClaim are critical. ReadWriteOnce means the volume can be mounted as read-write by a single node. If your pipeline tasks are scheduled on different nodes and need to write concurrently, you’ll need a storage solution that supports ReadWriteMany (like NFS or certain cloud provider volumes) and must configure your PVC accordingly. If you only need to read from a volume written by a single task, ReadOnlyMany might also be an option, but ReadWriteOnce is the most common choice for simple artifact passing. In practice, Tekton’s Affinity Assistant helps here: it schedules all of a PipelineRun’s pods that share a PVC-backed workspace onto the same node, which is why ReadWriteOnce usually just works.

A common pitfall is forgetting to create the PersistentVolumeClaim before running the Pipeline. When you bind a workspace via persistentVolumeClaim and a claimName, Tekton will not create that PVC for you; it expects it to exist. If the PVC is missing, the pipeline run will fail with an error indicating that the specified PVC could not be found.
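If you’d rather not manage the PVC’s lifecycle yourself, a PipelineRun can instead use a volumeClaimTemplate, which has Tekton create a fresh PVC for each run and delete it when the PipelineRun is deleted. A sketch, reusing the pipeline name from above:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: build-and-upload-run- # Each run gets a unique name and its own PVC
spec:
  pipelineRef:
    name: build-and-upload
  workspaces:
    - name: shared-data
      volumeClaimTemplate: # Tekton creates (and garbage-collects) this PVC per run
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
```

The trade-off: data no longer persists across pipeline runs, so this suits per-run artifact passing rather than caching.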

If you’re using ReadWriteMany and your Tasks are running on different nodes, you need to be mindful of potential race conditions if multiple tasks are writing to the same files simultaneously. Tekton itself doesn’t provide distributed locking for workspaces; that’s a concern for your application logic or requires specific storage features.
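One low-tech way to keep concurrent writers out of each other’s way is the subPath field on a PipelineTask’s workspace binding, which mounts only a subdirectory of the volume into that task’s pod. A sketch of a Pipeline’s tasks list where two parallel tasks each get a private subdirectory (the lint-go and test-go Tasks here are hypothetical):

```yaml
# Fragment of a Pipeline spec: both tasks share the same PVC-backed
# workspace, but each sees only its own subdirectory at $(workspaces.data.path).
- name: lint
  taskRef:
    name: lint-go # hypothetical Task
  workspaces:
    - name: data
      workspace: shared-data
      subPath: lint-results # Mounts <volume>/lint-results for this task
- name: test
  taskRef:
    name: test-go # hypothetical Task
  workspaces:
    - name: data
      workspace: shared-data
      subPath: test-results # Mounts <volume>/test-results for this task
```

A downstream task that binds the workspace without a subPath sees both subdirectories and can aggregate the results.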

The next logical step after mastering shared data is understanding how to manage secrets and configuration securely within your Tekton pipelines, often using Secrets as a specialized type of workspace.
