Tekton’s security and scalability aren’t just about locking down your pipelines; they’re fundamentally about distributed trust and resource elasticity in a world of ephemeral compute.

Let’s see Tekton in action, not as a diagram, but as actual workload execution. Imagine a developer pushing a change to a Git repository.

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: my-app-pr-build-deploy-123
spec:
  pipelineRef:
    name: build-and-deploy-app
  params:
  - name: IMAGE_NAME
    value: docker.io/myuser/my-app
  # the image tag is derived inside the Pipeline from the git-clone task's
  # commit result; task results cannot be referenced from PipelineRun params
  - name: DEPLOY_NAMESPACE
    value: production
  workspaces:
  - name: source-code
    persistentVolumeClaim:
      claimName: tekton-pvc-source
  - name: dockerconfig
    secret:
      secretName: my-docker-creds

This PipelineRun kicks off a Pipeline named build-and-deploy-app. It injects parameters such as the target image name and deployment namespace, and, crucially, it mounts a Workspace named dockerconfig backed by a Kubernetes Secret containing Docker registry credentials. This is the first hint at how Tekton handles sensitive information for secure, authenticated access to external services. The source-code workspace, backed by a PersistentVolumeClaim, ensures that the cloned source code is available across multiple Tasks within the pipeline.

The build-and-deploy-app pipeline might look like this:

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy-app
spec:
  params:
  - name: IMAGE_NAME
  - name: DEPLOY_NAMESPACE
  workspaces:
  - name: source-code
  - name: dockerconfig
  tasks:
  - name: git-clone
    taskRef:
      name: git-clone
    params:
    - name: url
      value: https://github.com/myorg/my-app.git
    workspaces:
    - name: output
      workspace: source-code
    # the git-clone Task itself declares a "commit" result; results are
    # defined in the Task, not in the Pipeline that references it
  - name: build-image
    runAfter:
    - git-clone
    taskRef:
      name: buildah
    params:
    - name: IMAGE
      value: $(params.IMAGE_NAME):$(tasks.git-clone.results.commit)
    workspaces:
    - name: source
      workspace: source-code
    - name: dockerconfig
      workspace: dockerconfig
  - name: deploy-app
    runAfter:
    - build-image
    taskRef:
      name: kubectl-apply
    params:
    - name: APPLY_YAML
      value: |
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: my-app
          namespace: $(params.DEPLOY_NAMESPACE)
        spec:
          template:
            spec:
              containers:
              - name: my-app
                image: $(params.IMAGE_NAME):$(tasks.git-clone.results.commit)
    # cluster credentials come from the TaskRun's service account,
    # so no kubeconfig workspace or KUBECONFIG parameter is needed

Here, the git-clone task fetches code, and its commit result is used to tag the image built by the buildah task. The buildah task, in turn, uses the dockerconfig workspace to authenticate with the Docker registry. Finally, a kubectl-apply task deploys the application, referencing the newly built image. The DEPLOY_NAMESPACE parameter is dynamically set, showcasing how pipelines can adapt to different environments.

Tekton’s security posture is built on Kubernetes primitives. When you configure Workspaces to use Secrets, you’re leveraging Kubernetes’ robust RBAC and encryption-at-rest for sensitive data like API tokens, passwords, and SSH keys. These secrets are mounted as volumes into the Task pods, making them available to the build process without being exposed in logs or environment variables. For example, the dockerconfig workspace uses a secret of type kubernetes.io/dockerconfigjson, which contains the necessary credentials for Docker Hub or other registries.
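As a sketch, that backing secret can be declared as a manifest like the following; the credential payload shown is a placeholder, and the name my-docker-creds matches the PipelineRun above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-docker-creds
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded ~/.docker/config.json; this value is a placeholder
  .dockerconfigjson: eyJhdXRocyI6eyJkb2NrZXIuaW8iOnsiLi4uIjoiLi4uIn19fQ==
```

Equivalently, kubectl create secret docker-registry my-docker-creds --docker-server=docker.io --docker-username=&lt;user&gt; --docker-password=&lt;token&gt; generates the same secret type without hand-encoding the JSON.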

Scaling Tekton involves understanding its distributed nature. Each TaskRun executes as its own Kubernetes Pod, so Tekton inherently scales with your Kubernetes cluster’s capacity. The key is optimizing your Tasks and Pipelines to be efficient and stateless where possible. Resource requests and limits on your Task pods become critical. Setting appropriate requests.cpu, requests.memory, limits.cpu, and limits.memory prevents runaway TaskRuns from starving other workloads and ensures predictable execution times. For instance, a build-image task might need requests.cpu: "1000m" and limits.cpu: "2000m", while a simple kubectl-apply might only need requests.cpu: "100m" and limits.cpu: "200m".
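Those figures map directly onto step-level resources in a Task definition, since each step embeds a standard Kubernetes container spec. A minimal sketch (the task name and builder image are illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-image
spec:
  params:
  - name: IMAGE
  steps:
  - name: build
    image: quay.io/buildah/stable   # illustrative builder image
    resources:                      # standard Kubernetes container resources
      requests:
        cpu: "1000m"
        memory: "2Gi"
      limits:
        cpu: "2000m"
        memory: "4Gi"
    script: |
      buildah bud -t $(params.IMAGE) .
```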

Consider the ServiceAccount associated with your TaskRuns. By default, Tekton tasks run with the default service account in the namespace. To grant your pipelines access to cluster resources (like deploying to other namespaces or interacting with Kubernetes APIs), you must create a dedicated ServiceAccount with appropriate RBAC roles and bind it to your PipelineRun or TaskRun.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tekton-deployer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tekton-deployer-role
rules:
- apiGroups: ["", "apps", "networking.k8s.io"] # ingresses live in networking.k8s.io; the old extensions group was removed
  resources: ["pods", "services", "deployments", "ingresses"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tekton-deployer-binding
subjects:
- kind: ServiceAccount
  name: tekton-deployer
  namespace: default # Namespace where your PipelineRuns run
roleRef:
  kind: Role
  name: tekton-deployer-role
  apiGroup: rbac.authorization.k8s.io

Then, in your PipelineRun:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
# ... other fields
spec:
  serviceAccountName: tekton-deployer
# ... other fields

This explicitly grants the tekton-deployer service account the ability to manage deployments and other resources.

A common pitfall for scaling is not managing the lifecycle of PipelineRuns and TaskRuns. Without proper cleanup, these resources can accumulate, consuming API server resources and making it difficult to track current and past executions. Implementing a TTL (Time-To-Live) controller or a custom cleanup job that deletes completed PipelineRuns and TaskRuns after a defined period (e.g., 7 days) is crucial for long-term operational health.
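One lightweight sketch of such a cleanup job, assuming you trust a container image that ships the tkn CLI and a ServiceAccount permitted to delete PipelineRuns and TaskRuns (the image name, schedule, and retention count here are all illustrative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: tekton-run-pruner
spec:
  schedule: "0 3 * * *"                     # daily at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: tekton-pruner # needs RBAC to delete pipelineruns/taskruns
          restartPolicy: Never
          containers:
          - name: prune
            image: my-registry/tkn:latest   # illustrative image containing the tkn CLI
            command:
            - /bin/sh
            - -c
            - tkn pipelinerun delete --all --keep=50 -f &&
              tkn taskrun delete --all --keep=50 -f
```

If you run the Tekton Operator, it also ships a configurable pruner that achieves the same result declaratively.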

The most overlooked aspect of securing Tekton at scale is the granular control over what each Task can do, and how it does it. Instead of a single Task that does everything, break down complex operations into smaller, composable Tasks. Each Task should have a tightly scoped ServiceAccount and minimal Workspaces necessary for its operation. This principle of least privilege, applied at the Task level, dramatically reduces the blast radius if a specific Task is compromised or misbehaves. For example, a Task responsible for building an artifact should not have the same permissions as a Task responsible for deploying to production.
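In the v1beta1 API, the PipelineRun’s taskRunSpecs field lets you assign a different ServiceAccount to individual pipeline tasks, so only the deploy step carries deploy rights. A sketch (the tekton-builder account is an illustrative low-privilege default):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: my-app-pr-build-deploy-123
spec:
  pipelineRef:
    name: build-and-deploy-app
  serviceAccountName: tekton-builder        # low-privilege default for all tasks
  taskRunSpecs:
  - pipelineTaskName: deploy-app            # only this task gets deploy permissions
    taskServiceAccountName: tekton-deployer
```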

Finally, as you scale, consider how you’ll manage secrets for multiple environments (dev, staging, prod). Using separate Kubernetes Namespaces for each environment, each with its own set of secrets and dedicated ServiceAccounts for Tekton, provides strong isolation and prevents accidental cross-environment deployments or credential leakage.
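A sketch of that isolation, with illustrative names: each environment gets its own namespace holding its own secrets and its own copy of the deployer ServiceAccount, and PipelineRuns execute entirely inside it:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ci-staging
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: my-app-staging-42
  namespace: ci-staging                  # runs, secrets, and RBAC stay in this namespace
spec:
  serviceAccountName: tekton-deployer    # the ci-staging copy, bound only to staging resources
  pipelineRef:
    name: build-and-deploy-app
  params:
  - name: DEPLOY_NAMESPACE
    value: staging
  # remaining params and workspaces omitted for brevity
```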

The next evolution in your Tekton journey will likely involve exploring advanced GitOps integration patterns, where Tekton pipelines trigger declarative deployments based on Git state.

Want structured learning?

Take the full Tekton course →