Terraform’s Helm provider is your secret weapon for managing Kubernetes applications, letting you define your Helm releases as code and treat them with the same rigor as your infrastructure.

Let’s see it in action. Imagine you want to deploy the nginx-ingress chart. Here’s how you’d define it in Terraform:

provider "kubernetes" {
  config_path = "~/.kube/config"
}

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

resource "helm_release" "nginx" {
  name       = "nginx-ingress-controller"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  namespace  = "ingress-nginx"
  version    = "4.4.3" # Pin to a specific version for stability

  # Override default chart values
  set {
    name  = "controller.replicaCount"
    value = "2"
  }

  set {
    name  = "controller.service.annotations.service\.beta\.kubernetes\.io/aws-load-balancer-type"
    value = "nlb"
  }

  # You can also use a values file
  # values = [
  #   file("${path.module}/nginx-values.yaml")
  # ]
}

When you run terraform apply, the provider uses Helm’s Go libraries under the hood (no helm CLI binary required). It fetches the specified chart from the repository, merges any set values or values files you’ve provided, and then performs the equivalent of a helm install or helm upgrade against your Kubernetes cluster. The helm_release resource then tracks the state of this Helm deployment, so subsequent terraform apply runs reconcile the deployed release with your desired configuration.

This brings the declarative, idempotent power of Terraform to your application deployments. Instead of manually running helm install and then trying to track what you installed, you define your Helm releases alongside your Kubernetes infrastructure resources. Need to scale up the replicas? Change the controller.replicaCount in your .tf file and run terraform apply. Need to roll back? Change the version to a previous one and terraform apply again.
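As a sketch, both of those changes are just edits to the existing resource; the replica count and the rollback version below are illustrative:

```hcl
resource "helm_release" "nginx" {
  # ...repository, chart, namespace unchanged...
  version = "4.4.2" # roll back by pointing at the previously deployed chart version

  set {
    name  = "controller.replicaCount"
    value = "3" # scale up from 2 to 3 replicas
  }
}
```

Running terraform plan first shows you exactly what the upgrade or rollback will change before you apply it.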

The core problem the Helm provider solves is bridging the gap between infrastructure-as-code and application-as-code. Without it, managing Helm deployments often involves imperative commands and manual tracking, which are antithetical to the IaC philosophy. The provider makes Helm releases first-class citizens in your Terraform state, allowing for robust dependency management, drift detection, and automated rollbacks.

You can manage complex application stacks by defining multiple helm_release resources, potentially with dependencies between them. For instance, a database Helm chart could be deployed first, and then an application chart that depends on that database could be deployed afterward, with Terraform managing the order.
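A minimal sketch of that ordering, using the Bitnami PostgreSQL chart and a hypothetical local application chart:

```hcl
resource "helm_release" "database" {
  name       = "postgres"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "postgresql"
  namespace  = "data"
  version    = "12.1.2" # illustrative pin
}

resource "helm_release" "app" {
  name      = "my-app"
  chart     = "./charts/my-app" # hypothetical local chart
  namespace = "apps"

  # Explicit ordering: deploy the app only after the database release succeeds
  depends_on = [helm_release.database]
}
```

Terraform’s dependency graph guarantees the database release completes before the application release begins.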

A common pitfall is not pinning the version of your Helm chart. When you omit version, Terraform will fetch the latest available version during terraform apply. This can lead to unexpected upgrades and breaking changes if the chart maintainers release a new major version. Always specify a version to ensure predictable deployments and to control when you adopt new chart versions.

The set block is powerful for overriding individual values. Helm treats dots in a set key as separators between nesting levels, which is why controller.replicaCount maps to controller: { replicaCount: ... } in the chart’s values. Dots that are part of a literal key, such as those in the annotation name service.beta.kubernetes.io/aws-load-balancer-type, must therefore be escaped as \. so Helm doesn’t split them into nesting levels. And because backslash is itself an escape character in Terraform strings, you write \\. in the .tf file to produce the \. that Helm expects.

The values argument allows you to pass an entire YAML file, which is perfect for large or complex configurations. You can use Terraform’s file() function to load these values from a file relative to your Terraform configuration.
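A sketch of the values-file approach, assuming a nginx-values.yaml sitting next to your configuration:

```hcl
resource "helm_release" "nginx" {
  name       = "nginx-ingress-controller"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  namespace  = "ingress-nginx"
  version    = "4.4.3"

  # Load a full YAML values file; later list entries override earlier ones
  values = [
    file("${path.module}/nginx-values.yaml")
  ]
}
```

Because values takes a list of YAML strings, you can also build overrides dynamically with Terraform’s yamlencode() function and layer them on top of the file.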

The helm_release resource also supports create_namespace = true, which automatically creates the target namespace if it doesn’t exist, and atomic = true, which rolls the release back to its previous state if an install or upgrade fails.
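Those safety options slot straight into the resource; the timeout value here is illustrative:

```hcl
resource "helm_release" "nginx" {
  # ...repository, chart, version as before...
  create_namespace = true # create the target namespace if it's missing
  atomic           = true # roll back automatically on a failed install/upgrade
  timeout          = 600  # seconds to wait before treating the release as failed
}
```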

When you’re dealing with Helm charts that have complex dependencies or require very specific configurations, you might find yourself writing lengthy set blocks. For such scenarios, leveraging the values argument with a separate YAML file is a cleaner and more maintainable approach, keeping your Terraform code focused on the overall structure and orchestration.

The next logical step after mastering Helm releases is to manage Kubernetes resources directly using the kubernetes provider alongside your Helm deployments, giving you fine-grained control over every aspect of your application’s presence in the cluster.
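As a taste of that, here is a namespace and a ConfigMap defined directly with the kubernetes provider (resource names and data are illustrative):

```hcl
# Plain Kubernetes resources managed alongside your Helm releases
resource "kubernetes_namespace" "apps" {
  metadata {
    name = "apps"
  }
}

resource "kubernetes_config_map" "app_config" {
  metadata {
    name      = "app-config"
    namespace = kubernetes_namespace.apps.metadata[0].name
  }

  data = {
    LOG_LEVEL = "info"
  }
}
```

Because both providers share Terraform’s state and dependency graph, these raw resources can reference, and be referenced by, your helm_release resources.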

Want structured learning?

Take the full Terraform course →