Terminating TLS at the Kubernetes Ingress is the default and most common way to handle HTTPS traffic, but it’s worth understanding exactly where the encryption stops.

Let’s watch this in action. Imagine we have a simple Nginx Ingress controller and a backend service.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - myapp.example.com
    secretName: my-tls-secret
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-backend-service
            port:
              number: 80
```

Here, my-tls-secret is a Kubernetes Secret containing your TLS certificate and private key. When a client connects to myapp.example.com over HTTPS, the Ingress controller intercepts the request. It uses the certificate from my-tls-secret to perform the TLS handshake with the client. Once the handshake is complete, the Ingress controller decrypts the incoming HTTPS traffic.
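A Secret like this is typically created from an existing certificate and key with kubectl (the file names below are placeholders; substitute your own certificate and key paths):

```shell
# Create a Secret of type kubernetes.io/tls that the Ingress can reference.
# tls.crt must contain the server certificate (and any intermediates);
# tls.key must contain the matching private key.
kubectl create secret tls my-tls-secret \
  --cert=tls.crt \
  --key=tls.key
```

The Secret must live in the same namespace as the Ingress resource that references it.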

The crucial part is what happens next. The decrypted HTTP traffic is then forwarded to your my-backend-service on port 80. This means the connection between the Ingress controller and your backend service is unencrypted HTTP.
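You can observe this directly: from inside the cluster, the backend answers plain HTTP with no certificate involved. A quick way to check is a throwaway curl pod (the image and service name here are just for illustration):

```shell
# Hit the backend service directly from inside the cluster.
# Note the URL is http://, not https:// -- this hop is unencrypted.
kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- \
  curl -s http://my-backend-service:80/
```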

This setup solves a few key problems. Firstly, it centralizes TLS certificate management. You only need to update certificates in one place (the Kubernetes Secret) rather than on every individual pod. Secondly, it offloads the computationally expensive TLS handshake from your application pods. Your backend services can focus on serving application logic without worrying about encryption/decryption overhead.

The mental model here is a traffic cop at the edge of your cluster. It speaks fluent HTTPS to the outside world, but once it’s inside the cluster, it speaks plain HTTP to the internal services. The "Ingress controller" is the cop, the "client" is the person outside, and the "backend service" is the destination. The "TLS secret" is the cop’s badge: the credentials that prove its identity to anyone arriving from outside.

The default configuration for most Ingress controllers, including Nginx, is to not re-encrypt traffic between the Ingress controller and the backend pods. This is often a performance optimization. However, if you need end-to-end encryption, you’d typically configure your backend services to also listen on HTTPS and have the Ingress controller establish a TLS connection to them. This requires extra configuration on both the Ingress and the backend deployment (e.g., setting up a separate TLS secret for the backend pods or using mutual TLS).
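With the NGINX Ingress controller specifically, the first half of that setup is a single annotation telling the controller to speak HTTPS to the backend. This is only a sketch: the backend service must actually terminate TLS on its port for this to work, and by default NGINX does not verify the backend's certificate.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    # Re-encrypt: proxy to the backend over HTTPS instead of plain HTTP.
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - myapp.example.com
    secretName: my-tls-secret
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-backend-service
            port:
              number: 443   # backend must serve TLS here
```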

When you’re debugging why your Ingress isn’t serving traffic over HTTPS, the most common culprits are misconfigurations in the Ingress resource itself or the referenced Secret: a secretName that doesn’t match, a Secret in the wrong namespace or not of type kubernetes.io/tls, or a host in the tls section that doesn’t match the host in the rules.
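A few commands cover most of these checks (using the resource names from the example above):

```shell
# Check the Ingress events and which Secret/hosts it references.
kubectl describe ingress my-ingress

# Confirm the Secret exists and has the expected type (kubernetes.io/tls).
kubectl get secret my-tls-secret -o jsonpath='{.type}'

# Inspect the certificate the controller is actually presenting.
openssl s_client -connect myapp.example.com:443 \
  -servername myapp.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates
```

If the openssl check shows the controller’s self-signed "fake certificate" instead of yours, the controller failed to load the Secret, and the describe output usually says why.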

This pattern is also known as "TLS termination" because the TLS connection ends at the Ingress controller.

The next logical step after successfully terminating TLS at the Ingress is to consider how to secure traffic between your Ingress controller and your backend services.
