UDP is often treated as the unreliable, fire-and-forget protocol, but in containerized environments, its behavior becomes a fascinating dance of network policies, kernel configurations, and even the underlying hardware.

Let’s see UDP in action within a Kubernetes cluster. Imagine two pods, sender-pod and receiver-pod, running on separate nodes. We want sender-pod to send UDP packets to receiver-pod on port 5353.

First, ensure your receiver-pod is listening on UDP port 5353. A simple Python script can do this:

import socket

UDP_IP = "0.0.0.0"  # listen on all interfaces inside the pod
UDP_PORT = 5353

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((UDP_IP, UDP_PORT))

print(f"UDP receiver listening on {UDP_IP}:{UDP_PORT}")

while True:
    data, addr = sock.recvfrom(1024)  # blocks until a datagram arrives
    print(f"Received message: {data.decode()} from {addr}")

Now, deploy this as a receiver-pod.
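One way to deploy it, assuming the script above is saved as receiver.py and baked into an image you can pull (the image name below is a placeholder), is a minimal Pod manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: receiver-pod
  labels:
    app: receiver                    # handy later for Services and NetworkPolicies
spec:
  containers:
    - name: receiver
      image: registry.example.com/udp-receiver:latest  # placeholder image
      command: ["python", "/app/receiver.py"]
      ports:
        - containerPort: 5353
          protocol: UDP              # containerPort protocol defaults to TCP
```

Declaring the containerPort is informational for pod IP traffic, but stating protocol: UDP matters as soon as a Service or policy references the port.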

In sender-pod, we can use netcat (with -w1 so it exits after a second instead of waiting forever for a reply that UDP will never send) to fire off a packet:

echo "Hello from sender!" | nc -u -w1 <receiver-pod-ip> 5353

You’ll see the "Hello from sender!" message appear in the receiver-pod’s logs, along with the sender’s address. When you target the pod IP directly, as here, that address is the sender pod’s own IP: the Kubernetes network model requires pod-to-pod traffic to flow without NAT. A node IP or rewritten source only appears when traffic passes through certain Service paths, as we’ll see below.

The true magic, and often the source of confusion, lies in how the container runtime and orchestrator manage this seemingly simple UDP flow. Kubernetes, by default, routes traffic between pods using its own network overlay (like Calico, Flannel, or Cilium). When a UDP packet leaves sender-pod, it first hits the container runtime’s network stack, then the node’s kernel, potentially traverses the overlay network, arrives at the destination node’s kernel, and finally reaches the receiver-pod. Each hop introduces potential points of inspection, modification, or even blocking.

The core problem UDP solves in containers is enabling efficient, low-latency communication for applications that don’t require guaranteed delivery or ordered packets. Think DNS lookups (port 53), NTP synchronization, or real-time communication protocols like WebRTC. TCP’s overhead for connection establishment, acknowledgments, and retransmissions would be detrimental to these use cases.
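To make the overhead argument concrete, here is a sketch of why DNS fits UDP so well: the entire question fits in a single datagram, with no handshake before it and no acknowledgment after it. The snippet only constructs the packet; the sendto call is left commented because the resolver address is cluster-specific.

```python
import struct

def build_dns_query(hostname: str, query_id: int = 0x1234) -> bytes:
    """Build a minimal DNS query packet (A record, recursion desired)."""
    # Header: ID, flags (RD bit set), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    # QTYPE=1 (A), QCLASS=1 (IN)
    return header + qname + struct.pack(">HH", 1, 1)

query = build_dns_query("example.com")
print(len(query))  # 12-byte header + 13-byte name + 4-byte type/class = 29
# One datagram each way would complete the lookup:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(query, ("<cluster-dns-ip>", 53))
```

A TCP version of the same lookup would spend three packets on the handshake before the first byte of the question even left the pod.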

When you configure a Kubernetes Service of type LoadBalancer or NodePort to expose a UDP port, Kubernetes programs the cloud provider’s load balancer or the node’s iptables/ipvs rules to direct incoming UDP traffic to the correct pods. For internal cluster communication, kube-proxy (or a replacement datapath such as Cilium’s eBPF-based one) maintains the iptables or ipvs rules that translate Service IPs to Pod IPs.
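A Service exposing our receiver might look like the following sketch (the name and selector are assumptions matching the earlier receiver example). Note that protocol: UDP must be stated explicitly, because Service ports default to TCP:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: udp-receiver
spec:
  type: ClusterIP            # or NodePort/LoadBalancer for external traffic
  selector:
    app: receiver            # assumed label on receiver-pod
  ports:
    - name: dns-like
      protocol: UDP          # defaults to TCP if omitted
      port: 5353
      targetPort: 5353
```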

The surprising robustness of UDP in containers comes from the fact that, despite the layers of abstraction, the underlying Linux kernel’s networking stack is still heavily involved. The container network interface (CNI) plugin essentially hooks into this stack, creating virtual network interfaces and configuring routing. When a UDP packet is sent, it’s processed by the kernel’s UDP socket layer, which then hands it off to the IP layer for routing. The CNI plugin’s rules dictate how that routing happens across nodes.
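You can watch the kernel make one of those routing decisions from inside a pod. "Connecting" a UDP socket transmits nothing; it merely asks the kernel to select a route and a source address for the given destination, which you can then read back:

```python
import socket

def egress_source_ip(dest: str, port: int = 9) -> str:
    """Ask the kernel which local IP it would use to reach dest.

    connect() on a UDP socket sends no packets; it just binds the
    socket to the route the kernel selects for that destination.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.connect((dest, port))
        return sock.getsockname()[0]

# Loopback always routes to itself; inside a pod, a cluster-external
# destination would show the pod's eth0 address instead.
print(egress_source_ip("127.0.0.1"))  # → 127.0.0.1
```

Run against a destination on another node, this shows exactly the source address the CNI plugin’s routing rules hand to outgoing datagrams.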

A common point of failure, or at least a point of unexpected behavior, is when UDP packets are dropped due to network policies or firewall rules. Kubernetes Network Policies, for instance, can explicitly deny UDP traffic between pods, even if the application is configured to send it. Similarly, Security Groups on cloud provider VMs or firewalls on-premises can block UDP ports.
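If a default-deny policy is in place, an explicit allow rule for the UDP port is required. A sketch for our example (the pod labels are assumptions) might be:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-udp-5353
spec:
  podSelector:
    matchLabels:
      app: receiver          # assumed label on receiver-pod
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: sender    # assumed label on sender-pod
      ports:
        - protocol: UDP
          port: 5353
```

Remember that NetworkPolicies are enforced by the CNI plugin; on a CNI without policy support, the object is accepted by the API server but silently has no effect.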

The most counterintuitive aspect for many is how UDP source IP addresses are handled, because it depends on the path the packet takes. Direct pod-to-pod traffic keeps the sending pod’s IP: the Kubernetes network model requires pods to communicate without NAT. Traffic addressed to a ClusterIP Service has its destination rewritten (DNAT) by kube-proxy, but the source pod IP is normally preserved. Traffic entering through a NodePort or LoadBalancer, however, is typically SNATed to a node IP so that reply packets route back through the same node; setting externalTrafficPolicy: Local on the Service disables that SNAT and preserves the client IP, at the cost of only delivering traffic to nodes that host a backing pod.

The next hurdle you’ll likely encounter is understanding how UDP fragmentation is handled across different network segments within your containerized environment.
