UDP can deliver lower latency and higher effective throughput than TCP in many real-world scenarios, not just in theory.
Let’s see it in action. We’ll use iperf3 to benchmark UDP throughput and latency between two machines.
First, on the server:
iperf3 -s -u
Then, on the client, to send 10 seconds of UDP traffic at a target bandwidth of 1 Gbit/s:
iperf3 -c <server_ip> -u -b 1G -t 10
The output will show us the actual bandwidth achieved, the jitter, and the packet loss. Notice how iperf3 reports these metrics specifically for UDP.
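If you want to process those metrics programmatically, iperf3 can emit JSON with the -J flag. Here's a sketch of pulling out the UDP summary fields; the numbers in the sample are hypothetical, stand-ins for what a real run (e.g. via `subprocess.run(["iperf3", "-c", server_ip, "-u", "-b", "1G", "-t", "10", "-J"])`) would produce.

```python
import json

# Hypothetical iperf3 -J output, trimmed to the UDP summary fields
# we care about: achieved bandwidth, jitter, and packet loss.
sample = json.loads("""
{
  "end": {
    "sum": {
      "bits_per_second": 956432100.0,
      "jitter_ms": 0.042,
      "lost_packets": 17,
      "packets": 81250,
      "lost_percent": 0.0209
    }
  }
}
""")

summary = sample["end"]["sum"]
print(f"throughput: {summary['bits_per_second'] / 1e6:.1f} Mbit/s")
print(f"jitter:     {summary['jitter_ms']} ms")
print(f"loss:       {summary['lost_percent']:.2f}% "
      f"({summary['lost_packets']}/{summary['packets']} packets)")
```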
This is useful for applications like real-time video streaming, online gaming, or VoIP. Why? Because these applications can tolerate a small amount of packet loss or reordering in exchange for lower latency. TCP, with its built-in reliability mechanisms (acknowledgments, retransmissions, flow control), adds overhead that can introduce significant delays, which makes it a poor fit for such use cases.
UDP’s core design is simple: "fire and forget." It sends datagrams without establishing a connection, guaranteeing delivery, or ensuring order. This lack of overhead is its superpower for speed.
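You can see "fire and forget" directly with Python's socket module. This sketch runs both ends over the loopback interface; port 9999 is an arbitrary choice for the demo. Loopback delivery is effectively reliable, so all three datagrams arrive here, but on a real network any of them could silently vanish.

```python
import socket

# Receiver: bind to a UDP port on loopback. Port 9999 is arbitrary.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))
receiver.settimeout(1.0)

# Sender: no connect(), no handshake. sendto() hands each datagram to
# the kernel and returns immediately -- there is no acknowledgment to
# wait for, no retransmission, no ordering guarantee.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(3):
    sender.sendto(f"datagram {i}".encode(), ("127.0.0.1", 9999))

received = [receiver.recvfrom(1024)[0].decode() for _ in range(3)]
print(received)
```

Compare this with TCP, where a three-way handshake must complete before the first byte of payload can even be sent.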
The key parameters you’re controlling here are bandwidth (-b) and time (-t).
- Bandwidth (-b): the target rate at which you're asking iperf3 to send UDP packets. If your network can handle it, iperf3 will try to reach that rate; if you exceed the network's capacity, you'll see packet loss.
- Time (-t): how long the test runs. Longer tests give a more stable average, but take longer to complete.
The most surprising metric iperf3 reports for UDP is jitter: the variation in the delay between packets. Since UDP itself does no retransmission or reordering, jitter reflects network conditions directly. When packets arrive at irregular intervals due to queuing delays in routers, the receiving application experiences jitter. High jitter can cause audio glitches or video stuttering, even if the average latency is low.
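The jitter number iperf3 prints is a running estimate, computed with the smoothed formula from RFC 3550 (the RTP spec): each new delay variation nudges the estimate by one sixteenth. A sketch of that estimator, assuming you already have per-packet one-way delays:

```python
# RFC 3550 jitter estimator: a running average of the variation in
# packet transit time, smoothed with a gain of 1/16.
def rtp_jitter(transit_times_ms):
    """transit_times_ms: one-way delay of each packet, in milliseconds."""
    jitter = 0.0
    for prev, cur in zip(transit_times_ms, transit_times_ms[1:]):
        d = abs(cur - prev)           # delay variation between neighbors
        jitter += (d - jitter) / 16.0  # exponential smoothing per RFC 3550
    return jitter

# Perfectly regular arrivals produce zero jitter; irregular ones don't.
print(rtp_jitter([10.0, 10.0, 10.0, 10.0]))
print(rtp_jitter([10.0, 14.0, 9.0, 16.0]) > 0)
```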
What most people don’t realize is that while UDP itself doesn’t guarantee delivery, many applications built on top of UDP implement their own reliability layers. They might use techniques like forward error correction (FEC) to add redundant data that allows the receiver to reconstruct lost packets, or they might implement application-level acknowledgments and retransmissions for critical data. This gives them the best of both worlds: the low-latency foundation of UDP and the necessary reliability for their specific needs.
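To make FEC concrete, here's a minimal sketch of the simplest scheme: one XOR parity packet per group, which lets the receiver rebuild any single lost packet in that group without a retransmission. It assumes all packets in a group are the same length (real schemes pad or use more sophisticated codes like Reed-Solomon).

```python
# Simplest possible FEC: XOR all packets in a group into one parity
# packet. If exactly one packet is lost, XORing the parity with the
# survivors reconstructs it. Assumes equal-length packets.
def make_parity(packets):
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity, lost_index):
    # XOR the parity with every packet that did arrive;
    # what remains is the lost packet.
    missing = bytearray(parity)
    for idx, pkt in enumerate(received):
        if idx == lost_index or pkt is None:
            continue
        for i, b in enumerate(pkt):
            missing[i] ^= b
    return bytes(missing)

group = [b"pkt0", b"pkt1", b"pkt2"]
parity = make_parity(group)
got = [group[0], None, group[2]]   # simulate losing packet 1 in flight
print(recover(got, parity, 1))     # reconstructs b'pkt1' with no resend
```

The trade-off is bandwidth: the parity packet is pure overhead when nothing is lost, which is exactly the latency-for-redundancy bargain real-time protocols choose to make.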
The next step is understanding how to tune your operating system’s network stack to optimize UDP performance for your specific application.