QUIC’s reliance on UDP is less about UDP’s inherent speed and more about UDP’s lack of overhead, allowing QUIC to implement its own superior transport-level features.
Let’s see QUIC in action. Imagine a browser requesting a webpage.
$ curl --http3 https://example.com
When you run this, your curl client doesn’t just send a TCP SYN packet. It initiates a QUIC connection.
The QUIC Handshake (Simplified):
- Client Hello (UDP Packet): Your client sends a UDP packet to the server’s IP address and port (typically 443). This packet carries a ClientHello message, similar to TLS, plus QUIC-specific transport parameters. These Initial packets are protected only with keys any on-path observer can derive, so they are effectively readable on the wire.
- Server Hello (UDP Packet): The server responds with a ServerHello and its certificate, also in a UDP packet. This establishes the initial cryptographic parameters.
- Key Derivation & 0-RTT/1-RTT: Both sides derive encryption keys. If the client has connected before and the server permits it, it can send application data in its first flight of packets (0-RTT). Otherwise, it waits one round trip for the server’s confirmation (1-RTT).
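Underneath, every one of these handshake messages travels as an ordinary UDP datagram. A minimal Python sketch (with a placeholder payload, not a real QUIC Initial packet) shows how little UDP itself provides: you hand the kernel some bytes and an address, and nothing more.

```python
import socket

# Stand-in "server": a plain UDP socket bound to an ephemeral local port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server_addr = server.getsockname()

# Stand-in "client": sends one datagram. The payload is a placeholder; a
# real QUIC client would send an encrypted Initial packet carrying the
# ClientHello. UDP itself guarantees nothing: no ordering, no retransmission.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"pretend-this-is-a-quic-initial-packet", server_addr)

data, peer = server.recvfrom(2048)
print(data.decode())  # pretend-this-is-a-quic-initial-packet

client.close()
server.close()
```

Everything QUIC offers on top of this, ordering, retransmission, encryption, streams, is implemented in user space by the QUIC library.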
Why UDP? The Problem with TCP:
TCP, the venerable transport protocol, has a built-in set of features: reliable delivery, ordered delivery, flow control, and congestion control. These are essential, but TCP implements them at the kernel level, making them difficult to update or customize without OS patches.
- Head-of-Line Blocking (HOL): This is TCP’s Achilles’ heel for modern web traffic. If a single packet in a TCP connection is lost, all subsequent packets on that same connection, even if they arrived successfully, are held up until the lost packet is retransmitted. For HTTP/2, which multiplexes multiple requests over a single TCP connection, a packet loss for one request can stall all other requests on that connection. This is a major performance bottleneck.
- Slow Congestion Control Evolution: TCP’s congestion control algorithms (like Cubic, Reno) are designed for general internet conditions and are slow to adapt to new environments or implement newer, more efficient algorithms. Changing these often requires OS-level updates.
- Handshake Latency: The TCP handshake (SYN, SYN-ACK, ACK) takes at least one round-trip time (RTT). Then, the TLS handshake adds one or two more RTTs. This means establishing a secure connection can take 2-3 RTTs before any actual data is sent.
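The round-trip arithmetic above is easy to make concrete. A back-of-the-envelope comparison (illustrative numbers, assuming a 50 ms RTT) of how long each setup takes before application data can flow:

```python
RTT_MS = 50  # assumed round-trip time, for illustration only

# TCP handshake costs 1 RTT; TLS 1.2 adds 2 more RTTs, TLS 1.3 adds 1.
tcp_tls12 = (1 + 2) * RTT_MS  # 150 ms before the first request is sent
tcp_tls13 = (1 + 1) * RTT_MS  # 100 ms

# QUIC merges transport and crypto handshakes: 1 RTT, or 0 on resumption.
quic_1rtt = 1 * RTT_MS        # 50 ms
quic_0rtt = 0 * RTT_MS        # 0 ms: data rides in the very first flight

print(tcp_tls12, tcp_tls13, quic_1rtt, quic_0rtt)  # 150 100 50 0
```

On high-latency links (mobile, satellite), that difference of one to two round trips per connection is directly visible in page-load time.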
QUIC’s Solution: Building on UDP
QUIC is essentially a new transport protocol implemented in user space over UDP. UDP provides a simple, low-level datagram service: send data from A to B, with no guarantees. QUIC then reimplements all the necessary transport features on top of this UDP foundation, but in a way that overcomes TCP’s limitations.
- No Head-of-Line Blocking (at the Transport Layer): QUIC uses streams. Each stream is an independent, ordered sequence of bytes. If a packet carrying data for one stream is lost, only that specific stream is affected. Other streams on the same QUIC connection can continue to make progress. This is a fundamental improvement for multiplexed protocols like HTTP/3.
- Faster Connection Establishment: QUIC combines the transport and TLS handshakes. A successful QUIC handshake can be completed in 1-RTT (or even 0-RTT for returning clients), significantly reducing latency.
- Pluggable Congestion Control: Because QUIC is in user space, developers can deploy and iterate on new congestion control algorithms (like BBR) much faster than is possible with TCP, which is baked into the OS kernel.
- Connection Migration: QUIC connections are identified by a Connection ID, not the IP address and port tuple like TCP. This allows a client’s IP address or port to change (e.g., switching from Wi-Fi to cellular) without breaking the connection. The server continues to recognize the connection via its Connection ID.
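The head-of-line difference is simple to model. The toy simulation below (a sketch, not a real QUIC implementation) delivers packets for two streams with one packet lost; per-stream reassembly lets stream 2 complete while only stream 1 waits for the retransmission:

```python
# Toy model: each packet is (stream_id, sequence_no, payload).
# Packet (1, 1) was "lost". Under QUIC-style per-stream ordering this
# stalls only stream 1; TCP's single byte stream would stall everything.
arrived = [(1, 0, b"a"), (2, 0, b"x"), (1, 2, b"c"), (2, 1, b"y")]

streams = {}  # stream_id -> {sequence_no: payload}
for sid, seq, payload in arrived:
    streams.setdefault(sid, {})[seq] = payload

def deliverable(buf):
    """Return the contiguous prefix that can be handed to the application."""
    out, seq = b"", 0
    while seq in buf:
        out += buf[seq]
        seq += 1
    return out

print(deliverable(streams[1]))  # b'a'  -- stuck waiting for lost seq 1
print(deliverable(streams[2]))  # b'xy' -- unaffected by stream 1's loss
```

With TCP, the equivalent reassembly buffer is shared by all multiplexed requests, so the single gap at sequence 1 would hold back stream 2's data as well.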
The Internal Levers:
When you configure a QUIC-enabled server or client, you’re not just tweaking UDP settings (there are very few). You’re configuring QUIC’s internal state machine, its cryptographic parameters, its stream management, and its congestion control algorithm.
For instance, when setting up nginx with QUIC, you’d enable HTTP/3 and specify the UDP port.
server {
    # TCP listeners for HTTP/1.1 and HTTP/2 (quic cannot share a listen
    # directive with ssl/http2, since QUIC runs over UDP, not TCP)
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;

    # UDP listeners for HTTP/3 over QUIC
    listen 443 quic reuseport;
    listen [::]:443 quic reuseport;

    server_name example.com;

    # Advertise HTTP/3 so clients know they can upgrade
    add_header Alt-Svc 'h3=":443"; ma=86400';

    # ... other SSL/TLS settings ...
}
The reuseport option is crucial for UDP servers handling high traffic, allowing multiple processes or threads to listen on the same port.
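On Linux (3.9+) and recent BSDs/macOS, the socket option behind that directive is SO_REUSEPORT. A minimal Python sketch shows two sockets sharing one UDP port, which is how multiple worker processes each get their own kernel receive queue:

```python
import socket

def reuseport_udp_socket(port=0):
    # SO_REUSEPORT lets several sockets bind the same address and port;
    # the kernel then load-balances incoming datagrams across them.
    # (Not available on Windows; Linux 3.9+ and recent BSDs/macOS only.)
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    return s

worker_a = reuseport_udp_socket()              # kernel picks a free port
port = worker_a.getsockname()[1]
worker_b = reuseport_udp_socket(port)          # second bind on the SAME port

same_port = worker_a.getsockname()[1] == worker_b.getsockname()[1]
print(same_port)  # True

worker_a.close()
worker_b.close()
```

Without SO_REUSEPORT, the second bind would fail with "address already in use", forcing all workers to contend on a single shared socket.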
The real magic of QUIC isn’t that it uses UDP; it’s that UDP’s lack of built-in features gave us a clean slate to design a modern transport protocol. It’s a testament to how a protocol can be designed for the internet today, rather than the internet of the 1980s.
The next step in understanding QUIC’s performance benefits involves diving into the specifics of its stream multiplexing and how it handles packet loss at the application layer.