QUIC might seem like a strange choice for building a reliable transport protocol because it runs over UDP, a protocol that offers no reliability guarantees whatsoever.

Let’s see QUIC in action with a simple HTTP/3 request. Imagine a browser requesting a webpage from a server.

Client (Browser) -> Initial (TLS ClientHello, UDP packet) -> Server
Server -> Initial + Handshake (TLS ServerHello, certificate, UDP packets) -> Client
Client -> Handshake (TLS Finished, UDP packet) -> Server

Client -> GET /index.html (HTTP/3 frame, UDP packet) -> Server
Server -> HTTP/3 Response (various frames, UDP packets) -> Client

Notice how it’s all just UDP packets. The "magic" is that QUIC implements its own reliability, congestion control, and multiplexing within these UDP packets.
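To make "reliability inside UDP packets" concrete, here is a toy sketch (emphatically not real QUIC, whose wire format is far richer): each datagram carries a packet number, and the sender retransmits until that number is acknowledged. The 8-byte-header format and function names are invented for illustration.

```python
import socket
import struct
import threading

# Toy sketch (not real QUIC): each datagram carries a packet number,
# and the sender retransmits until that number is acknowledged.
# Hypothetical wire format: 8-byte big-endian packet number + payload.

def receiver(sock: socket.socket, out: list) -> None:
    data, addr = sock.recvfrom(2048)
    (pn,) = struct.unpack("!Q", data[:8])
    out.append(data[8:])                      # deliver the payload
    sock.sendto(struct.pack("!Q", pn), addr)  # ACK echoes the packet number

def send_reliably(sock, dest, pn, payload, timeout=0.2, retries=5):
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(struct.pack("!Q", pn) + payload, dest)
        try:
            ack, _ = sock.recvfrom(2048)
            if struct.unpack("!Q", ack)[0] == pn:
                return True                   # acknowledged
        except socket.timeout:
            continue                          # lost? retransmit
    return False

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
received = []
t = threading.Thread(target=receiver, args=(rx, received))
t.start()

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ok = send_reliably(tx, rx.getsockname(), pn=0, payload=b"GET /index.html")
t.join()
print(ok, received)  # True [b'GET /index.html']
```

Real QUIC layers acknowledgments, loss detection, and encryption on top of exactly this kind of numbered-datagram foundation.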

The core problem QUIC solves is TCP’s head-of-line (HOL) blocking. In TCP, if a single packet is lost, the kernel withholds all subsequent data from the application until that packet is retransmitted, even if later packets have already arrived and are sitting in the receive buffer. This hits HTTP/2 especially hard: it multiplexes many requests (images, scripts, CSS) over one TCP connection, so a single lost packet stalls every stream on that connection. QUIC, by design, doesn’t have this problem. It multiplexes streams independently at the transport layer: if a packet carrying data for one stream is lost, only that stream waits for the retransmission; all other streams continue to make progress.
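The per-stream independence can be sketched with a few lines of Python. Each stream keeps its own reassembly buffer keyed by byte offset, so a gap in one stream never blocks another. The `StreamBuffer` class and offsets here are illustrative, not QUIC’s actual data structures.

```python
# Sketch: per-stream reassembly buffers, as in QUIC's stream multiplexing.
# Losing a packet for stream 1 stalls only stream 1; stream 3's data is
# still delivered in order. Class and field names are illustrative.

class StreamBuffer:
    def __init__(self):
        self.pending = {}       # offset -> bytes, waiting for gaps to fill
        self.next_offset = 0    # next byte we can hand to the application
        self.delivered = b""

    def on_frame(self, offset: int, data: bytes) -> None:
        self.pending[offset] = data
        # Deliver any contiguous data starting at next_offset.
        while self.next_offset in self.pending:
            chunk = self.pending.pop(self.next_offset)
            self.delivered += chunk
            self.next_offset += len(chunk)

streams = {1: StreamBuffer(), 3: StreamBuffer()}

# The packet carrying stream 1's offset 0 is "lost"; later packets arrive.
streams[1].on_frame(5, b"world")   # out of order: gap at offset 0
streams[3].on_frame(0, b"style")   # a different, unaffected stream

print(streams[1].delivered)  # b'' -- stream 1 waits for its gap
print(streams[3].delivered)  # b'style' -- stream 3 makes progress

streams[1].on_frame(0, b"hello")   # retransmitted data fills the gap
print(streams[1].delivered)  # b'helloworld'
```

With TCP, a single shared buffer would have held back `b"style"` too; here only the stream with the gap waits.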

Here’s how it works internally:

  1. Connection Establishment: QUIC uses a 1-RTT handshake (0-RTT for resumed connections), combining the transport handshake with the TLS 1.3 handshake. This is significantly faster than TCP’s 3-way handshake followed by a separate TLS handshake.
  2. Packet Structure: QUIC packets contain a packet number, and within them, individual "frames." These frames can be for different streams (e.g., STREAM frames carrying HTTP/3 data) or control messages (e.g., ACK frames, PING frames).
  3. Stream Multiplexing: Each logical stream (like an HTTP request/response pair) is an independently ordered byte stream, tracked by per-stream offsets in STREAM frames; frames from different streams can even share a single packet. If a packet carrying data for stream 1 is lost, the receiver can still process all the stream 2 data that has arrived. The sender retransmits only the lost stream 1 data, in a fresh packet with a new packet number (QUIC never reuses packet numbers).
  4. Reliability & Congestion Control: QUIC implements its own mechanisms for packet acknowledgment, retransmission, and congestion control (similar to TCP’s algorithms like Cubic or BBR) directly within the UDP payload. This allows for faster iteration and improvement of these critical network functions, independent of operating system kernel updates.
  5. Encryption: QUIC mandates TLS 1.3 encryption for all connections. This means even the handshake is encrypted, improving privacy and security.
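Points 2 and 3 above can be sketched as a packet-of-frames data structure. The dataclasses below are a simplified model, not QUIC’s real wire encoding (RFC 9000 uses variable-length integers and typed frame headers); the point is that one packet can carry STREAM frames for several streams alongside control frames.

```python
from dataclasses import dataclass, field

# Simplified model of a QUIC packet: a packet number plus a list of
# frames. Field layout is illustrative, not the actual wire format.

@dataclass
class StreamFrame:
    stream_id: int
    offset: int       # byte offset within this stream
    data: bytes

@dataclass
class AckFrame:
    largest_acked: int  # highest packet number being acknowledged

@dataclass
class Packet:
    packet_number: int
    frames: list = field(default_factory=list)

pkt = Packet(
    packet_number=7,
    frames=[
        StreamFrame(stream_id=0, offset=0, data=b"GET /index.html"),
        StreamFrame(stream_id=4, offset=0, data=b"GET /app.js"),
        AckFrame(largest_acked=6),  # acknowledge the peer's packets
    ],
)

stream_ids = [f.stream_id for f in pkt.frames if isinstance(f, StreamFrame)]
print(stream_ids)  # [0, 4] -- two streams share one packet
```

Loss is detected per packet number, but retransmission re-sends the affected frames in a new packet, which is what keeps the streams independent.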

The levers you control are primarily at the application layer (HTTP/3) and through server configuration. For example, you’d configure your web server (like Nginx or Caddy) to support HTTP/3, which implies QUIC.

# Example Caddyfile for HTTP/3
yourdomain.com {
    # ... other directives
    reverse_proxy /api/* localhost:8080
    reverse_proxy /assets/* localhost:8081
    # Caddy automatically enables HTTP/3 if the client supports it
}

The most surprising aspect is how QUIC’s reliability and congestion control, which are fundamental to TCP’s operation and have been honed over decades within operating system kernels, are now being implemented and iterated upon in user-space libraries. This shift allows for much more rapid deployment of new networking features and performance improvements, as it bypasses the slow release cycles of OS kernel updates. For instance, a new congestion control algorithm can be deployed to all QUIC clients and servers that update their libraries, without waiting for operating system patches.
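One way to see why swapping algorithms is so easy in user space: the QUIC library just calls a small set of hooks on a congestion-controller object. The interface names below are hypothetical, and the algorithm is classic AIMD (additive increase, multiplicative decrease), deliberately simpler than Cubic or BBR.

```python
# Sketch of a pluggable congestion controller. Hook names (on_ack,
# on_loss) are hypothetical; the algorithm is simplified AIMD,
# not Cubic or BBR.

class AIMDController:
    def __init__(self, mss: int = 1200):
        self.mss = mss                # max segment size, in bytes
        self.cwnd = 10 * mss          # initial congestion window

    def on_ack(self, acked_bytes: int) -> None:
        self.cwnd += self.mss         # additive increase (simplified)

    def on_loss(self) -> None:
        # Multiplicative decrease: halve the window on loss.
        self.cwnd = max(self.mss, self.cwnd // 2)

cc = AIMDController()
for _ in range(5):
    cc.on_ack(1200)
print(cc.cwnd)   # 18000 = (10 + 5) * 1200
cc.on_loss()
print(cc.cwnd)   # 9000
```

Deploying a different algorithm means shipping a new class like this in a library update, rather than waiting for a kernel release.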

The next concept you’ll likely encounter is how HTTP/3, which runs over QUIC, fundamentally changes request prioritization and resource loading on the web.
