TCP’s congestion control algorithms are the unsung heroes of the internet, and the fact that your browser can download a large file without the connection grinding to a halt is a testament to their sophistication.

Let’s see CUBIC in action. Imagine a server sending a large file to a client over a network with some latency and packet loss.

```shell
# On the server: just listen; the congestion control
# algorithm is selected by the sending side
iperf3 -s

# On the client: request CUBIC for this test
# (-C/--congestion is Linux-only)
iperf3 -c <server_ip> -C cubic -t 30
```

You’ll observe the bandwidth fluctuating but generally trending upward as CUBIC probes for available capacity. It starts small, grows the window exponentially during slow start, then probes more carefully in congestion avoidance. When it detects congestion (packet loss), it cuts its window sharply, then ramps back up. This "window-widening" and "window-cutting" dance is the core of loss-based congestion control.

CUBIC, the default in Linux, is designed for high-speed, long-latency networks, that is, paths with a large bandwidth-delay product, such as intercontinental links. It uses a cubic function of the time since the last loss event to determine how quickly to increase its sending window. This means it can ramp up to fill available bandwidth much faster than older algorithms like Reno. The key idea is to exploit the available bandwidth without causing excessive congestion.
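The cubic function is simple enough to sketch directly. Following RFC 8312, the window `t` seconds after a loss event is `W(t) = C(t − K)³ + W_max`, where `W_max` is the window just before the loss and `K` is the time needed to grow back to it. This is an illustrative calculation, not kernel code:

```python
# Sketch of CUBIC's window-growth curve (RFC 8312); illustrative only.
# Constants follow the RFC: C = 0.4, beta_cubic = 0.7.
C = 0.4
BETA = 0.7

def cubic_window(t, w_max):
    """Congestion window (in segments) t seconds after a loss event."""
    # K: time to climb back to w_max after the multiplicative decrease
    k = ((w_max * (1 - BETA)) / C) ** (1 / 3)
    return C * (t - k) ** 3 + w_max

w_max = 100  # segments in flight just before the loss
for t in range(6):
    print(f"t={t}s  cwnd={cubic_window(t, w_max):6.1f}")
```

Note the shape: growth is steep just after the cut, flattens as the window approaches `W_max` (cautious near the level where loss last occurred), then steepens again to probe for newly available bandwidth.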

Reno, the venerable ancestor, uses additive increase, multiplicative decrease (AIMD): roughly one extra segment per round trip, and when congestion is detected, it halves its window size. This is a more cautious approach, but it is slow to utilize high-bandwidth, high-latency links effectively. It tends to leave bandwidth on the table.
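A toy simulation makes the AIMD sawtooth visible. The loss pattern here is invented purely to show the shape, and slow start and fast recovery are deliberately omitted:

```python
# Toy AIMD simulation in the spirit of Reno's congestion avoidance:
# +1 segment per RTT without loss, halve the window on loss.
def simulate_reno(rtts, loss_at, cwnd=10):
    """Return the congestion window after each round trip."""
    history = []
    for rtt in range(rtts):
        if rtt in loss_at:
            cwnd = max(cwnd // 2, 1)  # multiplicative decrease
        else:
            cwnd += 1                 # additive increase
        history.append(cwnd)
    return history

print(simulate_reno(10, loss_at={5}))
# Climbs by 1 each RTT, halves at the loss, then climbs again
```

The linear climb is the problem on large pipes: recovering from a single halving on a 10 Gbit/s transatlantic path can take thousands of round trips.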

BBR (Bottleneck Bandwidth and Round-trip propagation time), developed by Google, takes a different tack. Instead of reacting to packet loss, BBR tries to measure the network’s bottleneck bandwidth and minimum round-trip time. It then aims to send data at a rate that fills the bottleneck without exceeding it, thus avoiding packet loss altogether. This makes it particularly good in environments with significant bufferbloat or where packet loss is due to causes other than genuine congestion.
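The quantity at the heart of BBR's model is the bandwidth-delay product (BDP): the ideal amount of data in flight is the bottleneck bandwidth times the minimum RTT. A back-of-the-envelope calculation, with illustrative numbers rather than measured ones:

```python
# The core of BBR's model: keep roughly one bandwidth-delay product
# (BDP) of data in flight -- enough to fill the pipe, not the buffers.
def bdp_bytes(bottleneck_bw_bps, min_rtt_s):
    """Bandwidth-delay product in bytes."""
    return bottleneck_bw_bps / 8 * min_rtt_s

bw = 100e6   # illustrative: 100 Mbit/s bottleneck
rtt = 0.040  # illustrative: 40 ms minimum round-trip time
print(f"BDP: {bdp_bytes(bw, rtt):.0f} bytes")  # ~500 KB in flight
```

Loss-based algorithms keep pushing until the bottleneck buffer overflows, so with deep buffers they sit well above one BDP in flight, inflating latency. BBR aims to sit right at the BDP.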

Here’s a typical BBR configuration on Linux:

```shell
# Enable BBR (set the fq qdisc first; older kernels require it for BBR)
sudo sysctl -w net.core.default_qdisc=fq
sudo sysctl -w net.ipv4.tcp_congestion_control=bbr

# Optional, and unrelated to congestion control: TCP Fast Open
# for both outgoing and incoming connections
sudo sysctl -w net.ipv4.tcp_fastopen=3

# Verify (to persist across reboots, add the settings to /etc/sysctl.conf)
sysctl net.ipv4.tcp_congestion_control
sysctl net.core.default_qdisc
```

With BBR, you’ll often see less fluctuation in bandwidth and a more consistent throughput, especially if the network path has large buffers that can absorb bursts of traffic. It tries to operate in a "lossless" regime.
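The sysctl above changes the system-wide default, but on Linux an individual application can also pick its own algorithm per socket via the `TCP_CONGESTION` socket option. A minimal sketch, assuming a Linux kernel with `cubic` available (it is the default and essentially always compiled in):

```python
import socket

# Per-connection override (Linux-only): TCP_CONGESTION lets a single
# socket use a different algorithm than the system-wide sysctl default.
def set_congestion(sock, algo):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, algo.encode())

def get_congestion(sock):
    raw = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
    return raw.rstrip(b"\x00").decode()

if hasattr(socket, "TCP_CONGESTION"):  # attribute is absent on non-Linux
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    set_congestion(s, "cubic")
    print(get_congestion(s))
    s.close()
```

Setting `"bbr"` here fails with `OSError` unless the `tcp_bbr` module is loaded, which is a quick way to check what your kernel actually supports.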

The mental model for these algorithms is about how they react to or predict network conditions. Reno and CUBIC are loss-based: they assume packet loss means congestion and back off. BBR is model-based: it builds an explicit model of the path from its bandwidth and RTT measurements and paces its sending rate to match, rather than waiting for loss. CUBIC’s cubic function is a mathematical way to achieve faster window growth in the face of high bandwidth-delay products, allowing it to reach and utilize that bandwidth more quickly than Reno’s linear ramp-up.

The surprising thing about BBR is its ability to maintain high throughput even when there’s packet loss, provided that loss isn’t caused by a true bottleneck. It can "out-perform" loss-based algorithms on lossy or deep-buffered paths because it does not interpret every lost packet as a congestion signal; it keeps sending at its estimate of the link’s true capacity.

The next step in TCP optimization often involves exploring variations of these algorithms or considering application-level strategies to manage data flow more intelligently.
