BBR’s core innovation isn’t reacting to congestion more cleverly, but changing the congestion signal entirely: instead of waiting for packet loss, it builds an explicit model of the network path and paces traffic to match what the path can actually carry.

Let’s see BBR in action. Imagine a server 192.168.1.10 and a client 192.168.1.20, separated by a simulated bottleneck link. We’ll use iperf3 to generate traffic.
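One way to create such a bottleneck on the machine between them is with tc; the interface name (eth0), rate, and delay below are illustrative choices, not values from any particular setup:

```shell
# Shape eth0 (substitute your real interface): add 20 ms of delay
# with netem, then rate-limit to 50 Mbit/s with a token bucket filter.
sudo tc qdisc add dev eth0 root handle 1: netem delay 20ms
sudo tc qdisc add dev eth0 parent 1: handle 10: tbf rate 50mbit burst 32kbit latency 400ms

# Remove the shaping when you are done:
#   sudo tc qdisc del dev eth0 root
```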

First, ensure your kernel supports BBR. It’s been in the Linux kernel since 4.9.
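A quick sanity check: list the congestion control algorithms the running kernel offers and confirm bbr is among them:

```shell
# List the congestion-control algorithms the kernel currently offers;
# look for "bbr" in the output.
sysctl net.ipv4.tcp_available_congestion_control

# If bbr is missing, load the module first:
#   sudo modprobe tcp_bbr
```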

On both the server and client, enable BBR:

sudo sysctl net.ipv4.tcp_congestion_control=bbr

Now, run iperf3 on the server to listen:

iperf3 -s

And on the client, initiate the transfer:

iperf3 -c 192.168.1.10

You’ll likely see higher throughput than with traditional algorithms like Cubic, especially on networks with high Bandwidth-Delay Product (BDP).
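To make “high BDP” concrete: the bandwidth-delay product is just bottleneck bandwidth times round-trip time, i.e. the number of bytes that must be in flight to keep the pipe full. The figures below are illustrative:

```shell
# BDP = bandwidth * RTT. For a 100 Mbit/s path with a 40 ms RTT:
bw_bits=100000000   # bottleneck bandwidth, bits/s
rtt_ms=40           # round-trip time, ms
echo $(( bw_bits / 8 * rtt_ms / 1000 ))   # prints 500000 (bytes)
```

Half a megabyte in flight is well beyond the default windows of many stacks, which is where loss-based algorithms tend to struggle.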

BBR stands for Bottleneck Bandwidth and Round-trip Propagation time. It works by maintaining two primary models:

  1. Bandwidth model: Estimates the maximum bandwidth available on the path between sender and receiver.
  2. RTT model: Estimates the minimum Round-Trip Time (RTT) on the path.

Instead of reacting to packet loss (as Cubic does), BBR periodically probes for extra bandwidth by briefly raising its pacing rate above the estimated bottleneck bandwidth. If the delivery rate rises, the bandwidth model is updated upward; if only RTT rises, BBR concludes the bottleneck queue is filling and drains back down to its estimated rate. Rising RTT, not packet loss, is its congestion signal, which lets it back off before drops occur.
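You can watch both models live on the sender. For BBR flows, ss prints the current bandwidth and min-RTT estimates per connection (the exact field set varies slightly across iproute2 versions):

```shell
# Show TCP internals for established connections. On a BBR flow the
# output includes a section like
#   bbr:(bw:94.3Mbps,mrtt:20.1,pacing_gain:1.25,cwnd_gain:2)
# i.e. the live bandwidth and min-RTT model estimates.
ss -tin state established
```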

The key levers you control are net.core.default_qdisc and net.ipv4.tcp_congestion_control.

net.core.default_qdisc should be fq (Fair Queue). fq provides per-flow packet pacing, which BBR relies on to release packets smoothly at its computed rate; kernels before 4.13 require fq for BBR to work correctly, while newer kernels can pace within TCP itself, making fq a strong recommendation rather than a hard requirement. fq_codel adds AQM (Active Queue Management) and helps against bufferbloat on routers, but it does not pace, so prefer plain fq on BBR senders.

sudo sysctl net.core.default_qdisc=fq

net.ipv4.tcp_congestion_control is where you set bbr.

sudo sysctl net.ipv4.tcp_congestion_control=bbr
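sysctl changes made this way last only until reboot. To persist both settings, drop them into a sysctl configuration file (the filename 99-bbr.conf is an arbitrary choice):

```shell
# Persist the qdisc and congestion-control settings across reboots;
# /etc/sysctl.d/ is read at boot by systemd-sysctl.
cat <<'EOF' | sudo tee /etc/sysctl.d/99-bbr.conf
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
EOF

# Apply all sysctl configuration files immediately
sudo sysctl --system
```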

Unlike some congestion control modules, mainline BBR exposes no tunable sysctl parameters of its own: constants such as the 10-second window of its min-RTT filter and the pacing-gain cycle are compiled into the tcp_bbr module. The two sysctls above are the entire knob set. What you can do is confirm what the kernel offers and what is currently active:

sysctl net.ipv4.tcp_available_congestion_control

sysctl net.ipv4.tcp_congestion_control

BBR’s models are continuously updated. When BBR sees RTT rise above its minimum estimate, it assumes queues are building and reduces its pacing rate to drain them; roughly every ten seconds it also briefly cuts its in-flight data (the ProbeRTT phase) to refresh the minimum-RTT sample. This makes BBR far less prone to filling buffers than traditional loss-based algorithms, which is beneficial for latency-sensitive applications but can leave some capacity unused on very stable, high-bandwidth links.

When you enable BBR, you’re telling the TCP stack to prioritize measuring the network path’s capacity (bandwidth and latency) over reacting to packet loss. It actively probes for bandwidth and uses RTT measurements to infer congestion, aiming to keep buffers as empty as possible while maximizing throughput. This is a fundamental shift from algorithms that consider packet loss the primary signal for congestion.
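Note that the sysctl sets only the system-wide default. If you want this shift for some paths but not others, iproute2 can pin a congestion control per route; the destination and gateway below are illustrative:

```shell
# Use BBR only for traffic toward 203.0.113.0/24, leaving the
# system-wide default untouched (requires the tcp_bbr module).
sudo ip route add 203.0.113.0/24 via 192.168.1.1 congctl bbr
```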

The next challenge you’ll encounter is understanding how BBR interacts with middleboxes that strip TCP options or whose stateful inspection doesn’t expect BBR’s pacing behavior.
