TCP flow control is a mechanism that prevents a fast sender from overwhelming a slow receiver. It works by having the receiver continuously tell the sender how much data it is prepared to accept, so the sender's pace ends up dictated by the receiver's own limitations.
Let’s watch this happen. Imagine a sender has 100 MB of data to send to a receiver.
Sender: SYN ->
<- SYN-ACK
Sender: ACK ->
Sender: [Data Block 1 (1MB)] ->
<- ACK (for 1MB)
Sender: [Data Block 2 (1MB)] ->
<- ACK (for 2MB)
Sender: [Data Block 3 (1MB)] ->
<- ACK (for 3MB)
This looks pretty straightforward. The sender sends a chunk of data, the receiver acknowledges it, and the sender sends the next chunk. But what happens if the receiver’s application can’t process data as fast as the network delivers it?
The receiver has a buffer, a holding area for incoming data before the application reads it. If this buffer fills up, the receiver can’t accept any more data from the sender. This is where TCP’s "sliding window" comes into play.
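You can provoke this stall on your own machine. The sketch below (not production code; buffer sizes and behavior vary by OS, and the kernel may round or double the values you request) shrinks the kernel buffers, has the receiver accept a connection but never read, and watches the sender's non-blocking send() get refused once everything downstream is full:

```python
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)  # tiny receive buffer
srv.bind(("127.0.0.1", 0))
srv.listen(1)

held = {}
def accept_and_ignore():
    conn, _ = srv.accept()
    held["conn"] = conn  # keep the connection open, but never call recv()

t = threading.Thread(target=accept_and_ignore)
t.start()

cli = socket.create_connection(srv.getsockname())
cli.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)  # tiny send buffer
t.join()
cli.setblocking(False)

sent = 0
try:
    while True:
        sent += cli.send(b"x" * 4096)
except BlockingIOError:
    # The receiver's buffer (and advertised window) plus our own send
    # buffer are full; the kernel refuses to accept more data from us.
    pass
print(f"send() stalled after {sent} bytes")
```

With default (multi-megabyte) buffers the same stall happens, just much later — which is exactly why it often goes unnoticed until production load.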
The receiver advertises its available buffer space, its "receive window," in the TCP header of its acknowledgments.
Sender: SYN ->
<- SYN-ACK (Receive Window: 64KB)
Sender: [Data Block 1 (16KB)] ->
<- ACK (for 16KB, Receive Window: 48KB)
Sender: [Data Block 2 (16KB)] ->
<- ACK (for 32KB, Receive Window: 32KB)
Sender: [Data Block 3 (16KB)] ->
<- ACK (for 48KB, Receive Window: 16KB)
Notice two things. First, the sender can no longer fire off 1MB blocks: it may never have more unacknowledged data in flight than the advertised window allows, so with a 64KB window it sends in smaller pieces. Second, the window shrinks as the buffer fills — the receiving application isn't reading, so each 16KB block consumes 16KB of buffer space. Once the buffer is completely full, the receiver advertises a window of 0.
Sender: [Data Block 4 (16KB)] ->
<- ACK (for 64KB, Receive Window: 0KB)
Now the sender stops sending data. It enters the "persist" state, periodically sending a tiny probe packet (1 byte) to elicit a fresh window advertisement. This prevents a deadlock: the ACK that later reopens the window carries no data and is never retransmitted, so if that ACK is lost, the sender would wait forever for a window it will never hear about — unless it keeps probing.
Sender: [Probe Packet (1 byte)] ->
<- ACK (for 64KB, Receive Window: 0KB)
Sender: [Probe Packet (1 byte)] ->
<- ACK (for 64KB, Receive Window: 0KB)
Eventually, the receiver's application will read some data from its buffer.
Receiver Application: Reads 32KB from buffer.
<- ACK (for 64KB, Receive Window: 32KB)
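The probing described above doesn't happen at a fixed rate: stacks typically back off exponentially up to a cap. The intervals below are purely illustrative — real implementations derive the initial interval from the measured retransmission timeout and clamp it differently:

```python
# Illustrative sketch of persist-timer backoff (values are ours, not a spec).
def persist_intervals(initial: float = 1.0, cap: float = 60.0, probes: int = 8):
    """Seconds to wait before each successive zero-window probe."""
    interval, schedule = initial, []
    for _ in range(probes):
        schedule.append(interval)
        interval = min(interval * 2, cap)
    return schedule

print(persist_intervals())  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0, 60.0]
```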
The receiver advertises a larger window, and the sender can resume sending data, but only up to that new window size. The "window" effectively slides forward as data is acknowledged and buffer space becomes available.
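The sender-side arithmetic behind the sliding window reduces to a few lines. This sketch uses our own variable names (not RFC terminology) and illustrative numbers: what the sender may still transmit is the advertised window minus whatever is already in flight (sent but not yet acknowledged).

```python
def usable_window(last_byte_acked: int, last_byte_sent: int, advertised: int) -> int:
    """Bytes the sender may still transmit without overrunning the receiver."""
    in_flight = last_byte_sent - last_byte_acked
    return max(advertised - in_flight, 0)

KB = 1024
print(usable_window(0, 48 * KB, 64 * KB))        # 16384: room for one more 16KB block
print(usable_window(0, 64 * KB, 0))              # 0: window closed, sender must wait
print(usable_window(64 * KB, 64 * KB, 32 * KB))  # 32768: window reopened after the app read
```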
The critical levers here are the net.ipv4.tcp_rmem and net.ipv4.tcp_wmem sysctl parameters on Linux. tcp_rmem defines the minimum, default, and maximum receive buffer sizes for TCP sockets. tcp_wmem does the same for send buffers.
For example, on a Linux server, you might see:
sysctl net.ipv4.tcp_rmem
# net.ipv4.tcp_rmem = 4096 87380 6291456
This means:
- Minimum: 4096 bytes (small, but ensures basic operation).
- Default: 87380 bytes (the initial receive buffer size for new TCP sockets; the kernel auto-tunes it between the minimum and maximum as the connection runs).
- Maximum: 6291456 bytes (the largest the receive buffer can grow to).
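If the maximum is the bottleneck, it can be raised at runtime. The values and the sysctl.d filename below are illustrative examples, not recommendations; the commands require root and apply system-wide, not per socket:

```shell
# Raise the ceiling so auto-tuning can grow receive buffers for
# high-bandwidth transfers (example values only).
sysctl -w net.ipv4.tcp_rmem="4096 131072 16777216"

# Persist across reboots (hypothetical filename):
echo "net.ipv4.tcp_rmem = 4096 131072 16777216" | sudo tee /etc/sysctl.d/90-tcp-buffers.conf
```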
If your receiver application is slow, and the sender is sending data faster than the application can consume it, the receiver's buffer will fill up, the advertised window will shrink, and the sender will slow down. If the maximum tcp_rmem value is too small for your expected traffic, the sender will be unnecessarily throttled even when brief buffering would have absorbed the burst. Increasing the maximum allows the receiver to buffer more data, smoothing out bursts and letting the sender maintain higher throughput through temporary hiccups in the receiver's application. One caveat: if an application sets SO_RCVBUF explicitly, Linux disables receive-buffer auto-tuning for that socket, and the buffer is instead capped by net.core.rmem_max.
The sender side has its own buffer, sized by tcp_wmem, which limits how much unacknowledged data it can hold. The amount actually in flight at any moment is bounded by the smaller of the sender's congestion window and the receiver's advertised window; when the receiver is the slow party, the advertised window is usually the binding constraint.
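That interaction reduces to a single expression — the in-flight limit is the minimum of the two windows. A sketch with illustrative numbers (MSS value is a common Ethernet-derived figure, not a universal constant):

```python
def send_limit(congestion_window: int, advertised_window: int) -> int:
    """Bytes a sender may have in flight: the tighter of the two windows."""
    return min(congestion_window, advertised_window)

MSS = 1460  # a typical maximum segment size on Ethernet paths
print(send_limit(10 * MSS, 64 * 1024))   # 14600: congestion window is the tighter bound
print(send_limit(100 * MSS, 32 * 1024))  # 32768: a slow receiver caps the sender instead
```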
The real surprise is that TCP flow control isn't about network congestion at all — that's congestion control, a separate mechanism. Flow control is fundamentally about managing the disparity between application processing speed and network delivery speed at the receiver, using the sender as the instrument of that control.
The next concept you’ll bump into is how TCP handles packet loss and retransmissions, which interacts surprisingly with the flow control window.