UDP buffer sizes are surprisingly sticky, often defaulting to a couple hundred kilobytes, which is a major bottleneck for high-throughput network applications.

Let’s see what a UDP socket looks like before and after tuning.

First, a quick check of a running application’s UDP socket buffer sizes on Linux. We’ll use ss for this. Imagine we have an application listening on UDP port 12345.

ss -uan 'src :12345'

You’ll likely see something like this for the receive buffer:

State    Recv-Q   Send-Q     Local Address:Port      Peer Address:Port
UNCONN   0        0              127.0.0.1:12345          0.0.0.0:*

The Recv-Q and Send-Q here aren’t the actual socket buffer sizes, but rather the amount of data currently queued. The real buffer sizes come from the net.core.rmem_default and net.core.wmem_default sysctls, and can be overridden per socket via the SO_RCVBUF and SO_SNDBUF socket options. The default is often a paltry 212992 bytes (about 208KB) in each direction on many Linux systems. This is a ridiculously small amount for anything trying to push significant data over UDP, like streaming or high-frequency trading.
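To see the buffer sizes a socket actually has (rather than the bytes queued in them), ss can include socket memory details with -m. A quick sketch, assuming the same socket bound on UDP port 12345:

```shell
# -m adds an skmem(...) field: rb is the receive buffer size (SO_RCVBUF)
# and tb the send buffer size (SO_SNDBUF), both in bytes.
ss -uanm 'src :12345'
# Example output (exact values vary by system):
#   UNCONN 0 0 127.0.0.1:12345 0.0.0.0:*
#       skmem:(r0,rb212992,t0,tb212992,f0,w0,o0,bl0,d0)
```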

To tune these, you have two main avenues: application-level socket options or system-wide sysctl settings.

Application-Level Tuning (Recommended for Specific Apps)

This is the most precise way, affecting only the specific socket. You’d modify your application code to set these options. Here’s a C example:

#include <sys/socket.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    int sockfd;

    // Create socket
    sockfd = socket(AF_INET, SOCK_DGRAM, 0);
    if (sockfd < 0) {
        perror("socket creation failed");
        exit(EXIT_FAILURE);
    }

    // Set SO_RCVBUF to 4MB
    int rcvbuf_size = 4 * 1024 * 1024; // 4MB
    if (setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &rcvbuf_size, sizeof(rcvbuf_size)) < 0) {
        perror("setsockopt SO_RCVBUF failed");
        // Continue, but with default buffer size
    } else {
        printf("SO_RCVBUF set to %d bytes\n", rcvbuf_size);
    }

    // Set SO_SNDBUF to 4MB
    int sndbuf_size = 4 * 1024 * 1024; // 4MB
    if (setsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &sndbuf_size, sizeof(sndbuf_size)) < 0) {
        perror("setsockopt SO_SNDBUF failed");
        // Continue, but with default buffer size
    } else {
        printf("SO_SNDBUF set to %d bytes\n", sndbuf_size);
    }

    // ... rest of your UDP server code (bind, recvfrom, etc.)
    // For demonstration, we'll just exit after setting options.
    close(sockfd);
    return 0;
}

When you run this (or a similar snippet in Python, Go, etc.), the kernel will attempt to allocate the requested buffer sizes. The actual size can differ from the request: Linux doubles the value you pass (to leave room for bookkeeping overhead) and silently caps it at net.core.rmem_max / net.core.wmem_max. getsockopt can be used to verify the final values.

System-Wide Tuning (Use with Caution)

This affects all UDP sockets on the system. It’s useful if you have many applications that need larger buffers and you don’t want to modify them all.

On Linux, you modify sysctl parameters:

# Check current values
sysctl net.core.rmem_max
sysctl net.core.rmem_default
sysctl net.core.wmem_max
sysctl net.core.wmem_default

# Set max receive buffer size to 16MB
sudo sysctl -w net.core.rmem_max=16777216

# Set default receive buffer size to 4MB
sudo sysctl -w net.core.rmem_default=4194304

# Set max send buffer size to 16MB
sudo sysctl -w net.core.wmem_max=16777216

# Set default send buffer size to 4MB
sudo sysctl -w net.core.wmem_default=4194304

To make these persistent across reboots, edit /etc/sysctl.conf and add these lines, then run sudo sysctl -p.
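The persistent version, as lines in /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/), using the same 16MB/4MB values from above:

```
# Larger socket buffer limits for high-throughput UDP
net.core.rmem_max = 16777216
net.core.rmem_default = 4194304
net.core.wmem_max = 16777216
net.core.wmem_default = 4194304
```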

Why This Works:

  • SO_RCVBUF (Receive Buffer): This buffer is where the kernel temporarily stores incoming UDP packets that have arrived but haven’t yet been read by your application. A larger buffer allows the kernel to absorb bursts of incoming packets without dropping them, especially important if your application can’t process them fast enough in a given moment.
  • SO_SNDBUF (Send Buffer): This buffer is used by the kernel to hold outgoing UDP packets that your application has sent but which haven’t yet been transmitted onto the network. A larger buffer lets your application write data to the socket more quickly, allowing it to continue generating data without waiting for the network stack to send it out.
  • net.core.rmem_max / net.core.wmem_max: These are the absolute maximum sizes the kernel will allow for any receive/send buffer, respectively, across all sockets.
  • net.core.rmem_default / net.core.wmem_default: These are the default sizes that sockets will be created with if not explicitly set by the application. Application-level setsockopt calls can request sizes larger than rmem_default up to rmem_max.

Common Causes for UDP Performance Issues Related to Buffers:

  1. Small Default Buffers: As mentioned, the default SO_RCVBUF and SO_SNDBUF (around 208KB on typical Linux systems) are insufficient for high-bandwidth UDP.

    • Diagnosis: ss -uan 'src :<port>' and compare Recv-Q/Send-Q to network throughput. Use getsockopt in an application to read the actual buffer size.
    • Fix: Increase SO_RCVBUF and SO_SNDBUF via setsockopt in the application or by adjusting net.core.rmem_default/net.core.wmem_default and net.core.rmem_max/net.core.wmem_max sysctls. Example: setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &new_size, sizeof(new_size)); with new_size = 8388608; (8MB).
    • Why it works: Provides more space for the kernel to queue incoming/outgoing datagrams, preventing drops under load.
  2. Kernel Packet Drops (UDP): Even with larger buffers, if the rate of incoming packets exceeds the rate at which the application can consume them and the buffer capacity, packets will be dropped.

    • Diagnosis: netstat -su will show counters such as packet receive errors and receive buffer errors under the Udp: section; the same numbers appear in /proc/net/snmp.
    • Fix: Increase SO_RCVBUF further. If that’s not enough, optimize the application’s packet processing logic to be faster, or increase the system’s net.core.rmem_max.
    • Why it works: A larger receive buffer gives the application more time to catch up before packets are discarded.
  3. Application I/O Bottleneck: The application itself might be too slow to read from or write to the UDP socket.

    • Diagnosis: Profiling the application. High CPU usage on the application threads, or Recv-Q/Send-Q consistently high in ss output even with large buffers.
    • Fix: Optimize application code, use asynchronous I/O, or multi-threading to handle UDP data.
    • Why it works: Faster application processing reduces the effective load on the kernel buffers.
  4. Network Congestion/Packet Loss: If the network path between sender and receiver is lossy or congested, UDP packets will be dropped by intermediate routers or the receiver’s NIC, not necessarily due to buffer full.

    • Diagnosis: ping with large packet sizes, mtr or traceroute to identify packet loss points. Compare with the UDP error counters (netstat -su) on the receiving host.
    • Fix: This is a network problem, not a buffer tuning one. Requires addressing the network path (e.g., QoS, reducing traffic, fixing faulty hardware).
    • Why it works: UDP has no built-in retransmission, so lost packets are just gone. Fixing the network is the only solution.
  5. Socket Option Limits: The kernel has hard limits for SO_RCVBUF and SO_SNDBUF controlled by net.core.rmem_max and net.core.wmem_max. Applications requesting more than these limits will silently receive the maximum allowed, with no error returned.

    • Diagnosis: Use getsockopt to retrieve the actual buffer size set after setsockopt and compare it to the requested size.
    • Fix: Increase net.core.rmem_max and net.core.wmem_max via sysctl. Example: sudo sysctl -w net.core.rmem_max=33554432 (32MB).
    • Why it works: Explicitly raises the kernel-imposed ceiling for socket buffer sizes.
  6. TCP vs. UDP Confusion: Sometimes, developers mistakenly apply TCP tuning principles or expect UDP to behave like TCP. UDP is connectionless and unreliable; it doesn’t have the same flow control or congestion control mechanisms that TCP does.

    • Diagnosis: Reviewing application logic and network protocol choices.
    • Fix: Understand UDP’s nature. Tuning focuses on providing sufficient buffering to handle bursts and minimize drops, not on achieving guaranteed delivery or ordered delivery.
    • Why it works: Aligns expectations with UDP’s actual behavior.
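For the drop counters mentioned in items 2 and 4, the raw kernel counters in /proc/net/snmp work even where netstat isn’t installed. A sketch:

```shell
# The Udp: lines in /proc/net/snmp are cumulative counters since boot.
# InErrors counts datagrams dropped for any receive-side reason;
# RcvbufErrors counts drops caused specifically by a full SO_RCVBUF.
grep '^Udp:' /proc/net/snmp
# The first Udp: line is the header, the second holds the values, e.g.:
# Udp: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors ...
# Udp: 1234567     89      0        7654321      0            0 ...
```

If RcvbufErrors is climbing, buffer tuning will help; if InErrors climbs while RcvbufErrors stays flat, look elsewhere (checksum failures, no listener on the port, etc.).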

The next rabbit hole you’ll likely fall into is how the kernel caps the total memory used by all UDP sockets combined, which is controlled by net.ipv4.udp_mem and the related net.ipv4.udp_rmem_min / net.ipv4.udp_wmem_min sysctls.
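For a head start, those limits are readable like any other sysctl. Roughly: udp_mem is three page counts (min, pressure, max) applied to all UDP sockets together, while udp_rmem_min / udp_wmem_min are per-socket byte minimums honored even under memory pressure:

```shell
# Three values, in pages: below min the kernel doesn't account for UDP
# memory, above pressure it starts to throttle, above max allocations fail.
sysctl net.ipv4.udp_mem
# Per-socket minimum receive/send buffer sizes, in bytes:
sysctl net.ipv4.udp_rmem_min net.ipv4.udp_wmem_min
```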

Want structured learning?

Take the full UDP course →