UDP is the internet’s unsung hero for speed, letting applications ditch the handshake and just send data, but that freedom comes with a responsibility to handle reliability yourself.

Let’s see it in action. Imagine a simple UDP chat application.

Server (udp_chat_server.py)

import socket

SERVER_IP = '127.0.0.1'
SERVER_PORT = 12345

server_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server_socket.bind((SERVER_IP, SERVER_PORT))

print(f"UDP server listening on {SERVER_IP}:{SERVER_PORT}")

while True:
    data, client_address = server_socket.recvfrom(1024) # Buffer size is 1024 bytes
    message = data.decode()
    print(f"Received from {client_address}: {message}")

    # Echo message back to the client
    server_socket.sendto(data, client_address)

Client (udp_chat_client.py)

import socket
import time

SERVER_IP = '127.0.0.1'
SERVER_PORT = 12345
CLIENT_PORT = 54321 # Optional: bind client to a specific port

client_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
if CLIENT_PORT:
    client_socket.bind(('127.0.0.1', CLIENT_PORT)) # Bind the client's own local address to a fixed port

message = "Hello, UDP!"
print(f"Sending: {message}")
client_socket.sendto(message.encode(), (SERVER_IP, SERVER_PORT))

# Try to receive the echoed message
# We'll use a timeout to avoid blocking indefinitely if the server doesn't respond
client_socket.settimeout(2.0) # 2-second timeout

try:
    data, server_address = client_socket.recvfrom(1024)
    echoed_message = data.decode()
    print(f"Received echo from {server_address}: {echoed_message}")
except socket.timeout:
    print("No echo received within the timeout period.")

client_socket.close()

When you run the server and then the client, you’ll see the client send a message, and the server will echo it back. This is the core of UDP communication: fire and forget, with an optional acknowledgement if you build it in yourself.

The fundamental problem UDP solves is the overhead of TCP. TCP establishes a connection with a three-way handshake (SYN, SYN-ACK, ACK), guarantees ordered delivery, and handles retransmissions. This is great for reliability but adds latency. UDP, on the other hand, is connectionless. When you send a UDP packet, it’s just sent out. There’s no guarantee it will arrive, no guarantee it will arrive in order, and no guarantee it won’t be duplicated.

This makes UDP ideal for applications where speed is paramount and some data loss is acceptable, or where the application layer can handle reliability. Think of online gaming, Voice over IP (VoIP), streaming video, and DNS. In our chat example, the server chooses to echo the message back, acting as a simple form of acknowledgement. If the client doesn’t receive the echo, it knows something went wrong, and it can choose to resend.
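One way to act on a missing echo is a simple timeout-and-resend loop. The sketch below is a minimal example of that idea; the function name, retry count, and timeout are all arbitrary choices, and it assumes an echo server like the one above.

```python
import socket

def send_with_retry(message: bytes, server=("127.0.0.1", 12345),
                    retries=3, timeout=2.0):
    """Send a datagram and wait for the echo, resending on timeout.

    Returns the echoed bytes, or None if every attempt timed out.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        for attempt in range(1, retries + 1):
            sock.sendto(message, server)
            try:
                data, _ = sock.recvfrom(1024)
                return data
            except socket.timeout:
                print(f"Attempt {attempt}: no echo, resending...")
        return None
    finally:
        sock.close()
```

Note that this is stop-and-wait reliability: only one message is in flight at a time, which is simple but limits throughput.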

The key levers you control with UDP are:

  • Packet Size: UDP has a theoretical maximum payload of 65,507 bytes. However, the practical limit is often dictated by the Maximum Transmission Unit (MTU) of the underlying network, typically around 1500 bytes for Ethernet. Sending packets larger than the MTU will result in fragmentation, which can increase latency and the chance of loss.
  • Buffering: Both the client and server have receive buffers. recvfrom(buffer_size) defines how many bytes can be read at once. If data arrives faster than your application can process it, packets can be dropped from the buffer.
  • Timeouts and Retries: Since UDP doesn’t guarantee delivery, your application must implement its own mechanisms for handling lost packets. This typically involves setting timeouts on recvfrom calls and deciding whether to resend data if an acknowledgement isn’t received.
  • Application-Level Sequencing: If order matters, you need to add sequence numbers to your UDP packets and reorder them at the receiving end.
  • Checksums: UDP carries a 16-bit checksum for detecting corruption. It is optional over IPv4 (a sender may set it to zero) but mandatory over IPv6, and it only detects errors; corrupted datagrams are silently dropped, with no correction or retransmission.
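To make the sequencing lever concrete, here is a minimal sketch of tagging each payload with a 32-bit sequence number using struct and releasing datagrams in order on the receiving side. The header format and helper names are illustrative, not part of any standard.

```python
import struct

HEADER = struct.Struct("!I")  # 4-byte big-endian sequence number

def make_packet(seq: int, payload: bytes) -> bytes:
    """Prefix the payload with its sequence number."""
    return HEADER.pack(seq) + payload

def parse_packet(packet: bytes):
    """Split a datagram back into (seq, payload)."""
    (seq,) = HEADER.unpack_from(packet)
    return seq, packet[HEADER.size:]

class Reorderer:
    """Buffer out-of-order packets and release them in sequence."""
    def __init__(self):
        self.next_seq = 0
        self.pending = {}

    def push(self, seq: int, payload: bytes):
        """Accept one packet; return any payloads now deliverable in order."""
        self.pending[seq] = payload
        ready = []
        while self.next_seq in self.pending:
            ready.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return ready
```

A receiver would call parse_packet on each recvfrom result and feed the pieces to Reorderer.push; anything buffered past a gap is held until the missing packet arrives.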

The surprising truth about UDP’s speed is how much of the "heavy lifting" TCP does that you can simply skip. TCP’s reliability features, while essential for many applications, cost you connection state, acknowledgements, and retransmission timers on every transfer. By removing the connection setup, flow control, and reliable delivery guarantees, UDP dramatically reduces the per-packet overhead. This is why, for high-throughput, low-latency scenarios, UDP is often the go-to. You’re essentially trading guaranteed delivery for raw speed, and it’s a trade-off that pays off when latency is the enemy.

One aspect often overlooked is how the operating system’s network stack handles UDP. While you’re sending packets, the OS is busy managing buffers, queues, and scheduling. If your application isn’t reading from the receive buffer fast enough, packets can be dropped before your recvfrom call even has a chance to see them. This is a common source of perceived packet loss that isn’t necessarily a network issue but an application throughput issue.
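You can ask the OS for a larger receive buffer with setsockopt. This doesn’t eliminate drops, but it buys a slow reader more headroom. The 64 KiB figure below is an arbitrary request; the kernel may adjust it (Linux typically reports double the requested value to account for bookkeeping overhead, and silently caps requests at net.core.rmem_max).

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

default = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)  # request 64 KiB
actual = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

print(f"receive buffer: {default} -> {actual} bytes")
```

If the reported value never grows past a ceiling, the fix is a sysctl change, not more setsockopt calls.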

The next step in building a fast UDP application is often implementing a custom reliability layer, like a sliding window protocol, on top of UDP.
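As a taste of what that layer involves, here is a sketch of just the sender-side bookkeeping for a sliding window: which sequence numbers are in flight, when the window is full, and which packets are due for retransmission. All names and the window size are illustrative assumptions; a real implementation would wire this to actual socket sends and an ACK receive loop.

```python
import time

class WindowSender:
    """Sender-side sliding-window bookkeeping (no actual I/O)."""
    def __init__(self, window_size=4, rto=0.5):
        self.window_size = window_size
        self.rto = rto              # retransmission timeout, in seconds
        self.next_seq = 0
        self.in_flight = {}         # seq -> (payload, last_send_time)

    def can_send(self) -> bool:
        """True while there is room in the window for another packet."""
        return len(self.in_flight) < self.window_size

    def send(self, payload: bytes) -> int:
        """Record a new packet as in flight; returns its sequence number."""
        assert self.can_send(), "window is full"
        seq = self.next_seq
        self.in_flight[seq] = (payload, time.monotonic())
        self.next_seq += 1
        return seq

    def ack(self, seq: int):
        """Mark a packet acknowledged, freeing window space."""
        self.in_flight.pop(seq, None)

    def due_for_retransmit(self):
        """Return seqs whose timeout expired, refreshing their timers."""
        now = time.monotonic()
        due = [s for s, (_, t) in self.in_flight.items() if now - t >= self.rto]
        for s in due:
            payload, _ = self.in_flight[s]
            self.in_flight[s] = (payload, now)
        return due
```

The window size bounds how much unacknowledged data is outstanding, which is exactly the flow-control role TCP would otherwise play for you.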

Want structured learning?

Take the full UDP course →