UDP is often called "unreliable" for a reason: it doesn’t guarantee delivery, order, or even that a packet will arrive only once.
Imagine a game server sending player positions to 100 clients 60 times a second. Over TCP, a single lost packet stalls the whole stream: nothing behind it can be delivered to the application until the loss is detected and the data retransmitted, a problem known as head-of-line blocking. Repeated across thousands of packets per second, those stalls show up as lag spikes and a choppy experience. UDP, on the other hand, just fires off packets. The server sends the position data, and if a packet gets lost or arrives out of order, the game logic simply uses the next available update. This "fire and forget" approach is crucial for real-time applications like games where low latency is paramount.
Here’s a simplified look at how that might play out in a game loop, using hypothetical Rust-like pseudocode.
// Server-side
use std::net::UdpSocket;
use std::time::Duration;

let socket = UdpSocket::bind("0.0.0.0:8080").unwrap();
let mut game_state = GameState::new();

loop {
    // Advance the simulation one tick
    game_state.update();

    // Serialize once, then send the same snapshot to every client
    let serialized_state = serialize(&game_state.player_data);
    for client_addr in game_state.clients.iter() {
        socket.send_to(&serialized_state, client_addr).unwrap_or_else(|e| {
            eprintln!("Failed to send to {}: {}", client_addr, e);
            0 // Don't panic on send error
        });
    }

    // Short sleep to control tick rate
    std::thread::sleep(Duration::from_millis(16)); // ~60 ticks/sec
}
// Client-side
use std::io;
use std::net::UdpSocket;

let socket = UdpSocket::bind("0.0.0.0:0").unwrap(); // Bind to any available port
socket.connect("server_ip:8080").unwrap();
socket.set_nonblocking(true).unwrap(); // Required for recv to return WouldBlock

let mut last_known_state = GameState::new();
let mut buffer = [0u8; 1024];

loop {
    match socket.recv_from(&mut buffer) {
        Ok((num_bytes, _src_addr)) => {
            let received_data = &buffer[..num_bytes];
            // Ignore packets that fail to deserialize (truncated or corrupt)
            if let Ok(player_data) = deserialize::<PlayerData>(received_data) {
                last_known_state.update_from_network(player_data);
            }
        }
        Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => {
            // No data yet; keep simulating with local input
            last_known_state.process_local_input();
        }
        Err(e) => {
            panic!("Error receiving data: {}", e);
        }
    }
    // Render every iteration from the freshest state we have
    render(&last_known_state);
}
The core trade-off here is latency vs. reliability. TCP’s reliability mechanisms (acknowledgments, retransmissions, ordering) add overhead and latency. For game state updates, where a slightly stale position is better than a delayed one, UDP’s raw speed is king. The game client is responsible for interpolating between received states to smooth out packet loss and jitter.
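That interpolation can be sketched in a few lines. The `Snapshot` type and `lerp` helper below are hypothetical names, not part of any library; a real client would pick the interpolation parameter from the render clock relative to the two most recent server ticks.

```rust
// Hypothetical sketch: linearly interpolate between the two most recent
// position snapshots to hide network jitter.
#[derive(Clone, Copy)]
struct Snapshot {
    x: f32,
    y: f32,
}

// Blend between snapshots `a` and `b` at parameter `t` in [0, 1].
fn lerp(a: Snapshot, b: Snapshot, t: f32) -> (f32, f32) {
    (a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t)
}

fn main() {
    let prev = Snapshot { x: 0.0, y: 0.0 };
    let next = Snapshot { x: 4.0, y: 2.0 };
    // Render halfway between the two received states:
    let (x, y) = lerp(prev, next, 0.5);
    println!("interpolated position: ({}, {})", x, y); // (2, 1)
}
```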
Internally, UDP is a very thin layer on top of IP: its header is just eight bytes, carrying source and destination port numbers, a length field, and a checksum (optional over IPv4). Port numbers allow multiple applications on the same machine to communicate over the same IP address. When a UDP datagram arrives at the destination machine, the operating system looks at the destination port and hands it to the application listening on that port. That’s it. There’s no connection establishment, no flow control, no congestion control.
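To make "very thin" concrete, here is a sketch of decoding that eight-byte header (RFC 768): four big-endian u16 fields. The `parse_udp_header` helper is illustrative; in practice the operating system strips the header before your socket ever sees the payload.

```rust
// Sketch of the 8-byte UDP header (RFC 768): source port, destination port,
// length (header + payload), and checksum, each a big-endian u16.
fn parse_udp_header(datagram: &[u8]) -> Option<(u16, u16, u16, u16)> {
    if datagram.len() < 8 {
        return None; // too short to even hold a header
    }
    let field = |i: usize| u16::from_be_bytes([datagram[i], datagram[i + 1]]);
    Some((field(0), field(2), field(4), field(6)))
}

fn main() {
    // A datagram from port 5000 to port 8080 with 4 payload bytes,
    // so the length field is 8 (header) + 4 (payload) = 12.
    let dgram = [0x13, 0x88, 0x1F, 0x90, 0x00, 0x0C, 0x00, 0x00, 1, 2, 3, 4];
    let (src, dst, len, _cksum) = parse_udp_header(&dgram).unwrap();
    println!("{} -> {}, length {}", src, dst, len); // 5000 -> 8080, length 12
}
```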
The "resilience" in UDP for games doesn’t come from UDP itself, but from the protocols built on top of it. Developers implement custom reliability layers. This typically involves:
- Sequence Numbers: Each outgoing packet gets a unique, incrementing sequence number. The receiver can then detect duplicates (same sequence number) and missing packets (gaps in sequence numbers).
- Acknowledgments (ACKs): For critical data (like player actions or game events), the receiver sends back ACKs for received packets. If an ACK isn’t received within a timeout, the sender can retransmit the packet. This is where you build your own "reliable UDP."
- Delta Compression: Instead of sending the entire game state every time, send only the changes since the last acknowledged state. This significantly reduces bandwidth.
- Interest Management: Only send updates about entities that are relevant to a specific client (e.g., players nearby).
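The first bullet, sequence numbers, can be sketched with a small receiver-side tracker. `SequenceTracker` is a hypothetical helper, not a full reliability layer: it ignores wraparound and never times out gaps.

```rust
// Minimal sketch of receiver-side sequence-number bookkeeping:
// reject duplicates and record gaps left by missing packets.
struct SequenceTracker {
    highest_seen: Option<u32>,
    missing: Vec<u32>, // gaps we still expect to fill (or eventually time out)
}

impl SequenceTracker {
    fn new() -> Self {
        SequenceTracker { highest_seen: None, missing: Vec::new() }
    }

    /// Returns true for a fresh packet, false for a duplicate.
    fn accept(&mut self, seq: u32) -> bool {
        match self.highest_seen {
            None => {
                self.highest_seen = Some(seq);
                true
            }
            Some(h) if seq > h => {
                // Any skipped sequence numbers become known gaps.
                for s in (h + 1)..seq {
                    self.missing.push(s);
                }
                self.highest_seen = Some(seq);
                true
            }
            Some(_) => {
                // Either a late arrival filling a gap, or a duplicate.
                if let Some(pos) = self.missing.iter().position(|&s| s == seq) {
                    self.missing.remove(pos);
                    true
                } else {
                    false // duplicate (or older than anything we track)
                }
            }
        }
    }
}

fn main() {
    let mut tracker = SequenceTracker::new();
    tracker.accept(1);
    tracker.accept(4); // creates gaps at 2 and 3
    tracker.accept(2); // late arrival fills one gap
    tracker.accept(4); // duplicate, rejected
    println!("still missing: {:?}", tracker.missing); // [3]
}
```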
You control the tick rate (how often the server sends updates) and the packet payload size. A higher tick rate means more frequent updates but also higher bandwidth and CPU usage. Larger payloads amortize per-packet header overhead, but a datagram larger than the path MTU (commonly around 1,500 bytes on Ethernet) gets fragmented at the IP layer, and losing any one fragment loses the whole datagram. The choice of which data to send and how to serialize it is also a key lever.
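A quick back-of-the-envelope shows how those levers multiply. All numbers here are illustrative, and the 28-byte overhead assumes IPv4 (20-byte IP header plus the 8-byte UDP header):

```rust
// Illustrative bandwidth math: bytes on the wire per second for a given
// payload size, tick rate, and client count.
fn bandwidth_bytes_per_sec(payload: usize, tick_rate: usize, clients: usize) -> usize {
    // 28 bytes of per-packet overhead: IPv4 header (20) + UDP header (8).
    (payload + 28) * tick_rate * clients
}

fn main() {
    // 200-byte snapshots at 60 Hz to 100 clients:
    let bps = bandwidth_bytes_per_sec(200, 60, 100);
    println!("{} bytes/sec (~{:.1} Mbit/s)", bps, (bps * 8) as f64 / 1e6);
    // 1368000 bytes/sec (~10.9 Mbit/s)
}
```

Doubling either the tick rate or the payload roughly doubles that figure, which is why delta compression and interest management pay off so quickly.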
The most surprising thing about building reliable protocols over UDP is how much complexity you have to replicate from TCP, but with custom tuning for your specific use case. For instance, simply acknowledging every packet is often too much overhead. A common technique is to ACK a range of packets (e.g., "I’ve received up to sequence number 123") or to implement a probabilistic acknowledgment system where only a subset of packets are ACKed. This allows you to achieve a desired level of reliability without the strict guarantees (and thus, latency) of TCP.
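The ranged-ACK idea can be sketched with a compact header: the most recent sequence number received, plus a 32-bit bitfield marking which of the 32 packets before it also arrived. This layout is common in game-networking designs, but the `AckHeader` fields and `build_ack` helper here are illustrative names.

```rust
// Sketch of a ranged ACK: `ack` is the most recent sequence number received,
// and bit n of `ack_bits` is set if packet (ack - 1 - n) was also received.
struct AckHeader {
    ack: u32,
    ack_bits: u32,
}

fn build_ack(received: &[u32]) -> AckHeader {
    let ack = *received.iter().max().expect("need at least one received packet");
    let mut ack_bits = 0u32;
    for &seq in received {
        if seq < ack && ack - seq <= 32 {
            ack_bits |= 1 << (ack - seq - 1);
        }
    }
    AckHeader { ack, ack_bits }
}

fn main() {
    // Received packets 97, 98, and 100; packet 99 was lost.
    let header = build_ack(&[100, 98, 97]);
    // Bit 0 (seq 99) stays clear, so the sender knows 99 is unacked
    // and can decide whether to retransmit it.
    println!("ack={} ack_bits={:#b}", header.ack, header.ack_bits); // ack=100 ack_bits=0b110
}
```

One small header thus acknowledges up to 33 packets at once, which is how you avoid ACKing every packet individually.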
The next challenge you’ll face is handling packet fragmentation and reassembly at the IP layer, which UDP itself doesn’t manage.