LVS can do much more than simple round-robin, but it is important to understand its scope: LVS is a Layer 4 load balancer. It makes decisions based on IP addresses, ports, and protocol, not on request content; content-aware (Layer 7) routing is the job of a user-space proxy sitting in front of or behind it.
Let’s see LVS in action. Imagine we have a web service running on three backend servers:
192.168.1.101:80
192.168.1.102:80
192.168.1.103:80
We want to balance traffic for www.example.com (which resolves to our LVS VIP 10.0.0.1) across these backends.
First, we need to install ipvsadm:
sudo apt-get update && sudo apt-get install ipvsadm
Now, let’s set up the LVS virtual server. We’ll use the rr (round-robin) scheduling algorithm for simplicity here, but LVS supports many others.
# Clear any existing rules
sudo ipvsadm --clear
# Add the virtual service for HTTP traffic on VIP 10.0.0.1, port 80
sudo ipvsadm -A -t 10.0.0.1:80 -s rr
# Add the backend servers to the virtual service
sudo ipvsadm -a -t 10.0.0.1:80 -r 192.168.1.101:80 -m
sudo ipvsadm -a -t 10.0.0.1:80 -r 192.168.1.102:80 -m
sudo ipvsadm -a -t 10.0.0.1:80 -r 192.168.1.103:80 -m
The -m flag selects masquerading (NAT mode): LVS rewrites the destination IP address of each incoming packet to that of the chosen real server on the way in, and rewrites the source address of the reply on the way out.
You can verify the configuration with:
sudo ipvsadm -Ln
This will show something like:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.1:80 rr
  -> 192.168.1.101:80             Masq    1      0          0
  -> 192.168.1.102:80             Masq    1      0          0
  -> 192.168.1.103:80             Masq    1      0          0
Now, when a client sends a request to http://www.example.com (which resolves to 10.0.0.1), LVS intercepts it. Based on the rr scheduler, it picks one of the real servers (192.168.1.101, 192.168.1.102, or 192.168.1.103), changes the destination IP address in the packet to that server’s IP, and forwards it. Because the real servers route their replies back through the director (it must be their default gateway in NAT mode), LVS can rewrite the source IP address of each response so the client sees it coming from the VIP.
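For NAT mode to work end to end, the director must forward packets and the real servers must send replies back through it. A minimal sketch of that plumbing, assuming the director’s internal address is 192.168.1.1 (adjust for your topology):

```shell
# On the LVS director: enable IPv4 forwarding so it can relay
# packets between the client-facing and backend networks
sudo sysctl -w net.ipv4.ip_forward=1

# On each real server: point the default route at the director
# so responses flow back through LVS and can be SNATed to the VIP
sudo ip route replace default via 192.168.1.1
```

If replies bypass the director, the client receives packets sourced from a real server’s IP and drops them, so this routing step is not optional in NAT mode.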
LVS is fundamentally a kernel-level component, implemented by the IPVS module. Rather than manipulating the routing tables, IPVS hooks into the kernel’s netfilter framework: when a packet arrives destined for a configured Virtual IP (VIP), IPVS checks whether the destination IP, port, and protocol match a virtual service before the packet would be delivered locally. On a match, IPVS takes over. It selects a real server using the configured scheduling algorithm, records the flow in its own in-kernel connection table, and (in NAT mode) rewrites the packet’s destination address to that of the chosen real server. Return traffic from the real servers is routed back through the director, where IPVS performs the reverse translation so the response appears to come from the VIP. Because every packet is handled inside the kernel, no user-space daemon sits in the data path, which is what gives LVS its high performance and low overhead.
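Because all of this state lives in the kernel, you can inspect it directly through procfs (once the ip_vs module is loaded) instead of asking a daemon:

```shell
# Show configured virtual services and their real servers,
# as the kernel sees them
cat /proc/net/ip_vs

# Dump the in-kernel connection table: one line per tracked
# client -> VIP -> real-server flow
cat /proc/net/ip_vs_conn
```

ipvsadm itself is just a thin configuration front end over this same kernel state.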
The "real" power of LVS comes from its various scheduling algorithms and forwarding modes. While rr is simple, wrr (weighted round-robin) lets you assign different capacities to backend servers. lc (least connection) is often preferred for long-lived connections, as it directs new requests to the server with the fewest active connections. Beyond these, LVS can operate in NAT mode (-m), Direct Routing (DR, -g), and IP Tunneling (TUN, -i). DR is particularly popular for high-performance setups because it bypasses NAT entirely: the director only rewrites the destination MAC address, and the real server replies directly to the client, taking the response path off the LVS box. However, DR has specific network configuration requirements: each real server must have the VIP configured on a local interface (typically a loopback alias) so it accepts the traffic, and must be prevented from answering ARP requests for the VIP, which would otherwise steal traffic from the director.
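A minimal DR-mode sketch, reusing the addresses from the NAT example, might look like this; the sysctl settings are what stop the real servers from answering ARP for the VIP:

```shell
# On the director: same virtual service, but -g selects Direct Routing
sudo ipvsadm -A -t 10.0.0.1:80 -s rr
sudo ipvsadm -a -t 10.0.0.1:80 -r 192.168.1.101:80 -g
sudo ipvsadm -a -t 10.0.0.1:80 -r 192.168.1.102:80 -g
sudo ipvsadm -a -t 10.0.0.1:80 -r 192.168.1.103:80 -g

# On each real server: put the VIP on loopback so the server accepts
# packets addressed to it, and suppress ARP replies for the VIP
sudo ip addr add 10.0.0.1/32 dev lo
sudo sysctl -w net.ipv4.conf.all.arp_ignore=1
sudo sysctl -w net.ipv4.conf.all.arp_announce=2
```

Note that DR requires the director and real servers to share a Layer 2 segment, since forwarding works by rewriting MAC addresses.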
The one thing most people don’t realize about LVS is that the scheduler never polls the backend servers for load or response times. When you use -s lc (least connection), IPVS consults counters it already maintains: its own in-kernel connection table (separate from netfilter’s conntrack) tracks, for each real server, the number of active and inactive connections, and the lc scheduler simply picks the destination with the lowest count when a new connection arrives. Because these counters are updated inline as packets flow through the director, the scheduling decision is a cheap table lookup rather than a measurement, which is crucial for LVS’s efficiency.
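You can watch those per-destination counters yourself. Switching the existing service to lc and listing it shows the ActiveConn and InActConn columns the scheduler reads:

```shell
# Change the existing virtual service's scheduler to least-connection
sudo ipvsadm -E -t 10.0.0.1:80 -s lc

# List services with per-real-server active/inactive connection counts
sudo ipvsadm -Ln

# Or dump the individual tracked connections behind those counts
sudo ipvsadm -Lnc
```

Generating some traffic against the VIP and re-running ipvsadm -Ln is an easy way to see the counters move and confirm how lc is steering new connections.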
The next step in mastering LVS is exploring its forwarding modes beyond NAT, particularly Direct Routing (DR) for performance, and pairing it with a health-checking companion such as keepalived or ldirectord, since LVS itself does not monitor whether real servers are actually alive.