Balancing Traffic Across Data Centres Using LVS

The LVS (Linux Virtual Server) project was launched in 1998 and is meant to eliminate single points of failure (SPOFs). According to the website: “LVS is a highly scalable and available server built on a cluster of real servers, with the load balancer running on Linux. The architecture of the server cluster is fully transparent to the end user, and the users interact as if it were a single high-performance virtual server. The real servers and the load balancers may be interconnected by either a high speed LAN or by a geographically dispersed WAN.”

The load balancer is the single entry point into the cluster. The client connects to a single known IP address, and inside the virtual server the load balancer redirects the incoming connections to the server(s) that actually do the work, according to the chosen scheduling algorithm. The nodes of the cluster (real servers) can be transparently added or removed, providing a high level of scalability. LVS detects node failures on the fly and reconfigures the system automatically, thus providing high availability. Theoretically, the load balancer can use either IPVS or KTCPVS for load balancing, but owing to the very high stability of IPVS, it is used in almost all the implementations I have seen. In brief, IPVS provides Layer 4 load balancing and KTCPVS provides Layer 7 load balancing; see the sidebar titled “IPVS v/s KTCPVS” for a note on the differences between the two.

IPVS or IP Virtual Server is an implementation of Layer 4 load balancing inside the Linux kernel. Layer 4 load balancing works on OSI Layer 4 (Transport Layer) and distributes requests to the servers at the transport layer without looking at the content of the packets.

KTCPVS or Kernel TCP Virtual Server is an implementation of Layer 7 load balancing in the Linux kernel. Layer 7 load balancing is also known as application-level load balancing. The load balancer parses requests in the application layer and distributes requests to servers based on the content. The scalability of Layer 7 load balancing is not high because of the overhead of parsing the content.

There are three load balancing techniques used in IPVS:

  1. LVS/NAT – Virtual Server via NAT
  2. LVS/TUN – Virtual Server via Tunnelling
  3. LVS/DR – Virtual Server via Direct Routing

A brief overview of these techniques can be found in the sidebar titled “IPVS Load Balancing Techniques”.

IPVS Load Balancing Techniques
LVS/NAT: This technique is one of the simplest to set up, but it can place extra load on the load balancer, because the load balancer needs to rewrite both the request and response packets. The load balancer also needs to act as the default gateway for all the real servers, which means the real servers cannot be in a geographically different network. The packet flow in this technique is as follows:

  • The load balancer examines the destination address and port number on all incoming packets from the client(s) and verifies if they match any of the virtual services being served.
  • A real server is selected from the available ones according to the scheduling algorithm, and the new connection is added to the hash table that records connections.
  • The destination address and port numbers on the packets are rewritten to match that of the real server and the packet is forwarded to the real server.
  • After processing the request, the real server passes the packets back to the load balancer, which then rewrites the source address and port of the packets to match that of the virtual service and sends them back to the client.
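As an illustration of how this flow maps onto the ipvsadm command (not part of the set-up described later in this article), the following sketch creates a NAT-mode virtual service; all addresses are hypothetical:

```shell
# Hypothetical addresses: 192.168.0.100 is the VIP, 10.0.0.11/12 are real servers.
# Create a virtual TCP service on the VIP, port 80, with round-robin scheduling.
ipvsadm -A -t 192.168.0.100:80 -s rr

# Add the real servers in masquerading (NAT) mode (-m). NAT is the only
# technique that allows the real-server port to differ from the VIP's port.
ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.11:8080 -m
ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.12:8080 -m
```

Remember that for the response rewriting described above to work, the real servers must use the load balancer as their default gateway.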

LVS/DR: DR stands for Direct Routing. This technique relies on MAC address rewriting and requires that the load balancer and the real servers each have at least one NIC in the same IP subnet and on the same physical segment. In this technique, the virtual IP address is shared by the load balancer as well as all the real servers. Each real server has a loop-back alias interface configured with the virtual IP address. This loop-back alias interface must be NOARP so that it does not respond to any ARP requests for the virtual IP. The port number of incoming packets cannot be remapped, so if the virtual server is configured to listen on port 80, then the real servers also need to serve on port 80. The packet flow in this technique is as follows:

  • The load balancer receives the packet from the client and changes the MAC address of the data frame to one of the selected real servers and retransmits it on the LAN.
  • When the real server receives the packet, it realises that this packet is meant for the address on one of its loopback aliased interfaces.
  • The real server processes the request and responds directly to the client.
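A minimal DR sketch, again with hypothetical addresses (192.168.0.100 as the VIP), using the arp_ignore/arp_announce sysctls as one common way of making the loop-back alias NOARP:

```shell
# On each real server: alias the VIP on the loop-back interface...
ifconfig lo:0 192.168.0.100 netmask 255.255.255.255 up
# ...and keep the box from answering ARP requests for it.
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2

# On the load balancer: -g selects direct-routing (gatewaying) mode.
# The port cannot be remapped, so the real servers also listen on 80.
ipvsadm -A -t 192.168.0.100:80 -s rr
ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.11:80 -g
```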

LVS/TUN: This is the most scalable technique. It allows the real servers to be present in different LANs or WANs because the communication happens with the help of the IP tunnelling protocol. The IP tunnelling allows an IP datagram to be encapsulated inside another IP datagram. This allows IP datagrams destined for one IP address to be wrapped and redirected to a different IP address. Each real server must support the IP tunnelling protocol and have one of its tunnel devices configured with the virtual IP. If the real servers are in a different network than the load balancer, then the routers in their network need to be configured to accept outgoing packets with the source address as the virtual IP.

This router reconfiguration needs to be done because the routers are typically configured to drop such packets as part of the anti-spoofing measures. Like the LVS/DR method, the port number of incoming packets cannot be remapped. The packet flow in this technique is as follows:

  • The load balancer receives the packet from the client and encapsulates the packet within an IP datagram, and forwards it to a dynamically selected real server.
  • The real server receives the packet, ‘de-encapsulates’ it and finds the inner packet with a destination IP that matches with the virtual IP configured on one of its tunnel devices.
  • The real server processes the request and returns the result directly to the user.
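The steps above can be sketched with ipvsadm as follows; the addresses are hypothetical (192.168.0.100 as the VIP), with one real server deliberately placed in a different network to show that TUN allows it:

```shell
# On the load balancer: -i selects IP-IP tunnelling mode for each real server.
ipvsadm -A -t 192.168.0.100:80 -s rr
ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.11:80 -i
ipvsadm -a -t 192.168.0.100:80 -r 172.16.0.11:80 -i   # real server in another network

# On each real server: load the tunnelling module, configure the VIP on tunl0,
# and relax reverse-path filtering so the decapsulated packets are accepted.
modprobe ipip
ifconfig tunl0 192.168.0.100 netmask 255.255.255.255 up
sysctl -w net.ipv4.conf.tunl0.rp_filter=0
sysctl -w net.ipv4.conf.all.rp_filter=0
```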

Since our real servers are located in two different data centres, we will be focusing on LVS/TUN.

Installing and configuring IPVS

Please note that the set-up explained here should only be used as a guideline and for an understanding of how IPVS works. Networking scenarios are different for every case and may demand extra reading and experimentation before getting a working set-up. My advice is that before trying this out in the field, make sure enough experiments have been done in the laboratory. Also, it is advisable to read through the documents in the References section at the end of the article.

On Debian and its derivatives, issue the following command:

# apt-get install ipvsadm keepalived

On Red Hat and its derivatives, use the following:

# yum install ipvsadm keepalived

The kernel modules ip_vs and ipip may need to be loaded but, in my experience, they were loaded automatically when I used the ipvsadm command.
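If they are not auto-loaded on your system, the modules can be loaded and verified by hand:

```shell
# Load the IPVS core and the IP-IP tunnelling modules explicitly.
modprobe ip_vs
modprobe ipip

# Confirm that both are now present.
lsmod | grep -E '^(ip_vs|ipip)'
```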

To start with, we will consider a scenario that has two data centres. There is one LVS load balancer in each data centre. For the sake of giving them names, we will call them ipvslb11 and ipvslb21. Now we will configure the IPIP tunnel between the load balancers and the real servers—rproxy1 and rproxy2, where rproxy1 is in the first data centre and rproxy2 is in the second.

Before we start the command configuration, have a look at Table 1 and Figure 1.

Table 1: Data Centre details

  Data Centre     Host        Interface         Role
  Data Centre 1   ipvslb11    eth0              VIP
  Data Centre 1   ipvslb11    eth1              DIP
  Data Centre 2   ipvslb21    eth0              VIP
  Data Centre 2   ipvslb21    eth1              DIP
  Data Centre 1   rproxy1     eth0              RIP
  Data Centre 1   rproxy1     tunl0 (no ARP)    VIP
  Data Centre 1   rproxy1     tunl1 (no ARP)    VIP
  Data Centre 2   rproxy2     eth0              RIP
  Data Centre 2   rproxy2     tunl0 (no ARP)    VIP
  Data Centre 2   rproxy2     tunl1 (no ARP)    VIP

To make load balancer failover transparent to users of the website, at least the following connection information needs to be passed from the master to the back-up, which is around 24 bytes per connection:

<Protocol, CIP:CPort, VIP:VPort, RIP:RPort, Flags, State>

Efficient synchronisation is done using UDP multicast inside the Linux kernel. The master load balancer runs the IPVS syncmaster daemon inside the kernel, which passes the connection information via UDP multicast to the back-up load balancer(s) that accept the multicast packets.

On the primary load balancer in each data centre, run the following commands:

ipvslb11# ipvsadm --start-daemon=master --mcast-interface=eth1
ipvslb21# ipvsadm --start-daemon=master --mcast-interface=eth1

On the back-up load balancer in each data centre, run the following:

ipvslb12# ipvsadm --start-daemon=backup --mcast-interface=eth1
ipvslb22# ipvsadm --start-daemon=backup --mcast-interface=eth1

When you want to stop the daemons, you can just run the command given below:

# ipvsadm --stop-daemon

After starting the daemons on both master and backup, we can now use Heartbeat to provide high availability for our load balancers. I am not detailing the Heartbeat set-up, since a similar set-up was discussed in an earlier article in this series, so it is left as an exercise for readers. The important point here is that when a Heartbeat failover occurs, the IP address failover script sends out gratuitous ARP packets to inform the nodes on the network that the VIP has failed over and that they should update their ARP caches.

This completes the configuration. However, it would be a lot better if we could use a single program to do all of the above, i.e., create the virtual server, monitor it, provide automatic failover to the back-up, handle the connection synchronisation, etc. There are a few tools available to do this, among them keepalived, UltraMonkey (which uses ldirectord and Heartbeat, and provides some add-on features) and Piranha (a Red Hat favourite).

Note: If you are planning to use keepalived, UltraMonkey or Piranha, do not execute any of the ipvsadm commands described above, as these applications take care of all the ipvsadm functionality. And if you have already executed them, it is better to reboot the machines to clear the configuration.
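For example, if you go the keepalived route, a minimal virtual_server block for the tunnelling set-up might look like the sketch below (all addresses hypothetical). keepalived then creates the IPVS service itself, health-checks each real server, and removes a server from the pool when its check fails:

```conf
# /etc/keepalived/keepalived.conf (fragment)
virtual_server 192.168.0.100 80 {
    delay_loop 10          # seconds between health checks
    lb_algo rr             # round-robin scheduling
    lb_kind TUN            # LVS/TUN forwarding
    protocol TCP

    real_server 10.0.0.11 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            connect_port 80
        }
    }
}
```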


To start with, add only one real host. In our scenario, I am assuming that you have chosen rproxy1 as the real host. Also note down the IP address of the client you will be testing from.

If you can’t see the Web page, then first try a ping from the client to the VIP:

client# ping <VIP>

If the ping works, then run the following tcpdump commands on the various servers; in the commands below, <VIP>, <DIP>, <RIP> and <CIP> stand for the virtual, director, real-server and client IP addresses, respectively. On the director:

director# tcpdump -ln -i eth1 host <RIP>

You should see the IPIP tunnel packets here. But if you see ICMP error packets reporting that it could not connect, then there is a problem at the real server end. Check the tunnel there and make sure that the link status of the tunl0 interface is marked as UP.
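Something like the following can be used on the real server to check the tunnel interface:

```shell
# Show the state and addresses of the tunnel interface; the output
# should contain the VIP and the flag UP.
ip addr show tunl0

# Bring the interface up if it is down.
ip link set tunl0 up
```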

realserver# tcpdump -ln -i eth0 host <DIP>
realserver# tcpdump -ln -i tunl0 host <VIP>
realserver# tcpdump -ln -i eth0 host <CIP>

If all seems to work well and you can see packets flowing across, then run the following traceroute from your real server to the client IP address, with the source IP address spoofed to be the VIP. If you cannot see any output from this command, then surely your border firewall or router is blocking the packets. A sample output is also shown below:

realserver# traceroute -n -s <VIP> <CIP>
traceroute to <CIP> (<CIP>) from <VIP>, 30 hops max, 38 byte packets
1  <gateway>  10.280 ms  2.700 ms  2.625 ms
2  <CIP>  7.407 ms !C  2.586 ms !C  5.503 ms !C

Try to set this up on a local LAN first before moving the set-up to the data centre scenario. And after moving to the data centre scenario, set up one data centre first. This will make troubleshooting easier.

Moving further on

In this four-part series we have seen how to set up the various components involved in providing a highly available Web infrastructure. We have also seen how to replicate this set-up across multiple data centres, and have attempted to utilise all the available capacity by balancing the traffic as evenly as possible across the components located in the different data centres.

This is by no means a perfect architecture and was just an attempt to demonstrate the use of various FLOSS components in running a production infrastructure. I sincerely hope that this series has been useful to you.

