The LVS (Linux Virtual Server) project was launched in 1998 and is meant to eliminate single points of failure (SPOF). According to the linuxvirtualserver.org website: “LVS is a highly scalable and available server built on a cluster of real servers, with the load balancer running on Linux. The architecture of the server cluster is fully transparent to the end user, and the users interact as if it were a single high-performance virtual server. The real servers and the load balancers may be interconnected by either a high speed LAN or by a geographically dispersed WAN.”
The load balancer is the single entry point into the cluster. The client connects to a single known IP address, and inside the virtual server the load balancer redirects incoming connections to the server(s) that actually do the work, according to the chosen scheduling algorithm. The nodes of the cluster (real servers) can be transparently added or removed, providing a high level of scalability. LVS detects node failures on the fly and automatically reconfigures the system accordingly, thus providing high availability. Theoretically, the load balancer can run either IPVS or KTCPVS for load balancing, but owing to the very high stability of IPVS, it is used in almost all the implementations I have seen. IPVS provides Layer 4 load balancing and KTCPVS provides Layer 7 load balancing; see the sidebar titled “IPVS v/s KTCPVS” for a brief note on the differences between the two.
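As a rough illustration of this idea, the following sketch uses ipvsadm (the user-space tool we install later in this article) to define a virtual service with a scheduling algorithm and then add or remove a real server on the fly; the addresses and port are purely illustrative:
# ipvsadm -A -t 192.168.1.214:80 -s wlc              # create the virtual service with the weighted least-connection scheduler
# ipvsadm -a -t 192.168.1.214:80 -r 192.168.1.14:80  # add a real server to the pool
# ipvsadm -d -t 192.168.1.214:80 -r 192.168.1.14:80  # remove it again, without touching the client-facing IP
# ipvsadm -L -n                                      # list the current virtual server table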
IPVS v/s KTCPVS
IPVS or IP Virtual Server is an implementation of Layer 4 load balancing inside the Linux kernel. Layer 4 load balancing works on OSI Layer 4 (Transport Layer) and distributes requests to the servers at the transport layer without looking at the content of the packets.
KTCPVS or Kernel TCP Virtual Server is an implementation of Layer 7 load balancing in the Linux kernel. Layer 7 load balancing is also known as application-level load balancing. The load balancer parses requests in the application layer and distributes requests to servers based on the content. The scalability of Layer 7 load balancing is not high because of the overhead of parsing the content.
There are three load balancing techniques used in IPVS:
- LVS/NAT – Virtual Server via NAT
- LVS/TUN – Virtual Server via Tunnelling
- LVS/DR – Virtual Server via Direct Routing
A brief overview of these techniques can be found in the sidebar titled “IPVS Load Balancing Techniques”.
IPVS Load Balancing Techniques
LVS/NAT: This technique is one of the simplest to set up but can place an extra load on the load balancer, because it has to rewrite both the request and the response packets. The load balancer also needs to act as the default gateway for all the real servers, which means the real servers cannot be in a geographically different network. The packet flow in this technique is as follows:
LVS/DR: DR stands for Direct Routing. This technique utilises MAC spoofing and requires that the load balancer and the real servers each have a NIC in the same IP network segment as well as the same physical segment. In this technique, the virtual IP address is shared by the load balancer as well as all the real servers. Each real server has a loop-back alias interface configured with the virtual IP address. This loop-back alias interface must be set to NOARP so that it does not respond to ARP requests for the virtual IP. The port number of incoming packets cannot be remapped, so if the virtual server is configured to listen on port 80, then the real servers also need to serve on port 80. The packet flow in this technique is as follows:
LVS/TUN: This is the most scalable technique. It allows the real servers to be present in different LANs or WANs because the communication happens with the help of the IP tunnelling protocol. IP tunnelling allows an IP datagram to be encapsulated inside another IP datagram, so datagrams destined for one IP address can be wrapped and redirected to a different IP address. Each real server must support the IP tunnelling protocol and have one of its tunnel devices configured with the virtual IP. If the real servers are in a different network from the load balancer, then the routers in their network need to be configured to accept outgoing packets with the virtual IP as the source address. This router reconfiguration is needed because routers are typically configured to drop such packets as part of their anti-spoofing measures. Like the LVS/DR method, the port number of incoming packets cannot be remapped. The packet flow in this technique is as follows:
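For orientation before we move on: in IPVS, the forwarding method is chosen per real server when it is added with ipvsadm, via the -m (NAT/masquerading), -g (direct routing, the default) and -i (IP tunnelling) flags. A minimal sketch with illustrative addresses follows; only one of these would actually be used for any given real server:
# ipvsadm -a -t 192.168.1.214:80 -r 192.168.1.14:80 -m   # LVS/NAT
# ipvsadm -a -t 192.168.1.214:80 -r 192.168.1.14:80 -g   # LVS/DR
# ipvsadm -a -t 192.168.1.214:80 -r 192.168.2.2:80 -i    # LVS/TUN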
Since our real servers are located in two different data centres, we will be focusing on LVS/TUN.
Installing and configuring IPVS
Please note that the set-up explained here should only be used as a guideline and for an understanding of how IPVS works. Networking scenarios are different for every case and may demand extra reading and experimentation before getting a working set-up. My advice is that before trying this out in the field, make sure enough experiments have been done in the laboratory. Also, it is advisable to read through the documents in the References section at the end of the article.
On Debian and the likes, issue the following command:
# apt-get install ipvsadm keepalived
On Red Hat and the likes, use the following:
# yum install ipvsadm keepalived
The kernel modules ip_vs and ipip may need to be loaded, but in my experience, these modules were loaded automatically when I used the ipvsadm command.
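If they do not get loaded automatically on your system, they can be loaded and checked by hand; the following is a quick sanity check using standard commands:
# modprobe ip_vs
# modprobe ipip
# lsmod | grep -E 'ip_vs|ipip'   # confirm both modules are present
# ipvsadm -L -n                  # an empty virtual server table confirms IPVS is usable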
To start with, we will consider a scenario that has two data centres. There is one LVS load balancer in each data centre; for the sake of giving them names, we will call them ipvslb11 and ipvslb21. Now we will configure the IPIP tunnel between the load balancers and the real servers, rproxy1 and rproxy2, where rproxy1 is in the first data centre and rproxy2 is in the second.
Before we start the command configuration, have a look at Table 1 and Figure 1.
Table 1: Data Centre details

| Data Centre | Host | Interface | IP Address | Role |
|---|---|---|---|---|
| Data Centre 1 | ipvslb11 | eth0 | 192.168.1.214/24 | VIP |
| Data Centre 1 | ipvslb11 | eth1 | 192.168.1.13/24 | DIP |
| Data Centre 2 | ipvslb21 | eth0 | 192.168.2.214/24 | VIP |
| Data Centre 2 | ipvslb21 | eth1 | 192.168.2.13/24 | DIP |
| Data Centre 1 | rproxy1 | eth0 | 192.168.1.14/24 | RIP |
| Data Centre 1 | rproxy1 | tunl0 (no ARP) | 192.168.1.214/32 | VIP |
| Data Centre 1 | rproxy1 | tunl1 (no ARP) | 192.168.2.214/32 | VIP |
| Data Centre 2 | rproxy2 | eth0 | 192.168.2.2/24 | RIP |
| Data Centre 2 | rproxy2 | tunl0 (no ARP) | 192.168.1.214/32 | VIP |
| Data Centre 2 | rproxy2 | tunl1 (no ARP) | 192.168.2.214/32 | VIP |
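The actual command configuration follows below; purely as a preview of what the addressing plan in Table 1 implies on a real server, the tunnel interfaces on rproxy1 would be brought up roughly along these lines. This is a sketch only, and the reverse-path-filter handling shown is one common approach rather than the only one:
# ip link set tunl0 up
# ip link set tunl0 arp off                     # matches the 'no ARP' requirement in Table 1
# ip addr add 192.168.1.214/32 dev tunl0        # first VIP on the default IPIP device
# sysctl -w net.ipv4.conf.tunl0.rp_filter=0     # disable reverse-path filtering on the tunnel device
# ip tunnel add tunl1 mode ipip                 # second tunnel device for the other VIP
# ip link set tunl1 up
# ip link set tunl1 arp off
# ip addr add 192.168.2.214/32 dev tunl1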