Interconnecting QEMU and VirtualBox Virtual Machines

Some time back I came across VirtualBox, an excellent replacement for the commercial VMware. I installed a few OSes in VirtualBox and found the performance pretty good. In my opinion the performance of Windows virtual machines is much better in VirtualBox, perhaps because of the installed Guest Additions.

Immediately a thought came to my mind: what if I want to create an internetwork of VirtualBox virtual machines and QEMU virtual machines?

The default NAT networking in VirtualBox does not serve this purpose. The only useful networking option I found was the “Host Interface”. I went through several documents on the VirtualBox website and realised that I needed to use bridging to do this. All the documents explained beautifully how to accomplish bridging with the Ethernet interface on the host machine, but this is not what I wanted. I wanted the virtual machines to be in an isolated network, connected together using a VDE switch, as I was doing with QEMU. Searching on Google for quite some time did not help either, so I decided to give it a try.

Following is what I did to accomplish this. After executing the steps below:

  1. QEMU virtual machines can talk to VirtualBox virtual machines and vice-versa.
  2. Host can talk to all virtual machines and vice-versa.
  3. All virtual machines can connect to the internet via the host machine.

As suggested on the VirtualBox website, in order to use bridging I needed the uml-utilities and bridge-utils packages:

$ sudo apt-get install uml-utilities bridge-utils

I needed to load the tun/tap module and the kqemu module for QEMU:

$ sudo modprobe kqemu
$ sudo modprobe tun

Next I wanted to make the “tun” device writable by users, so I put myself into the “vboxusers” group and gave the group write permission on the device.

$ sudo chgrp vboxusers /dev/net/tun 
$ sudo chmod g+rw /dev/net/tun

Then I created a couple of persistent tap interfaces owned by an ordinary user:

$ sudo tunctl -t tap0 -u ajitabhp
$ sudo tunctl -t tap1 -u ajitabhp

Next I started the VDE switch connected to the tap0 interface and made the control file world-writable:

$ vde_switch -tap tap0 -daemon
$ chmod 666 /tmp/vde.ctl

A bridge then needs to be created:

$ sudo brctl addbr br0

All interfaces connected to the network segments being bridged must be put into promiscuous mode.

$ sudo ifconfig tap0 0.0.0.0 promisc

Add the tap0 interface to the bridge and start the dnsmasq server listening on the br0 interface.

$ sudo brctl addif br0 tap0
$ sudo /usr/sbin/dnsmasq --log-queries --user=named --dhcp-leasefile=/var/lib/misc/vbox-dhcpd.leases --dhcp-range=10.111.111.100,10.111.111.199,255.255.255.0,10.111.111.255,8h --interface=br0 --domain=virtual.lan --addn-hosts=/etc/my-host

Next, configure the br0 interface with an IP address. This br0 interface will work as the gateway for the virtual machines.

$ sudo ifconfig br0 10.111.111.254 netmask 255.255.255.0 up

Now add the tap1 interface to the bridge and bring it up.

$ sudo brctl addif br0 tap1 
$ sudo ifconfig tap1 up

Enable IP forwarding on the host machine and set up masquerading:

$ sudo sh -c 'echo "1" > /proc/sys/net/ipv4/ip_forward'
$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
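These two settings do not survive a reboot. On Debian-style systems one way to persist the forwarding flag is via /etc/sysctl.conf; the NAT rule can simply be re-added from a boot script, such as the one sketched at the end of this post:

$ sudo sh -c 'echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf'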

Following is how the various interfaces look after finishing this setup.

$ brctl show br0
bridge name     bridge id               STP enabled     interfaces
br0             8000.ae7729b64a80       no              tap0
							tap1
$ ifconfig 
br0       Link encap:Ethernet  HWaddr 6E:EA:39:83:8C:D4  
          inet addr:10.111.111.254  Bcast:10.111.111.255  Mask:255.255.255.0
          inet6 addr: fe80::6cea:39ff:fe83:8cd4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:106 errors:0 dropped:0 overruns:0 frame:0
          TX packets:82 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:10696 (10.4 KiB)  TX bytes:10165 (9.9 KiB)

eth0      Link encap:Ethernet  HWaddr 00:12:F0:28:6E:C3  
          inet addr:192.168.0.176  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::212:f0ff:fe28:6ec3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:426 errors:32 dropped:32 overruns:0 frame:0
          TX packets:327 errors:0 dropped:0 overruns:0 carrier:1
          collisions:0 txqueuelen:1000 
          RX bytes:356289115 (339.7 MiB)  TX bytes:18085137 (17.2 MiB)
          Interrupt:11 Base address:0xa000 Memory:d0200000-d0200fff 
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:7425 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7425 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:610482 (596.1 KiB)  TX bytes:610482 (596.1 KiB)

tap0      Link encap:Ethernet  HWaddr 6E:EA:39:83:8C:D4  
          inet6 addr: fe80::6cea:39ff:fe83:8cd4/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:81 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500 
          RX bytes:0 (0.0 b)  TX bytes:10808 (10.5 KiB)

tap1      Link encap:Ethernet  HWaddr B2:03:C1:BE:1E:4E  
          inet6 addr: fe80::b003:c1ff:febe:1e4e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:106 errors:0 dropped:0 overruns:0 frame:0
          TX packets:46 errors:0 dropped:6 overruns:0 carrier:0
          collisions:0 txqueuelen:500 
          RX bytes:12180 (11.8 KiB)  TX bytes:4978 (4.8 KiB)
$ sudo iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
MASQUERADE  0    --  anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Now start the QEMU virtual machines:

$ vdeqemu -net vde,vlan=0 -net nic,vlan=0,macaddr=52:54:00:00:EE:01 -m 256 -hda ~/qemu/netbsd.img 1>~/logs/qemu-logs/netbsd.log 2>~/logs/qemu-logs/netbsd.error &

In VirtualBox, attach the network interface of each virtual machine to “Host Interface” and select tap1 as the interface name.
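For convenience, all of the steps above can be collected into one script to run (as root) before starting the virtual machines. The following is only a sketch of my setup, using the same user name, interface names and addresses as above:

#!/bin/sh
# Sketch: bring up the bridged VDE/tap network described above.
# Assumes uml-utilities, bridge-utils, vde and dnsmasq are installed; run as root.
modprobe tun
modprobe kqemu
tunctl -t tap0 -u ajitabhp    # tap0 carries the VDE switch for QEMU
tunctl -t tap1 -u ajitabhp    # tap1 is for the VirtualBox VMs
vde_switch -tap tap0 -daemon
chmod 666 /tmp/vde.ctl
brctl addbr br0
ifconfig tap0 0.0.0.0 promisc
brctl addif br0 tap0
ifconfig br0 10.111.111.254 netmask 255.255.255.0 up
brctl addif br0 tap1
ifconfig tap1 up
/usr/sbin/dnsmasq --user=named --dhcp-leasefile=/var/lib/misc/vbox-dhcpd.leases --dhcp-range=10.111.111.100,10.111.111.199,255.255.255.0,10.111.111.255,8h --interface=br0 --domain=virtual.lan --addn-hosts=/etc/my-host
echo "1" > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE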


Setting Up A Local Yum Repository

Introduction

Yum (Yellowdog Updater, Modified) is a popular high-level package management tool for RPM (Red Hat Package Manager) packages. RPM packages are used by several popular distributions based on Red Hat Linux, such as Red Hat Enterprise Linux (RHEL), CentOS, Fedora Core, SuSE, openSUSE and Mandriva. CentOS is built from Red Hat Enterprise Linux sources and hence can share packages with RHEL and Fedora Core. SuSE and Mandriva RPMs have a slightly different format, and as a result they are somewhat incompatible with the rest of the distributions.

Irrespective of the differences in the RPM file format, each of these distributions is capable of maintaining a yum repository for package management. In my experience, infrastructures based on Red Hat will have a mix of RHEL, Fedora Core and CentOS servers, and RPMs can be shared among these with a little care to keep the dependencies in mind.

With the release of Red Hat Enterprise Linux 5, Red Hat has formally dumped the ancient package manager up2date and started using yum as the official package manager. Fedora and CentOS had been using yum for quite a while anyway. Even with yum there is a difference in the repository structure between older and newer versions. The newer tool to maintain the repository metadata is called ‘createrepo‘ and the repository format it creates is called repomd. For creating and maintaining an old-style yum repository, a tool called yum-arch is used; yum-arch is now deprecated. The following table briefly presents each tool and the type of repository it supports:

Yum Tools
Tool       | Package    | Provides
genbase    | apt        | apt support
yum-arch   | yum        | yum support
createrepo | createrepo | repomd support (new yum and new apt)

To the best of my knowledge, the up2date program which comes with RHEL 4 Update 5 does not support the newer yum repositories created using createrepo. So if you have such servers in your setup, you are better off creating both the old-style repo using the yum-arch command and the new-style repo using the createrepo command. Both sets of headers can co-exist. The new headers are stored by createrepo in a directory called repodata, whereas the old headers are stored by yum-arch in a directory called headers.

I have had the opportunity to set up local yum repositories at my workplace and also for several clients.

This document describes a repository server on CentOS 5 for CentOS 5. Along the way I have also mentioned some points worth noting if you have RHEL 4 or older servers in your setup, or if you want to use an RHEL 4 server as the repository server.

It is not necessary to have a CentOS, Fedora Core or RHEL server as the repository server. I have very recently set up a Debian Etch server to do the same task. Starting from Debian Etch and Ubuntu Feisty, the “createrepo” command is available as a Deb package.
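For example, on a Debian Etch host the tools could be pulled in with apt (package names as found in the Etch archive; apache2 is only needed if the repository is to be served over HTTP):

$ sudo apt-get install createrepo apache2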

Repository Server

In order to create a yum repository server for your organisation, the following software is required:

  • A web server, FTP server or NFS server to serve the repository to clients.
  • The createrepo package, which is available with CentOS, Fedora Core and RHEL 5. For RHEL 4 this needs to be installed from external sources; one good source is the Dag Wieers RPM repository.

On CentOS do the following to install the dependencies:

# yum install httpd createrepo

On an RHEL 4 (or RHEL 3) server we need to download the createrepo, python-urlgrabber, sqlite, python-sqlite, python-elementtree and yum packages from the Dag Wieers repository, as they are not available in the official repository. To install these packages on RHEL 4 the following commands can be used:

$ sudo up2date -i httpd
$ wget http://dag.wieers.com/rpm/packages/python-urlgrabber/python-urlgrabber-2.9.7-1.2.el4.rf.noarch.rpm
$ wget http://dag.wieers.com/rpm/packages/createrepo/createrepo-0.4.6-1.el4.rf.noarch.rpm
$ wget http://dag.wieers.com/rpm/packages/yum/yum-2.4.2-0.4.el4.rf.noarch.rpm
$ wget http://dag.wieers.com/rpm/packages/python-elementtree/python-elementtree-1.2.6-7.el4.rf.x86_64.rpm
$ wget http://dag.wieers.com/rpm/packages/python-sqlite/python-sqlite-1.0.1-1.2.el4.rf.x86_64.rpm
$ wget http://dag.wieers.com/rpm/packages/sqlite/sqlite-2.8.17-1.el4.rf.x86_64.rpm
$ sudo rpm -Uvh *.rpm
$ rm -f *.rpm 

Client

On the client machines which will be configured to use this repository, we need the following installed:

  • Yum (depends on python-elementtree, python-sqlite and sqlite)

On CentOS, Fedora Core and RHEL 5 these should already be installed. On RHEL 4 and older they can be installed from the Dag Wieers RPM repository. Although up2date from RHEL 4 Update 2 onwards supports yum and apt repositories, I have quite often seen problems with it, and in any case it has been abandoned by Red Hat from RHEL 5 onwards.

Following is how we can install yum on an RHEL 4 (x86_64) machine:

$ wget http://dag.wieers.com/rpm/packages/sqlite/sqlite-2.8.17-1.el4.rf.x86_64.rpm
$ wget http://dag.wieers.com/rpm/packages/python-sqlite/python-sqlite-1.0.1-1.2.el4.rf.x86_64.rpm
$ wget http://dag.wieers.com/rpm/packages/python-elementtree/python-elementtree-1.2.6-7.el4.rf.x86_64.rpm
$ wget http://dag.wieers.com/rpm/packages/yum/yum-2.4.2-0.4.el4.rf.noarch.rpm
$ sudo rpm -Uvh sqlite-2.8.17-1.el4.rf.x86_64.rpm python-sqlite-1.0.1-1.2.el4.rf.x86_64.rpm python-elementtree-1.2.6-7.el4.rf.x86_64.rpm yum-2.4.2-0.4.el4.rf.noarch.rpm

Creating a Directory Tree

I normally serve local repositories over the HTTP protocol, so I have created the directory tree in the web root.

$ mkdir -p /var/www/html/yum/CentOS/5/i386/os
$ mkdir -p /var/www/html/yum/CentOS/5/i386/extras
$ mkdir -p /var/www/html/yum/CentOS/5/i386/updates

Next copy all the RPMs from the distribution CDs as follows:

$ cd /var/www/html/yum/CentOS/5/i386/os
$ for num_cd in 1 2 3 4 5 6; do read -p "Insert CD number $num_cd and press Enter when ready.." key_press; cp -ar /media/cdrom/* /var/www/html/yum/CentOS/5/i386/os/ && eject; done

In case you have the ISO files, you can mount them as follows and copy the files:

$ mkdir temp_mnt
$ for num in 1 2 3 4 5 6; do mount -o loop CentOS5-i386-disc$num.iso temp_mnt;cp -ra temp_mnt/* /var/www/html/yum/CentOS/5/i386/os/;umount temp_mnt;done

Since we will be generating our own headers, we need to remove the repodata directory from the copied tree. In CentOS 5 the repodata directory is in the root of the distribution CD:

$ rm -rf /var/www/html/yum/CentOS/5/i386/os/repodata

Please note that on the RHEL 5 CDs there are multiple repodata directories, one per channel; they can be removed as follows:

$ rm -rf /var/www/html/yum/RHEL/5/i386/os/Cluster/repodata
$ rm -rf /var/www/html/yum/RHEL/5/i386/os/ClusterStorage/repodata
$ rm -rf /var/www/html/yum/RHEL/5/i386/os/Server/repodata
$ rm -rf /var/www/html/yum/RHEL/5/i386/os/VT/repodata

The reason why we need to remove this repodata directory and generate our own is that we have copied this tree from CD, and hence the base URL for all the RPMs in the headers is stored as media:.

Creating The Base OS Repository

createrepo is a Bourne shell script which in turn just calls an underlying Python program, /usr/share/createrepo/genpkgmetadata.py. To create a yum repository of all the packages from the base operating system, we execute the createrepo command against the directory which we want turned into a repository:

$ createrepo /var/www/html/yum/CentOS/5/i386/os/

The createrepo command will create all the required headers in the repodata directory. Normally we won't add any new RPM files to the base distribution, so we won't need to generate these headers again. But if for some reason we copy a new RPM file into the base repository, or replace an existing RPM file, then we need to regenerate the headers with the createrepo command.
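For example, if an RPM in the base tree is ever replaced (the package file name below is purely illustrative; on the CentOS 5 media the RPMs live in the CentOS/ sub-directory), the headers are rebuilt like this:

$ cp openssl-0.9.8b-8.3.el5.i386.rpm /var/www/html/yum/CentOS/5/i386/os/CentOS/
$ createrepo /var/www/html/yum/CentOS/5/i386/os/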

Configuring a client to use this base repository

On the client machine, remove the information for the existing standard distribution repositories. The repository configuration will be present either in the /etc/yum.conf file or in the /etc/yum.repos.d directory. If it is the latter, remove all the files from the /etc/yum.repos.d directory; if it is the former, delete all lines defining the default distribution repositories.

The IP address for my local yum server is 192.168.23.43.

The following is an example for a CentOS yum client; change it appropriately for a Fedora Core or RHEL yum client. Repositories can usually be mixed as well, but that may in some cases lead to dependency problems.

$ sudo rm -f /etc/yum.repos.d/*
$ sudo vi /etc/yum.repos.d/centos-base.repo 
[base]
name=CentOS-$releasever - Base
baseurl=http://192.168.23.43/yum/CentOS/$releasever/$basearch/os/
gpgcheck=1
gpgkey=http://192.168.23.43/yum/CentOS/$releasever/$basearch/os/RPM-GPG-KEY-CentOS-5

#released updates
#[updates]
#name=CentOS-$releasever - Updates
#baseurl=http://192.168.23.43/yum/CentOS/$releasever/$basearch/updates/
#gpgcheck=1
#gpgkey=http://192.168.23.43/yum/CentOS/$releasever/$basearch/os/RPM-GPG-KEY-CentOS-5

Don't worry about the second section containing the updates repository; we have not configured it yet and will be doing so shortly. This is why the second repository has been commented out.

Now to start using this new base repository, do the following:

$ sudo yum clean all
$ sudo yum update

Now any package in the repository can be installed using “yum install” and so on.

Setting up Updates and Extras Repository

After the base repository has been set up, it can only be used for installing new packages; but what about the security fixes/patches released by the distribution providers for existing packages? We need an updates repository for applying such packages. There is no hard and fast rule that we need a separate updates repository: if you prefer to have only one repository containing updated/patched versions of RPM packages, you can do that as well. All you need to do is replace the concerned RPM file in the base repository with the patched/updated RPM file and then re-run the createrepo command to regenerate the headers. After that a yum update on the client side will update the concerned RPM.

However, it is always advisable to keep the updated packages in a separate repository for better management, and not to touch the base repository after setup. This is the standard approach, and this is what we are going to do next.

To start with, we need a source from where we can get the updated packages. Choosing a mirror of the distribution in question and continuously syncing the updates from it is a good idea.

I synchronise with my chosen mirror as follows (shown here for CentOS; Fedora Core works the same way):

$ rsync rsync://mirrors.kernel.org/centos/5/updates/i386/RPMS/*.rpm /var/www/html/yum/CentOS/5/i386/updates/
$ rsync rsync://mirrors.kernel.org/centos/5/addons/i386/RPMS/*.rpm /var/www/html/yum/CentOS/5/i386/extras/
$ rsync rsync://mirrors.kernel.org/centos/5/centosplus/i386/RPMS/*.rpm /var/www/html/yum/CentOS/5/i386/extras/
$ rsync rsync://mirrors.kernel.org/centos/5/extras/i386/RPMS/*.rpm /var/www/html/yum/CentOS/5/i386/extras/
$ rsync rsync://mirrors.kernel.org/centos/5/fasttrack/i386/RPMS/*.rpm /var/www/html/yum/CentOS/5/i386/extras/

This rsync job can be run as a periodic cron job; the frequency depends on how critical updates are for you, but usually once a day is good enough (an example crontab entry is shown after the createrepo commands below). After the initial sync we can convert this collection of RPMs into a repository using the createrepo utility.

$ createrepo /var/www/html/yum/CentOS/5/i386/updates/
$ createrepo /var/www/html/yum/CentOS/5/i386/extras/
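For example, a root crontab entry (added with crontab -e) that syncs and re-indexes the updates repository once a day at 02:30 could look like this:

30 2 * * * rsync rsync://mirrors.kernel.org/centos/5/updates/i386/RPMS/*.rpm /var/www/html/yum/CentOS/5/i386/updates/ && createrepo /var/www/html/yum/CentOS/5/i386/updates/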

Now that we have the updates and extras repositories ready, we can uncomment the commented-out section in the /etc/yum.repos.d/centos-base.repo file and add a section for extras, so that it looks as follows:

$ sudo vi /etc/yum.repos.d/centos-base.repo
[base]
name=CentOS-$releasever - Base
baseurl=http://192.168.23.43/yum/CentOS/$releasever/$basearch/os/
gpgcheck=1
gpgkey=http://192.168.23.43/yum/CentOS/$releasever/$basearch/os/RPM-GPG-KEY-CentOS-5

# released updates
[updates]
name=CentOS-$releasever - Updates
baseurl=http://192.168.23.43/yum/CentOS/$releasever/$basearch/updates/
gpgcheck=1
gpgkey=http://192.168.23.43/yum/CentOS/$releasever/$basearch/os/RPM-GPG-KEY-CentOS-5

# Other CentOS repositories
[extras]
name=CentOS-$releasever - Extras
baseurl=http://192.168.23.43/yum/CentOS/$releasever/$basearch/extras/
gpgcheck=1
gpgkey=http://192.168.23.43/yum/CentOS/$releasever/$basearch/os/RPM-GPG-KEY-CentOS-5

In some cases we have found that the company firewall blocks all outgoing connections, and the only connections available are HTTP, and that too via a proxy server. In such cases a program called lftp is extremely helpful. RPM packages of lftp are available from the Dag Wieers repository. After installing it we can create a cron job from the following quick script:

$ vi ~/bin/lftp_centos_repo_sync.sh
#!/bin/sh
# Sync CentOS 5 updates and extras through an HTTP proxy using lftp,
# then regenerate the repository headers.
export http_proxy=http://localproxy.local:8080
export HTTP_PROXY=$http_proxy
cd /var/www/html/yum/CentOS/5/i386/updates
lftp -c 'mget http://mirrors.kernel.org/centos/5/updates/i386/RPMS/*.rpm'
createrepo /var/www/html/yum/CentOS/5/i386/updates
cd /var/www/html/yum/CentOS/5/i386/extras
lftp -c 'mget http://mirrors.kernel.org/centos/5/fasttrack/i386/RPMS/*.rpm'
lftp -c 'mget http://mirrors.kernel.org/centos/5/addons/i386/RPMS/*.rpm'
lftp -c 'mget http://mirrors.kernel.org/centos/5/centosplus/i386/RPMS/*.rpm'
lftp -c 'mget http://mirrors.kernel.org/centos/5/extras/i386/RPMS/*.rpm'
createrepo /var/www/html/yum/CentOS/5/i386/extras
$ chmod +x ~/bin/lftp_centos_repo_sync.sh

Restricting a repository to a particular set of packages

Sometimes it is desirable to add a repository to the local yum configuration but restrict it to a certain set of packages. This can be done by adding an includepkgs line to the repository config as follows:

$ cat /etc/yum.repos.d/dag-wieers.repo
[dag]
name=Dag-CentOS-$releasever
baseurl=http://192.168.23.43/yum/RHEL/$releasever/$basearch/dag/
gpgcheck=1
gpgkey=http://192.168.23.43/yum/RHEL/$releasever/$basearch/dag/RPM-GPG-KEY.dag.txt
includepkgs=clamav clamav-devel clamav-db unrar

This will make sure that only the listed packages are downloaded/updated from the local dag repository.
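To verify the restriction, you can ask yum to list what it now sees from this repository alone; only the four packages above (and their updates) should appear:

$ yum --disablerepo='*' --enablerepo=dag list available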


Monitoring Lotus Notes/Domino Servers

Very recently I was asked to set up Nagios to monitor Lotus Notes/Domino servers. There were around 500 servers across the globe. It was an all-Windows shop, and the existing monitoring was being done using GSX, HP Systems Insight Manager and IBM Director. The client wanted a comprehensive solution so that they would have a single monitoring interface to look at, and after an initial discussion they decided to go ahead with Nagios.

This document looks at monitoring Lotus Notes/Domino servers using SNMP through Nagios. I have provided some of the required OIDs and their initial warning and critical threshold values in tabular format; there are many more interesting OIDs listed in the domino.mib file. I have also attached the Nagios command definition file and service definition files at the end of the document. In order to use certain checks, some plugins are required, which can be downloaded from http://www.barbich.net/websvn/wsvn/nagios/nagios/plugins/check_lotus_state.pl.

Note – I recently found that the required plugins are not available on the original site anymore, so I have made my copy available with this document. You can download the scripts from the link at the bottom of the document.

To start with, I asked the Windows administrators to install the Lotus/Domino SNMP Agent on all servers, and after that I got hold of a copy of the domino.mib file, which is located in C:\system32.

Next I listed all the interesting parameters from the domino.mib file and started querying a set of test servers to find out whether a value was being returned or not. Following is the OID list and what each OID means. Most of these checks are only valid on the active node; this is important to know if the Domino servers are in an HA cluster (active-standby pair). If there is only one Domino server, then all these checks apply to it.
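For the ad-hoc querying I used the net-snmp command line tools. For example, the server availability index can be fetched like this (host name and community string are placeholders; enterprises translates to .1.3.6.1.4.1):

$ snmpget -v1 -c public domino01 .1.3.6.1.4.1.334.72.1.1.6.3.19.0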


Monitoring Checks on Active Node
Nagios Service Check | OID | Description | Thresholds (w = warning, c = critical)
dead-mail | enterprises.334.72.1.1.4.1.0 | Number of dead (undeliverable) mail messages | w 80, c 100
routing-failures | enterprises.334.72.1.1.4.3.0 | Total number of routing failures since the server started | w 100, c 150
pending-routing | enterprises.334.72.1.1.4.6.0 | Number of mail messages waiting to be routed | w 10, c 20
pending-local | enterprises.334.72.1.1.4.7.0 | Number of pending mail messages awaiting local delivery | w 10, c 20
average-hops | enterprises.334.72.1.1.4.10.0 | Average number of server hops for mail delivery | w 10, c 15
max-mail-delivery-time | enterprises.334.72.1.1.4.12.0 | Maximum time for mail delivery in seconds | w 300, c 600
router-unable-to-transfer | enterprises.334.72.1.1.4.19.0 | Number of mail messages the router was unable to transfer | w 80, c 100
mail-held-in-queue | enterprises.334.72.1.1.4.21.0 | Number of mail messages in the message queue on hold | w 80, c 100
mails-pending | enterprises.334.72.1.1.4.31.0 | Number of mail messages pending | w 80, c 100
mailbox-dns-pending | enterprises.334.72.1.1.4.34.0 | Number of mail messages in MAIL.BOX waiting for DNS | w 10, c 20
databases-in-cache | enterprises.334.72.1.1.10.15.0 | Number of databases currently in the cache. Administrators should monitor this to see whether it approaches the NSF_DBCACHE_MAXENTRIES setting; if it does, the cache is under pressure, and if this occurs frequently the setting should be increased. | w 80, c 100
database-cache-hits | enterprises.334.72.1.1.10.17.0 | Number of times an lnDBCacheInitialDbOpen is satisfied by finding a database in the cache. A high hits-to-opens ratio indicates the database cache is working effectively, since most users open databases from the cache without the wait of an initial (non-cache) open. If the ratio is low, increase NSF_DBCACHE_MAXENTRIES. |
database-cache-overcrowding | enterprises.334.72.1.1.10.21.0 | Number of times a database is not placed into the cache on close because lnDBCacheCurrentEntries equals or exceeds lnDBCacheMaxEntries*1.5. This should stay low; if it begins to rise, increase the NSF_DbCache_Maxentries setting. | w 10, c 20
replicator-status | enterprises.334.72.1.1.6.1.3.0 | Status of the Replicator task |
router-status | enterprises.334.72.1.1.6.1.4.0 | Status of the Router task |
replication-failed | enterprises.334.72.1.1.5.4.0 | Number of replications that generated an error |
server-availability-index | enterprises.334.72.1.1.6.3.19.0 | Current percentage index of the server's availability, in the range 0-100; zero indicates no available resources, 100 indicates the server is completely available |
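Each of these rows maps onto a generic SNMP check. As a sketch of how a row translates into a Nagios check, here is the dead-mail check run by hand with the standard check_snmp plugin (the plugin path, host name and community string are assumptions for illustration):

$ /usr/lib/nagios/plugins/check_snmp -H domino01 -C public -o .1.3.6.1.4.1.334.72.1.1.4.1.0 -w 80 -c 100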

Interesting OIDs to plot for trend analysis

enterprises.334.72.1.1.4.2.0 | Number of messages received by the router
enterprises.334.72.1.1.4.4.0 | Total number of mail messages routed since the server started
enterprises.334.72.1.1.4.5.0 | Number of messages the router attempted to transfer
enterprises.334.72.1.1.4.8.0 | Notes server’s mail domain
enterprises.334.72.1.1.4.11.0 | Average size of mail messages delivered in bytes
enterprises.334.72.1.1.4.13.0 | Maximum number of server hops for mail delivery
enterprises.334.72.1.1.4.14.0 | Maximum size of mail delivered in bytes
enterprises.334.72.1.1.4.15.0 | Minimum time for mail delivery in seconds
enterprises.334.72.1.1.4.16.0 | Minimum number of server hops for mail delivery
enterprises.334.72.1.1.4.17.0 | Minimum size of mail delivered in bytes
enterprises.334.72.1.1.4.18.0 | Total mail transferred in kilobytes
enterprises.334.72.1.1.4.20.0 | Count of actual mail items delivered (may differ from messages delivered, which counts individual messages)
enterprises.334.72.1.1.4.26.0 | Peak transfer rate
enterprises.334.72.1.1.4.27.0 | Peak number of messages transferred
enterprises.334.72.1.1.4.32.0 | Number of mail messages moved from MAIL.BOX via SMTP
enterprises.334.72.1.1.15.1.24.0 | Cache command hit rate
enterprises.334.72.1.1.15.1.26.0 | Cache database hit rate
enterprises.334.72.1.1.11.6.0 | Hourly access denials
enterprises.334.72.1.1.15.1.13.0 | Requests per 5 minutes
enterprises.334.72.1.1.11.9.0 | Unsuccessful runs

Files and Scripts


Apache LDAP Authentication

The mod_auth_ldap module allows an LDAP directory to be used as the user database for HTTP Basic authentication. This document describes an example implementation on Red Hat Enterprise Linux 4; it applies to any Linux distribution in general, provided the mod_auth_ldap module is loaded.

I have used Microsoft Active Directory as my LDAP server, as that’s what I had at the time of writing, but any LDAP server will do.

Setting up the web server

On Red Hat Enterprise Linux 4, when the httpd package is installed, mod_auth_ldap gets installed with it. By default the Red Hat Enterprise Linux 4 httpd.conf file does not allow any settings to be overridden by .htaccess files. Following are the default settings:

<Directory />
    Options FollowSymLinks
    AllowOverride None
</Directory>

<Directory "/var/www/html">
    Options Indexes FollowSymLinks
    AllowOverride None
    .......
    .......
</Directory>

I changed the settings for /var/www/html to:

<Directory "/var/www/html">
    Options Indexes FollowSymLinks
    AllowOverride AuthConfig
    .......
    .......
</Directory>

This enabled me to put the required authentication directives in .htaccess files. You need administrative access to the web server for this change, or must get it done by your administrator.

Next we need to find out whether the mod_auth_ldap module is being loaded. This can be done as follows on RHEL 4:

$ grep mod_auth_ldap /etc/httpd/conf/httpd.conf
LoadModule auth_ldap_module modules/mod_auth_ldap.so

Test Setup

I have created a directory test_auth in the DocumentRoot, to which I want to restrict access using LDAP authentication. The following commands create the required directory and an index.html file in it.

$ sudo mkdir /var/www/html/test_auth
$ sudo tee /var/www/html/test_auth/index.html >/dev/null <<__EOF__
<html>
<head><title>Test page</title></head>
<body><h1>Test page</h1><p>Hello World!</p></body>
</html>
__EOF__

Now we can create an .htaccess file containing the required authentication directives:

$ sudo vi /var/www/html/test_auth/.htaccess
AuthType Basic
AuthName "Restricted Access"
AuthLDAPEnabled on
AuthLDAPURL 'ldap://msadc01.unixclinic.net:389/ou=Users and Machines,ou=IN,dc=unixclinic,dc=net?sAMAccountName?sub?(memberOf=cn=Infrastructure Team,ou=Groups,ou=Users and Machines,ou=IN,dc=unixclinic,dc=net)'
AuthLDAPBindDN "apache_ldap_query@unixclinic.net"
AuthLDAPBindPassword pA554Auth
require valid-user

The AuthLDAPURL specifies the LDAP server, the base DN, the attribute to use in the search, as well as the extra search filter to use; it is a single line without any line breaks. The URL here restricts access to the members of the “Infrastructure Team” group. AuthLDAPBindDN is an optional DN (Distinguished Name) to use when binding to the LDAP server; if it is not specified then mod_auth_ldap will use an anonymous bind. Most professionally set up LDAP servers (and Active Directory servers) do not allow anonymous binds against the directory.
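If authentication fails, it is worth verifying the bind DN, password and filter outside Apache first. A quick test with ldapsearch (from openldap-clients), using the same values as the .htaccess file above (the sAMAccountName in the filter is just an example):

$ ldapsearch -x -H ldap://msadc01.unixclinic.net:389 -D "apache_ldap_query@unixclinic.net" -w pA554Auth -b "ou=Users and Machines,ou=IN,dc=unixclinic,dc=net" "(sAMAccountName=jdoe)"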



Using SNAT for Highly Available Services

Problem

Often, network-based services are restricted to a particular source IP address. A common example is SNMP: a good system/network administrator will restrict access to the SNMP daemon to a particular host, usually a central management server. Sometimes these central management servers are an HA pair. Under these circumstances, a service address can be used for the active node; this service address is the one granted access to the desired network resource. Heartbeat will usually start this service IP address as a resource on the active node, which results in the active node taking over the IP address and listening on it for incoming requests. But this still does not solve the problem of the active node attempting to access a network resource, because all packets originating from this node will bear the primary IP address of the node, and not the secondary or aliased address(es).

Solution

For such cases, SNAT (Source Network Address Translation) can be useful. Using SNAT we can ask the kernel to change the source IP address on all outgoing packets. However, the IP address which we want on our packets must be present on the machine as a primary, secondary or aliased IP address. This can be checked as follows:

# ip addr show bond0
6: bond0: <BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue
    link/ether 00:18:fe:89:df:d8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.3/16 brd 192.168.255.255 scope global bond0
    inet 192.168.1.2/16 brd 192.168.255.255 scope global secondary bond0:0
    inet 192.168.1.1/16 brd 192.168.255.255 scope global secondary bond0:1
    inet6 fe80::218:feff:fe89:dfd8/64 scope link
       valid_lft forever preferred_lft forever

or

# ifconfig bond0
bond0     Link encap:Ethernet  HWaddr 00:18:FE:89:DF:D8
          inet addr:192.168.1.3  Bcast:192.168.255.255  Mask:255.255.0.0
          inet6 addr: fe80::218:feff:fe89:dfd8/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:53589964 errors:0 dropped:0 overruns:0 frame:0
          TX packets:25857501 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:40502210697 (37.7 GiB)  TX bytes:4148482317 (3.8 GiB)

Instead of specifying an interface, all interfaces can also be viewed using:

# ip addr show
1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond0 qlen 1000
    link/ether 00:18:fe:89:df:d8 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::218:feff:fe89:dfd8/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond0 qlen 1000
    link/ether 00:18:fe:89:df:d8 brd ff:ff:ff:ff:ff:ff
4: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0
6: bond0: <BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue
    link/ether 00:18:fe:89:df:d8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.3/16 brd 192.168.255.255 scope global bond0
    inet 192.168.1.2/16 brd 192.168.255.255 scope global secondary bond0:0
    inet 192.168.1.1/16 brd 192.168.255.255 scope global secondary bond0:1
    inet6 fe80::218:feff:fe89:dfd8/64 scope link
       valid_lft forever preferred_lft forever

or

# ifconfig
bond0     Link encap:Ethernet  HWaddr 00:18:FE:89:DF:D8
          inet addr:192.168.1.3  Bcast:192.168.255.255  Mask:255.255.0.0
          inet6 addr: fe80::218:feff:fe89:dfd8/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:53587551 errors:0 dropped:0 overruns:0 frame:0
          TX packets:25855600 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:40501872867 (37.7 GiB)  TX bytes:4148267377 (3.8 GiB)

bond0:0   Link encap:Ethernet  HWaddr 00:18:FE:89:DF:D8
          inet addr:192.168.1.2  Bcast:192.168.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1

bond0:1   Link encap:Ethernet  HWaddr 00:18:FE:89:DF:D8
          inet addr:192.168.1.1  Bcast:192.168.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1

eth0      Link encap:Ethernet  HWaddr 00:18:FE:89:DF:D8
          inet6 addr: fe80::218:feff:fe89:dfd8/64 Scope:Link
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:53587551 errors:0 dropped:0 overruns:0 frame:0
          TX packets:25855600 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:40501872867 (37.7 GiB)  TX bytes:4148267377 (3.8 GiB)
          Interrupt:185

eth1      Link encap:Ethernet  HWaddr 00:18:FE:89:DF:D8
          UP BROADCAST SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
          Interrupt:193

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:536101 errors:0 dropped:0 overruns:0 frame:0
          TX packets:536101 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:59243777 (56.4 MiB)  TX bytes:59243777 (56.4 MiB)

My NICs are bonded and hence bond0 is the interface I use.

Setting Up SNAT

In Linux, IPTables can be used to set up SNAT. To change the source IP address of all packets going out of the box, the following rule can be used:

$ sudo /sbin/iptables -t nat -A POSTROUTING -o bond0 -j SNAT --to-source 192.168.1.1

The result can be seen as follows:

$ sudo /sbin/iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
SNAT       all  --  anywhere             anywhere            to:192.168.1.1

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

I normally restrict SNAT to selected services and destination IP addresses only. The following three IPTables commands translate the source address to 192.168.1.1 for, respectively, all packets destined for 10.199.65.191, only ICMP packets destined for 192.168.2.4, and all packets destined for the network 192.168.1.0/24:

$ sudo /sbin/iptables -t nat -A POSTROUTING -d 10.199.65.191 -o bond0 -j SNAT --to-source 192.168.1.1
$ sudo /sbin/iptables -t nat -A POSTROUTING -d 192.168.2.4 -p ICMP -o bond0 -j SNAT --to-source 192.168.1.1
$ sudo /sbin/iptables -t nat -A POSTROUTING -d 192.168.1.0/24 -o bond0 -j SNAT --to-source 192.168.1.1

The result of all these commands can be seen as:

$ sudo /sbin/iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
SNAT       all  --  anywhere             anywhere            to:192.168.1.1
SNAT       all  --  anywhere             10.199.65.191       to:192.168.1.1
SNAT       icmp --  anywhere             192.168.2.4         to:192.168.1.1
SNAT       all  --  anywhere             192.168.1.0/24      to:192.168.1.1

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Setting Heartbeat and IPTables for SNAT

The /etc/ha.d/haresources file in Heartbeat can be set up to take the desired IP address as a resource and associate it with a script which can start/stop/restart these IPTables rules.

$ sudo vi /etc/ha.d/haresources
node01 192.168.1.1 iptables

Red Hat and Fedora have such a script, located at /etc/init.d/iptables. It reads the file /etc/sysconfig/iptables, which contains the rules in iptables-save format. I have created a similar script for Debian and derivative distributions, which reads the rules from the /etc/iptables file. The script is given below:

#! /bin/sh
# Script      - iptables
# Description - Read IPTables rule from a file in iptables-save format.
# Author      - Ajitabh Pandey 
#
PATH=/usr/sbin:/usr/bin:/sbin:/bin
DESC="IPTables Configuration Script"
NAME=iptables
DAEMON=/sbin/$NAME
SCRIPTNAME=/etc/init.d/$NAME

# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0

# Load the VERBOSE setting and other rcS variables
[ -f /etc/default/rcS ] && . /etc/default/rcS

if [ ! -e /etc/iptables ]
then
        echo "no valid iptables config file found!"
        exit 1
fi

case "$1" in
  start)
        echo "Starting $DESC:" "$NAME"
        /sbin/iptables-restore /etc/iptables
        ;;
  stop)
        echo "Stopping $DESC:" "$NAME"
        $DAEMON -F -t nat
        $DAEMON -F
        ;;
  restart|force-reload)
        echo "Restarting $DESC:" "$NAME"
        $DAEMON -F -t nat
        $DAEMON -F
        /sbin/iptables-restore /etc/iptables
        ;;
  *)
        echo "Usage: $SCRIPTNAME {start|stop|restart|force-reload}" >&2
        exit 3
        ;;
esac

Following is a sample iptables rules file in iptables-save format.

*nat
:PREROUTING ACCEPT [53:8294]
:POSTROUTING ACCEPT [55:11107]
:OUTPUT ACCEPT [55:11107]

# Allow all ICMP packets to be SNATed
-A POSTROUTING  -p ICMP -o bond0 -j SNAT --to-source 192.168.0.1

# Allow packets destined for SNMP port (161) on local network to be SNATed
-A POSTROUTING -d 192.168.0.0/24 -p tcp -m tcp --dport snmp -o bond0 -j SNAT --to-source 192.168.0.1
-A POSTROUTING -d 192.168.0.0/24 -p udp -m udp --dport snmp -o bond0 -j SNAT --to-source 192.168.0.1

# These are for the time servers on the internet (NTP uses UDP port 123)
-A POSTROUTING -p udp -m udp --dport ntp -o bond0 -j SNAT --to-source 192.168.0.1
COMMIT

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [144:12748]
COMMIT
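
Such a rules file need not be written by hand; once the rules have been set up live with iptables commands, they can be captured in iptables-save format, for example:

$ sudo sh -c 'iptables-save > /etc/iptables'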