Ajitabh Pandey's Soul & Syntax

Exploring systems, souls, and stories – one post at a time

Author: Ajitabh

  • Setting Up A Local Yum Repository

    Introduction

    Yum (Yellowdog Updater, Modified) is a popular high-level package management tool for RPM (Red Hat Package Manager) packages. RPM packages are used by several popular distributions based on Red Hat Linux, among them Red Hat Enterprise Linux (RHEL), CentOS, Fedora Core, SUSE, openSUSE and Mandriva. CentOS is built from Red Hat Enterprise Linux sources and hence can share packages with RHEL and Fedora Core. SUSE and Mandriva RPMs have a slightly different format, and as a result they are somewhat incompatible with the rest of the distributions.

    Irrespective of the differences in the RPM file format, each of these distributions is capable of maintaining a yum repository for package management. In my experience, infrastructures based on Red Hat will have a mix of RHEL, Fedora Core and CentOS servers, and RPMs can be shared among these versions with a little care to keep the dependencies in mind.

    With the release of Red Hat Enterprise Linux 5, Red Hat has formally dumped the ancient package manager up2date and started using yum as the official package manager. Fedora and CentOS had been using yum for quite a long time anyway. Even with yum there is a difference in the repository structure between older and newer versions. The newer tool to maintain the repository metadata is called createrepo, and the repository it creates is repomd-style. For creating and maintaining an old-style yum repository, a tool called yum-arch is used; yum-arch is now deprecated. The following table briefly presents each tool and the type of repository it supports:

    Yum Tools
    Tool         Package      Provides
    genbase      apt          apt support
    yum-arch     yum          yum support
    createrepo   createrepo   repomd support (new yum and new apt)

    To the best of my knowledge, the up2date program which ships with RHEL 4 Update 5 does not support the newer yum repositories created using createrepo. So if you have such servers in your setup, you are better off creating both the old-style repo using the yum-arch command and the new-style repo using the createrepo command. Both sets of headers can co-exist. The new headers are stored by createrepo in a directory called repodata, whereas the old headers are stored by yum-arch in a directory called headers.
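
    As a quick sketch (using a throwaway directory as a stand-in for a real repository root), the presence of the two metadata styles can be checked simply by looking for these two directories:

```shell
# Sketch: report which metadata style(s) a repository directory carries.
# REPO is a throwaway demo path here; point it at your real repo root.
REPO=$(mktemp -d)
mkdir -p "$REPO/repodata" "$REPO/headers"   # simulate a dual-style repository
[ -d "$REPO/repodata" ] && echo "new-style (createrepo) metadata present"
[ -d "$REPO/headers" ] && echo "old-style (yum-arch) metadata present"
```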

    I have had the opportunity to set up local yum repositories at my workplace and also for several clients.

    This document describes a repository server on CentOS 5 for CentOS 5. Along the way I have also mentioned some points worth noting if you have RHEL 4 or older servers in your setup, or if you want to use an RHEL 4 server as the repository server.

    It is not necessary to have a CentOS, Fedora Core or RHEL server as the repository server. I have very recently set up a Debian Etch server to do the same task. Starting from Debian Etch and Ubuntu Feisty, the createrepo command is available as a Deb package.

    Repository Server

    In order to create a yum repository server for your organisation, the following software is required:

    • A web server, FTP server or NFS server to serve the repository to clients.
    • The createrepo package, which ships with CentOS, Fedora Core and RHEL 5. For RHEL 4 it needs to be installed from external sources; one good source is the Dag Wieers RPM repository.

    On CentOS do the following to install the dependencies:

    # yum install httpd createrepo
    

    On an RHEL 4 (or RHEL 3) server we need to download the createrepo, python-urlgrabber, sqlite, python-sqlite, python-elementtree and yum packages from the Dag Wieers repository, as they are not available in the official repository. To install these packages on RHEL 4, the following commands can be used:

    $ sudo up2date -i httpd
    $ wget http://dag.wieers.com/rpm/packages/python-urlgrabber/python-urlgrabber-2.9.7-1.2.el4.rf.noarch.rpm
    $ wget http://dag.wieers.com/rpm/packages/createrepo/createrepo-0.4.6-1.el4.rf.noarch.rpm
    $ wget http://dag.wieers.com/rpm/packages/yum/yum-2.4.2-0.4.el4.rf.noarch.rpm
    $ wget http://dag.wieers.com/rpm/packages/python-elementtree/python-elementtree-1.2.6-7.el4.rf.x86_64.rpm
    $ wget http://dag.wieers.com/rpm/packages/python-sqlite/python-sqlite-1.0.1-1.2.el4.rf.x86_64.rpm
    $ wget http://dag.wieers.com/rpm/packages/sqlite/sqlite-2.8.17-1.el4.rf.x86_64.rpm
    $ sudo rpm -Uvh *.rpm
    $ rm -f *.rpm 
    

    Client

    On the client machines which will be configured to use this repository, we need the following installed:

    • Yum (depends on python-elementtree, python-sqlite and sqlite)

    On CentOS, Fedora Core and RHEL 5 these should already be installed. On RHEL 4 and older they can be installed from the Dag Wieers RPM repository. Although up2date from RHEL 4 Update 2 onwards supports yum and apt repositories, I have quite often seen problems with it, and in any case Red Hat has abandoned it from RHEL 5 onwards.

    The following is how we can install yum on an RHEL 4 (x86_64) machine:

    $ wget http://dag.wieers.com/rpm/packages/sqlite/sqlite-2.8.17-1.el4.rf.x86_64.rpm
    $ wget http://dag.wieers.com/rpm/packages/python-sqlite/python-sqlite-1.0.1-1.2.el4.rf.x86_64.rpm
    $ wget http://dag.wieers.com/rpm/packages/python-elementtree/python-elementtree-1.2.6-7.el4.rf.x86_64.rpm
    $ wget http://dag.wieers.com/rpm/packages/yum/yum-2.4.2-0.4.el4.rf.noarch.rpm
    $ sudo rpm -Uvh sqlite-2.8.17-1.el4.rf.x86_64.rpm python-sqlite-1.0.1-1.2.el4.rf.x86_64.rpm python-elementtree-1.2.6-7.el4.rf.x86_64.rpm yum-2.4.2-0.4.el4.rf.noarch.rpm
    

    Creating a Directory Tree

    I normally serve local repositories over HTTP, so I have created the directory tree in the web root.

    $ mkdir -p /var/www/html/yum/CentOS/5/i386/os
    $ mkdir -p /var/www/html/yum/CentOS/5/i386/extras
    $ mkdir -p /var/www/html/yum/CentOS/5/i386/updates
    

    Next copy all the RPMs from the distribution CDs as follows:

    $ cd /var/www/html/yum/CentOS/5/i386/os
    $ for num_cd in 1 2 3 4 5 6; do read -p "Insert CD number $num_cd and press Enter when ready... " key_press; cp -ar /media/cdrom/* /var/www/html/yum/CentOS/5/i386/os/ && eject; done
    

    In case you have the ISO files, you can mount them as follows and copy the files:

    $ mkdir temp_mnt
    $ for num in 1 2 3 4 5 6; do sudo mount -o loop CentOS5-i386-disc$num.iso temp_mnt; cp -ra temp_mnt/* /var/www/html/yum/CentOS/5/i386/os/; sudo umount temp_mnt; done
    

    Since we will be generating our own headers, we need to remove the repodata directory from the copied tree. In CentOS 5 the repodata directory is in the root of the distribution CD:

    $ rm -rf /var/www/html/yum/CentOS/5/i386/os/repodata
    

    Please note that on RHEL 5 CDs there are multiple repodata directories, one per channel; they can be removed as follows:

    $ rm -rf /var/www/html/yum/RHEL/5/i386/os/Cluster/repodata
    $ rm -rf /var/www/html/yum/RHEL/5/i386/os/ClusterStorage/repodata
    $ rm -rf /var/www/html/yum/RHEL/5/i386/os/Server/repodata
    $ rm -rf /var/www/html/yum/RHEL/5/i386/os/VT/repodata
    

    The reason we need to remove this repodata directory and generate our own is that we have copied this tree from CD, and hence the base URL for all the RPMs in the headers is stored as media:.

    Creating The Base OS Repository

    createrepo is a Bourne shell script which in turn just calls the underlying Python program /usr/share/createrepo/genpkgmetadata.py. To create a yum repository of all the packages from the base operating system, we execute the createrepo command against the directory which we want to turn into a repository:

    $ createrepo /var/www/html/yum/CentOS/5/i386/os/
    

    The createrepo command will create all the required headers in the repodata directory. Normally we won't add any new RPM files to the base distribution, so we won't need to generate these headers again. But if for some reason we copy a new RPM file into the base repository, or replace an existing RPM file, then we need to regenerate the headers with the createrepo command.
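
    That decision can be sketched as follows (a temporary directory stands in for the real repository path here): compare the RPM timestamps against the repodata directory and only regenerate when something is newer.

```shell
# Sketch: decide whether repodata/ is stale relative to the RPM files.
# A temp dir stands in for /var/www/html/yum/CentOS/5/i386/os/ here.
REPO=$(mktemp -d)
mkdir -p "$REPO/repodata"
sleep 1                          # ensure the RPM below gets a newer timestamp
touch "$REPO/new-package.rpm"    # simulate a freshly copied RPM
if [ -n "$(find "$REPO" -maxdepth 1 -name '*.rpm' -newer "$REPO/repodata")" ]; then
    echo "metadata stale - re-run createrepo on $REPO"
fi
```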

    Configuring a client to use this base repository

    On the client machine, remove the information for the existing standard distribution repositories. The repository configuration will be present either in the /etc/yum.conf file or in the /etc/yum.repos.d directory. If it is the latter, remove all the files from the /etc/yum.repos.d directory; if it is the former, delete all lines defining the default distribution repositories.

    The IP address for my local yum server is 192.168.23.43.

    The following is an example for a CentOS yum client; change it appropriately for a Fedora Core or RHEL yum client. Repositories can usually also be mixed, but that may in some cases lead to dependency problems.

    $ sudo rm -f /etc/yum.repos.d/*
    $ sudo vi /etc/yum.repos.d/centos-base.repo 
    [base]
    name=CentOS-$releasever - Base
    baseurl=http://192.168.23.43/yum/CentOS/$releasever/$basearch/os/
    gpgcheck=1
    gpgkey=http://192.168.23.43/yum/CentOS/$releasever/$basearch/os/RPM-GPG-KEY-CentOS-5
    
    # released updates
    #[updates]
    #name=CentOS-$releasever - Updates
    #baseurl=http://192.168.23.43/yum/CentOS/$releasever/$basearch/updates/
    #gpgcheck=1
    #gpgkey=http://192.168.23.43/yum/CentOS/$releasever/$basearch/os/RPM-GPG-KEY-CentOS-5
    

    Don't worry about the second section containing the updates repository; we have not configured it yet and will be doing so shortly, which is why it has been commented out.

    Now to start using this new base repository, do the following:

    $ sudo yum clean all
    $ sudo yum update
    

    Now any package in the repository can be installed using yum install <package-name>, and so on.

    Setting up Updates and Extras Repository

    After the base repository has been set up, it can only be used for installing new packages. But what about the security fixes/patches released by the distribution providers for existing packages? We need an updates repository for applying such packages. There is no hard and fast rule that we need a separate updates repository; if you prefer to have only one repository containing updated/patched versions of the RPM packages, you can do that as well. All you need to do is replace the concerned RPM file in the base repository with the patched/updated RPM file and then re-run the createrepo command to regenerate the headers. After that, a yum update on the client side will update the concerned RPM.

    However, it is always advisable to keep the updated packages in a separate repository for better management, and not to touch the base repository after setup. This is the standard approach, and this is what we are going to do next.

    To start with, we need a source from which to get the updated packages. Choosing one of the mirrors of the distribution in question and regularly syncing the updates from it is a good idea.

    I synchronise with my chosen mirrors for CentOS and Fedora Core as follows:

    $ rsync rsync://mirrors.kernel.org/centos/5/updates/i386/RPMS/*.rpm /var/www/html/yum/CentOS/5/i386/updates/
    $ rsync rsync://mirrors.kernel.org/centos/5/addons/i386/RPMS/*.rpm /var/www/html/yum/CentOS/5/i386/extras/
    $ rsync rsync://mirrors.kernel.org/centos/5/centosplus/i386/RPMS/*.rpm /var/www/html/yum/CentOS/5/i386/extras/
    $ rsync rsync://mirrors.kernel.org/centos/5/extras/i386/RPMS/*.rpm /var/www/html/yum/CentOS/5/i386/extras/
    $ rsync rsync://mirrors.kernel.org/centos/5/fasttrack/i386/RPMS/*.rpm /var/www/html/yum/CentOS/5/i386/extras/
    

    This rsync job can be run as a periodic cron job; the frequency depends on how critical updates are for you, and usually once a day is good enough. After the initial sync we can convert this collection of RPMs into a repository using the createrepo utility.

    $ createrepo /var/www/html/yum/CentOS/5/i386/updates/
    $ createrepo /var/www/html/yum/CentOS/5/i386/extras/
    
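
    As an illustration, wrapping the rsync and createrepo commands above in a script and running it nightly could look like this in the crontab (the script path and the 02:00 schedule are arbitrary choices):

```
# m h dom mon dow command  -- sync the repositories every night at 02:00
0 2 * * * /root/bin/centos_repo_sync.sh >>/var/log/repo_sync.log 2>&1
```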

    Now that we have the updates repository ready, we can uncomment the commented-out section in the /etc/yum.repos.d/centos-base.repo file so that it looks as follows:

    $ sudo vi /etc/yum.repos.d/centos-base.repo
    [base]
    name=CentOS-$releasever - Base
    baseurl=http://192.168.23.43/yum/CentOS/$releasever/$basearch/os/
    gpgcheck=1
    gpgkey=http://192.168.23.43/yum/CentOS/$releasever/$basearch/os/RPM-GPG-KEY-CentOS-5
    
    # released updates
    [updates]
    name=CentOS-$releasever - Updates
    baseurl=http://192.168.23.43/yum/CentOS/$releasever/$basearch/updates/
    gpgcheck=1
    gpgkey=http://192.168.23.43/yum/CentOS/$releasever/$basearch/os/RPM-GPG-KEY-CentOS-5
    
    # Other CentOS repositories
    [extras]
    name=CentOS-$releasever - Extras
    baseurl=http://192.168.23.43/yum/CentOS/$releasever/$basearch/extras/
    gpgcheck=1
    gpgkey=http://192.168.23.43/yum/CentOS/$releasever/$basearch/os/RPM-GPG-KEY-CentOS-5
    

    In some cases we have found that the company firewall blocks all outgoing connections, and the only connections available are HTTP, and even those only through a proxy server. In such cases a program called lftp is extremely helpful. RPM packages of lftp are available from the Dag Wieers repository. After installing it we can create a cron job from the following quick script:

    $ vi ~/bin/lftp_centos_repo_sync.sh
    #!/bin/sh
    export http_proxy=http://localproxy.local:8080
    export HTTP_PROXY=$http_proxy
    cd /var/www/html/yum/CentOS/5/i386/updates
    lftp -c mget "http://mirrors.kernel.org/centos/5/updates/i386/RPMS/*.rpm"
    createrepo /var/www/html/yum/CentOS/5/i386/updates
    cd /var/www/html/yum/CentOS/5/i386/extras
    lftp -c mget "http://mirrors.kernel.org/centos/5/fasttrack/i386/RPMS/*.rpm"
    lftp -c mget "http://mirrors.kernel.org/centos/5/addons/i386/RPMS/*.rpm"
    lftp -c mget "http://mirrors.kernel.org/centos/5/centosplus/i386/RPMS/*.rpm"
    lftp -c mget "http://mirrors.kernel.org/centos/5/extras/i386/RPMS/*.rpm"
    createrepo /var/www/html/yum/CentOS/5/i386/extras
    $ chmod +x ~/bin/lftp_centos_repo_sync.sh
    

    Restricting a repository to a particular set of packages

    Sometimes it is desirable to add a repository to the local yum configuration, but restrict it to a certain set of packages only. This can be done by adding an includepkgs line to the repository configuration as follows:

    $ cat /etc/yum.repos.d/dag-wieers.repo
    [dag]
    name=Dag-CentOS-$releasever
    baseurl=http://192.168.23.43/yum/RHEL/$releasever/$basearch/dag/
    gpgcheck=1
    gpgkey=http://192.168.23.43/yum/RHEL/$releasever/$basearch/dag/RPM-GPG-KEY.dag.txt
    includepkgs=clamav clamav-devel clamav-db unrar
    

    This will make sure that only the listed packages are downloaded/updated from the local dag repository.

  • Monitoring Lotus Notes/Domino Servers

    Very recently I was asked to set up Nagios to monitor Lotus Notes/Domino servers. There were around 500 servers across the globe. It was an all-Windows shop, and the existing monitoring was being done using GSX, HP Systems Insight Manager and IBM Director. The client wanted a comprehensive solution giving them a single monitoring interface to look at, and after an initial discussion they decided to go ahead with Nagios.

    This document looks at monitoring Lotus Notes/Domino servers through Nagios using SNMP. I have provided some of the required OIDs and their initial warning and critical threshold values in tabular format; there are many more interesting OIDs listed in the domino.mib file. I have also attached the Nagios command definition file and service definition files at the end of the document. In order to use certain checks, some plugins are required, which can be downloaded from http://www.barbich.net/websvn/wsvn/nagios/nagios/plugins/check_lotus_state.pl.

    Note – I recently found that the required plugins are not available on the original site anymore, so I have made my copy available with this document. You can download the scripts from the link at the bottom of the document.

    To start with, I asked the Windows administrators to install the Lotus/Domino SNMP agent on all servers, and after that I got hold of a copy of the domino.mib file, which is located in C:\system32.

    Next I listed all the interesting parameters from the domino.mib file and started querying a set of test servers to find out whether a value was being returned or not. Following is the OID list and what each OID means. Most of these checks are only valid on the active node, which is important to know if the Domino servers are in an HA cluster (active-standby pair). If there is only one Domino server, then these checks will always apply.

    Monitoring Checks on Active Node

    Nagios Service Check        | OID                             | Description | Thresholds (w = warning, c = critical)
    dead-mail                   | enterprises.334.72.1.1.4.1.0    | Number of dead (undeliverable) mail messages | w 80, c 100
    routing-failures            | enterprises.334.72.1.1.4.3.0    | Total number of routing failures since the server started | w 100, c 150
    pending-routing             | enterprises.334.72.1.1.4.6.0    | Number of mail messages waiting to be routed | w 10, c 20
    pending-local               | enterprises.334.72.1.1.4.7.0    | Number of pending mail messages awaiting local delivery | w 10, c 20
    average-hops                | enterprises.334.72.1.1.4.10.0   | Average number of server hops for mail delivery | w 10, c 15
    max-mail-delivery-time      | enterprises.334.72.1.1.4.12.0   | Maximum time for mail delivery in seconds | w 300, c 600
    router-unable-to-transfer   | enterprises.334.72.1.1.4.19.0   | Number of mail messages the router was unable to transfer | w 80, c 100
    mail-held-in-queue          | enterprises.334.72.1.1.4.21.0   | Number of mail messages in the message queue on hold | w 80, c 100
    mails-pending               | enterprises.334.72.1.1.4.31.0   | Number of mail messages pending | w 80, c 100
    mailbox-dns-pending         | enterprises.334.72.1.1.4.34.0   | Number of mail messages in MAIL.BOX waiting for DNS | w 10, c 20
    databases-in-cache          | enterprises.334.72.1.1.10.15.0  | The number of databases currently in the cache. Administrators should monitor this number to see whether it approaches the NSF_DBCACHE_MAXENTRIES setting; if it does, the cache is under pressure, and if this situation occurs frequently the administrator should increase the NSF_DBCACHE_MAXENTRIES setting | w 80, c 100
    database-cache-hits         | enterprises.334.72.1.1.10.17.0  | The number of times an lnDBCacheInitialDbOpen is satisfied by finding a database in the cache. A high hits-to-opens ratio indicates the database cache is working effectively, since most users are opening databases in the cache without the wait of an initial (non-cache) open. If the ratio is low (more users are waiting for databases not in the cache to open), the administrator can increase NSF_DBCACHE_MAXENTRIES |
    database-cache-overcrowding | enterprises.334.72.1.1.10.21.0  | The number of times a database is not placed into the cache when it is closed because lnDBCacheCurrentEntries equals or exceeds lnDBCacheMaxEntries*1.5. This number should stay low; if it begins to rise, you should increase the NSF_DbCache_Maxentries setting | w 10, c 20
    replicator-status           | enterprises.334.72.1.1.6.1.3.0  | Status of the Replicator task |
    router-status               | enterprises.334.72.1.1.6.1.4.0  | Status of the Router task |
    replication-failed          | enterprises.334.72.1.1.5.4.0    | Number of replications that generated an error |
    server-availability-index   | enterprises.334.72.1.1.6.3.19.0 | Current percentage index of the server's availability. Value range is 0-100: zero (0) indicates no available resources; 100 indicates the server is completely available |
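
    As an illustration, one of the checks above can be wired into Nagios with object definitions along these lines (the command name, hostgroup and SNMP community string are placeholders; check_snmp is part of the standard Nagios plugins):

```
define command {
    command_name    check_domino_snmp
    command_line    $USER1$/check_snmp -H $HOSTADDRESS$ -C public -o $ARG1$ -w $ARG2$ -c $ARG3$
}

define service {
    use                     generic-service
    hostgroup_name          domino-servers
    service_description     dead-mail
    check_command           check_domino_snmp!enterprises.334.72.1.1.4.1.0!80!100
}
```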

    Interesting OIDs to plot for Trend Analysis

    enterprises.334.72.1.1.4.2.0     | Number of messages received by the router
    enterprises.334.72.1.1.4.4.0     | Total number of mail messages routed since the server started
    enterprises.334.72.1.1.4.5.0     | Number of messages the router attempted to transfer
    enterprises.334.72.1.1.4.8.0     | Notes server's mail domain
    enterprises.334.72.1.1.4.11.0    | Average size of mail messages delivered in bytes
    enterprises.334.72.1.1.4.13.0    | Maximum number of server hops for mail delivery
    enterprises.334.72.1.1.4.14.0    | Maximum size of mail delivered in bytes
    enterprises.334.72.1.1.4.15.0    | Minimum time for mail delivery in seconds
    enterprises.334.72.1.1.4.16.0    | Minimum number of server hops for mail delivery
    enterprises.334.72.1.1.4.17.0    | Minimum size of mail delivered in bytes
    enterprises.334.72.1.1.4.18.0    | Total mail transferred in kilobytes
    enterprises.334.72.1.1.4.20.0    | Count of actual mail items delivered (may differ from the delivered count, which counts individual messages)
    enterprises.334.72.1.1.4.26.0    | Peak transfer rate
    enterprises.334.72.1.1.4.27.0    | Peak number of messages transferred
    enterprises.334.72.1.1.4.32.0    | Number of mail messages moved from MAIL.BOX via SMTP
    enterprises.334.72.1.1.15.1.24.0 | Cache command hit rate
    enterprises.334.72.1.1.15.1.26.0 | Cache database hit rate
    enterprises.334.72.1.1.11.6.0    | Hourly access denials
    enterprises.334.72.1.1.15.1.13.0 | Requests per 5 minutes
    enterprises.334.72.1.1.11.9.0    | Unsuccessful runs

    Files and Scripts

  • Apache LDAP Authentication

    The mod_auth_ldap module allows an LDAP directory to be used as the store for the HTTP Basic authentication database. This document describes an example implementation on Red Hat Enterprise Linux 4, but it applies to any Linux distribution in general, provided the mod_auth_ldap module is loaded.

    I have used Microsoft Active Directory as my LDAP server, as that is what I had at the time of writing, but any LDAP server will do.

    Setting Up the Webserver

    On Red Hat Enterprise Linux 4, when the httpd package is installed, mod_auth_ldap gets installed with it. By default, the Red Hat Enterprise Linux 4 httpd.conf file does not allow the overriding of any settings by .htaccess files. The following are the default settings:

    <Directory />
        Options FollowSymLinks
        AllowOverride None
    </Directory>
    
    <Directory "/var/www/html">
        Options Indexes FollowSymLinks
        AllowOverride None
        .......
        .......
    </Directory>
    

    I changed the settings for /var/www/html to:

    <Directory "/var/www/html">
        Options Indexes FollowSymLinks
        AllowOverride AuthConfig
        .......
        .......
    </Directory>
    

    This enabled me to put the required authentication directives in .htaccess files. You need to have administrative access to the web server, or get this done by your administrator.

    Next we need to find out whether the mod_auth_ldap module is being loaded. This can be done as follows on RHEL 4:

    $ grep mod_auth_ldap /etc/httpd/conf/httpd.conf
    LoadModule auth_ldap_module modules/mod_auth_ldap.so
    

    Test Setup

    I have created a directory test_auth in the DocumentRoot, to which I want to restrict access using LDAP authentication. The following commands will create the required directory and an index.html file in it.

    $ sudo mkdir /var/www/html/test_auth
    $ sudo tee /var/www/html/test_auth/index.html >/dev/null <<__EOF__
    <html>
    <head><title>Test page</title></head>
    <body><h1>Test page</h1><p>Hello World!</p></body>
    </html>
    __EOF__
    

    Now we can create an .htaccess file containing the required authentication directives:

    $ sudo vi /var/www/html/test_auth/.htaccess
    AuthType Basic
    AuthName "Restricted Access"
    AuthLDAPEnabled on
    AuthLDAPURL 'ldap://msadc01.unixclinic.net:389/ou=Users and Machines,ou=IN,dc=unixclinic,dc=net?sAMAccountName?sub?(memberOf=cn=Infrastructure Team,ou=Groups,ou=Users and Machines,ou=IN,dc=unixclinic,dc=net)'
    AuthLDAPBindDN "apache_ldap_query@unixclinic.net"
    AuthLDAPBindPassword pA554Auth
    require valid-user
    

    The AuthLDAPURL specifies the LDAP server, the base DN, the attribute to use in the search, and the extra search filter to use; it must be a single line without any line breaks. The URL above restricts access to the members of the "Infrastructure Team" group. AuthLDAPBindDN is an optional DN (Distinguished Name) to use when binding to the LDAP server; if it is not specified, mod_auth_ldap will use an anonymous bind. Most professionally set up LDAP servers (and Active Directory servers) do not allow anonymous binds against the directory.
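
    Broken into its components, the AuthLDAPURL above reads as follows (the trailing pieces are separated by ? characters, per the LDAP URL format):

```
ldap://msadc01.unixclinic.net:389/                  server and port
ou=Users and Machines,ou=IN,dc=unixclinic,dc=net    base DN for the search
?sAMAccountName                                     attribute matched against the supplied username
?sub                                                search scope (entire subtree under the base DN)
?(memberOf=...)                                     extra filter restricting matches to group members
```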

    Resources

  • Using SNAT for Highly Available Services

    Problem

    Often network-based services are restricted to a particular source IP address. A common example is SNMP: a good system/network administrator will restrict access to the SNMP daemon to a particular host, usually a central management server. Sometimes these central management servers are an HA pair. Under these circumstances, a service address can be used for the active node; this service address has access to the desired network resource. Heartbeat will usually start this service IP address as a resource on the active node. This results in the active node taking over the IP address, which enables the node to listen on that address for incoming requests. But this still does not solve the problem of the active node attempting to access a network resource, because all packets originating from this node will bear the primary IP address of the node and not the secondary or aliased address(es).

    Solution

    For such cases, SNAT (Source Network Address Translation) can be useful. Using SNAT we can ask the kernel to change the source IP address on all outgoing packets. But the IP address we want on our packets must be present either as a primary, secondary or aliased IP address. This can be checked as follows:

    # ip addr show bond0
    6: bond0:  mtu 1500 qdisc noqueue
        link/ether 00:18:fe:89:df:d8 brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.3/16 brd 192.168.255.255 scope global bond0
        inet 192.168.1.2/16 brd 192.168.255.255 scope global secondary bond0:0
        inet 192.168.1.1/16 brd 192.168.255.255 scope global secondary bond0:1
        inet6 fe80::218:feff:fe89:dfd8/64 scope link
           valid_lft forever preferred_lft forever

    or

    # ifconfig bond0
    bond0     Link encap:Ethernet  HWaddr 00:18:FE:89:DF:D8
              inet addr:192.168.1.3  Bcast:192.168.255.255  Mask:255.255.0.0
              inet6 addr: fe80::218:feff:fe89:dfd8/64 Scope:Link
              UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
              RX packets:53589964 errors:0 dropped:0 overruns:0 frame:0
              TX packets:25857501 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:40502210697 (37.7 GiB)  TX bytes:4148482317 (3.8 GiB)

    Instead of specifying an interface, all interfaces can also be viewed using:

    # ip addr show
    1: lo:  mtu 16436 qdisc noqueue
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0:  mtu 1500 qdisc pfifo_fast master bond0 qlen 1000
        link/ether 00:18:fe:89:df:d8 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::218:feff:fe89:dfd8/64 scope link
           valid_lft forever preferred_lft forever
    3: eth1:  mtu 1500 qdisc pfifo_fast master bond0 qlen 1000
        link/ether 00:18:fe:89:df:d8 brd ff:ff:ff:ff:ff:ff
    4: sit0:  mtu 1480 qdisc noop
        link/sit 0.0.0.0 brd 0.0.0.0
    6: bond0:  mtu 1500 qdisc noqueue
        link/ether 00:18:fe:89:df:d8 brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.3/16 brd 192.168.255.255 scope global bond0
        inet 192.168.1.2/16 brd 192.168.255.255 scope global secondary bond0:0
        inet 192.168.1.1/16 brd 192.168.255.255 scope global secondary bond0:1
        inet6 fe80::218:feff:fe89:dfd8/64 scope link
           valid_lft forever preferred_lft forever

    or

    # ifconfig
    bond0     Link encap:Ethernet  HWaddr 00:18:FE:89:DF:D8
              inet addr:192.168.1.3  Bcast:192.168.255.255  Mask:255.255.0.0
              inet6 addr: fe80::218:feff:fe89:dfd8/64 Scope:Link
              UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
              RX packets:53587551 errors:0 dropped:0 overruns:0 frame:0
              TX packets:25855600 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:40501872867 (37.7 GiB)  TX bytes:4148267377 (3.8 GiB)
    
    bond0:0   Link encap:Ethernet  HWaddr 00:18:FE:89:DF:D8
              inet addr:192.168.1.2  Bcast:192.168.255.255  Mask:255.255.0.0
              UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
    
    bond0:1   Link encap:Ethernet  HWaddr 00:18:FE:89:DF:D8
              inet addr:192.168.1.1  Bcast:192.168.255.255  Mask:255.255.0.0
              UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
    
    eth0      Link encap:Ethernet  HWaddr 00:18:FE:89:DF:D8
              inet6 addr: fe80::218:feff:fe89:dfd8/64 Scope:Link
              UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
              RX packets:53587551 errors:0 dropped:0 overruns:0 frame:0
              TX packets:25855600 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:40501872867 (37.7 GiB)  TX bytes:4148267377 (3.8 GiB)
              Interrupt:185
    
    eth1      Link encap:Ethernet  HWaddr 00:18:FE:89:DF:D8
              UP BROADCAST SLAVE MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
              Interrupt:193
    
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:536101 errors:0 dropped:0 overruns:0 frame:0
              TX packets:536101 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:59243777 (56.4 MiB)  TX bytes:59243777 (56.4 MiB)

    My NICs are bonded and hence bond0 is the interface I use.

    Setting Up SNAT

    On Linux, iptables can be used to set up SNAT.
    To change the source IP address of all packets going out of the box to anywhere, the following rule can be used:

    $ sudo /sbin/iptables -t nat -A POSTROUTING -o bond0 -j SNAT --to-source 192.168.1.1

    The result can be seen as follows:

    $ sudo /sbin/iptables -t nat -L
    Chain PREROUTING (policy ACCEPT)
    target     prot opt source               destination
    
    Chain POSTROUTING (policy ACCEPT)
    target     prot opt source               destination
    SNAT       all  --  anywhere             anywhere            to:192.168.1.1
    
    Chain OUTPUT (policy ACCEPT)
    target     prot opt source               destination

    I normally restrict SNAT to selected services and destination IP addresses only. The following three iptables commands translate the source address to 192.168.1.1 for, respectively: all packets destined for 10.199.65.191, only ICMP packets destined for 192.168.2.4, and all packets destined for the network 192.168.1.0/24:

    $ sudo /sbin/iptables -t nat -A POSTROUTING -d 10.199.65.191 -o bond0 -j SNAT --to-source 192.168.1.1
    $ sudo /sbin/iptables -t nat -A POSTROUTING -d 192.168.2.4 -p ICMP -o bond0 -j SNAT --to-source 192.168.1.1
    $ sudo /sbin/iptables -t nat -A POSTROUTING -d 192.168.1.0/24 -o bond0 -j SNAT --to-source 192.168.1.1

    The result of all these commands can be seen as:

    $ sudo /sbin/iptables -t nat -L
    Chain PREROUTING (policy ACCEPT)
    target     prot opt source               destination
    
    Chain POSTROUTING (policy ACCEPT)
    target     prot opt source               destination
    SNAT       all  --  anywhere             anywhere            to:192.168.1.1
    SNAT       all  --  anywhere             10.199.65.191       to:192.168.1.1
    SNAT       icmp --  anywhere             192.168.2.4         to:192.168.1.1
    SNAT       all  --  anywhere             192.168.1.0/24      to:192.168.1.1
    
    Chain OUTPUT (policy ACCEPT)
    target     prot opt source               destination

    Setting Up Heartbeat and IPTables for SNAT

    The /etc/ha.d/haresources file in heartbeat can be set up to hold the desired IP address as a resource and associate it with a script that can start/stop/restart these IPTables rules.

    $ sudo vi /etc/ha.d/haresources
    node01 192.168.1.1 iptables

    Red Hat and Fedora ship such a script, located at /etc/init.d/iptables. This script reads the file /etc/sysconfig/iptables, which contains the rules in iptables-save format. I created a similar script for Debian and derivative distributions, which reads the rules from the /etc/iptables file. The script is given below:

    #! /bin/sh
    # Script      - iptables
    # Description - Reads IPTables rules from a file in iptables-save format.
    # Author      - Ajitabh Pandey 
    #
    PATH=/usr/sbin:/usr/bin:/sbin:/bin
    DESC="IPTables Configuration Script"
    NAME=iptables
    DAEMON=/sbin/$NAME
    SCRIPTNAME=/etc/init.d/$NAME
    
    # Exit if the package is not installed
    [ -x "$DAEMON" ] || exit 0
    
    # Load the VERBOSE setting and other rcS variables
    [ -f /etc/default/rcS ] && . /etc/default/rcS
    
    if [ ! -e /etc/iptables ]
    then
            echo "no valid iptables config file found!"
            exit 1
    fi
    
    case "$1" in
      start)
            echo "Starting $DESC:" "$NAME"
            /sbin/iptables-restore < /etc/iptables
            ;;
      stop)
            echo "Stopping $DESC:" "$NAME"
            $DAEMON -F -t nat
            $DAEMON -F
            ;;
      restart|force-reload)
            echo "Restarting $DESC:" "$NAME"
            $DAEMON -F -t nat
            $DAEMON -F
            /sbin/iptables-restore < /etc/iptables
            ;;
      *)
            echo "Usage: $SCRIPTNAME {start|stop|restart|force-reload}" >&2
            exit 3
            ;;
    esac

    The following is a sample iptables rules file in iptables-save format.

    *nat
    :PREROUTING ACCEPT [53:8294]
    :POSTROUTING ACCEPT [55:11107]
    :OUTPUT ACCEPT [55:11107]
    
    # Allow all ICMP packets to be SNATed
    -A POSTROUTING  -p ICMP -o bond0 -j SNAT --to-source 192.168.0.1
    
    # Allow packets destined for SNMP port (161) on local network to be SNATed
    -A POSTROUTING -d 192.168.0.0/24 -p tcp -m tcp --dport snmp -o bond0 -j SNAT --to-source 192.168.0.1
    -A POSTROUTING -d 192.168.0.0/24 -p udp -m udp --dport snmp -o bond0 -j SNAT --to-source 192.168.0.1
    
    # These are for the time servers on the internet (NTP uses UDP port 123)
    -A POSTROUTING -p udp -m udp --dport ntp -o bond0 -j SNAT --to-source 192.168.0.1
    COMMIT
    
    *filter
    :INPUT ACCEPT [0:0]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [144:12748]
    COMMIT
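    On a running router, the live rules can be snapshotted into exactly this format with `iptables-save` (as root), e.g. `sudo sh -c 'iptables-save > /etc/iptables'`. When the file is hand-edited instead, a common mistake is a table block without its closing COMMIT. A minimal pre-flight check is to count `*table` headers against COMMIT lines; the sketch below runs against a throwaway sample file rather than the real /etc/iptables:

```shell
# Sanity-check an iptables-save format file: every *table header
# must be matched by a COMMIT line. Uses a throwaway sample file.
f=$(mktemp)
cat > "$f" <<'EOF'
*nat
-A POSTROUTING -o bond0 -j SNAT --to-source 192.168.0.1
COMMIT
*filter
COMMIT
EOF

tables=$(grep -c '^\*' "$f")
commits=$(grep -c '^COMMIT' "$f")
if [ "$tables" -eq "$commits" ]; then
    echo "OK: $tables table(s), each committed"
else
    echo "ERROR: $tables table header(s) but $commits COMMIT line(s)" >&2
fi
rm "$f"
```

    This is only a cheap structural check; iptables-restore itself does the authoritative validation when the script loads the file.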
  • Migrating Users in Linux

    When a server is to be replaced, it is often a requirement to migrate all user accounts as-is to the new server, which means the passwords on these accounts must not change either.

    Before the account migration, a brief freeze should be imposed on the server: no new accounts are to be created until the migration is completed and tested.

    To do the migration I used the “pwunconv” utility to merge the passwd and shadow files on the source server, then copied the merged file across to the new server.

    On the new server I also ran “pwunconv” to merge its passwd and shadow files, and then appended the file copied from the old server to it.
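    What pwunconv effectively does is copy each password hash from field 2 of /etc/shadow back into field 2 of /etc/passwd, replacing the 'x' placeholder. The merge can be illustrated with awk on throwaway sample files; the account name and hash below are made up:

```shell
# Illustration only: pwunconv performs this merge on the real
# /etc/passwd and /etc/shadow. Here we mimic it on sample files.
tmp=$(mktemp -d)

# Sample passwd entry: password field is 'x', pointing to shadow
cat > "$tmp/passwd" <<'EOF'
alice:x:1001:1001:Alice:/home/alice:/bin/bash
EOF

# Matching shadow entry carrying the actual password hash
cat > "$tmp/shadow" <<'EOF'
alice:$6$examplehash$abcdef:19000:0:99999:7:::
EOF

# Replace the 'x' in passwd field 2 with the hash from shadow field 2
awk -F: 'NR==FNR { hash[$1] = $2; next }
         { $2 = hash[$1]; print }' OFS=: "$tmp/shadow" "$tmp/passwd"
# -> alice:$6$examplehash$abcdef:1001:1001:Alice:/home/alice:/bin/bash

rm -r "$tmp"
```

    pwconv later performs the inverse split, moving the hashes back out of /etc/passwd into /etc/shadow.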
    On the old server

    sysadmin@old-server:$ sudo /usr/sbin/pwunconv
    sysadmin@old-server:$ cp /etc/passwd newpasswd
    sysadmin@old-server:$ scp newpasswd new-server:.
    

    On the new server

    • Remove the system accounts of the old-server, as the new-server already has its own system accounts.
      ajitabhp@new-server:$ vi newpasswd
      ......
      [remove the system accounts]
      
    • Merge the passwd and shadow files, then append newpasswd to the /etc/passwd file
      ajitabhp@new-server:$ sudo /usr/sbin/pwunconv
      ajitabhp@new-server:$ sudo sh -c 'cat newpasswd >> /etc/passwd'
      
    • Change the shell of all users who have /sbin/nologin to /bin/false. This step was required because Debian does not have a /sbin/nologin shell; it has /bin/false instead.
      ajitabhp@new-server:$ sudo sed -i 's/\/sbin\/nologin/\/bin\/false/' /etc/passwd
      
    • Finally, split the /etc/passwd file back into /etc/passwd and /etc/shadow, run a consistency check, and then sort the entries by UID
      ajitabhp@new-server:$ sudo /usr/sbin/pwconv
      ajitabhp@new-server:$ sudo /usr/sbin/pwck
      ajitabhp@new-server:$ sudo /usr/sbin/pwck -s
      

    The consistency check told me that the home directories for all the accounts I had migrated from the old-server did not exist. So, I ran this one-liner to create the home directories listed in /etc/passwd, if they don't already exist:

    ajitabhp@new-server:~$ grep "/home" /etc/passwd|cut -d: -f1,6|sed -e 's/:/ /'|while read -r user directory;do if [ ! -d "$directory" ]; then sudo mkdir "$directory";sudo chown "$user":users "$directory";sudo chmod 755 "$directory";fi;done
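    The same loop can be rehearsed without sudo against a scratch passwd file and a temporary directory standing in for /home before pointing it at the real system; the two sample accounts below are hypothetical:

```shell
# Dry run of the home-directory loop against a sample passwd file
# and a temporary base directory instead of the real /home.
base=$(mktemp -d)
passwd_file="$base/passwd"
cat > "$passwd_file" <<EOF
bob:x:1002:1002::$base/home/bob:/bin/bash
carol:x:1003:1003::$base/home/carol:/bin/bash
EOF

# Field 1 is the user, field 6 the home directory
grep "/home" "$passwd_file" | cut -d: -f1,6 | sed -e 's/:/ /' |
while read -r user directory; do
    if [ ! -d "$directory" ]; then
        mkdir -p "$directory"    # sudo mkdir on the real system
        chmod 755 "$directory"   # plus sudo chown "$user":users there
    fi
done

ls "$base/home"    # shows bob and carol
rm -r "$base"
```

    The only difference from the real run is `mkdir -p`, needed here because the scratch `home` parent does not exist yet.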
    

    Another quick run of /usr/sbin/pwck gave the following:

    ajitabhp@new-server:~$ sudo /usr/sbin/pwck
    user news: directory /var/spool/news does not exist
    user uucp: directory /var/spool/uucp does not exist
    user www-data: directory /var/www does not exist
    user list: directory /var/list does not exist
    user irc: directory /var/run/ircd does not exist
    user gnats: directory /var/lib/gnats does not exist
    user nobody: directory /nonexistent does not exist
    pwck: no changes
    

    This is fine, as these are all system accounts.