The available timezones on Solaris are listed under /usr/share/lib/zoneinfo/.
To find out the current timezone of the system:
grep '^TZ' /etc/TIMEZONE
To change the timezone, modify the TZ line in /etc/TIMEZONE and then reboot the system.
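For example, a minimal sketch assuming the system currently runs US/Pacific and should be moved to US/Eastern (both zone names are only illustrations; any entry under /usr/share/lib/zoneinfo/ is valid):

grep '^TZ' /etc/TIMEZONE
TZ=US/Pacific
vi /etc/TIMEZONE     (change the line to read TZ=US/Eastern)
init 6               (reboot so that every process picks up the new zone)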
Use ifconfig to change the IP address immediately; this change lasts until the next reboot. For example, with a hypothetical interface name and address:
ifconfig hme0 192.168.1.10 netmask 255.255.255.0 up
If the new IP address calls for a different gateway, change it using the route command:
route delete default <old-gateway>
route add default <new-gateway>
To make the change persistent across reboots:
Change the host's IP address in /etc/hosts
Change the host's subnet mask in /etc/netmasks
Change the host’s default gateway in /etc/defaultrouter
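As an illustration, with the hypothetical address used above, the three files would carry entries along these lines:

/etc/hosts:           192.168.1.10   myhost
/etc/netmasks:        192.168.1.0    255.255.255.0
/etc/defaultrouter:   192.168.1.1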
Sometime back I came across VirtualBox, an excellent replacement for the commercial VMware. I installed a few OSes in VirtualBox and found the performance pretty good. In my opinion the performance of Windows virtual machines is much better in VirtualBox, perhaps because of the installed guest additions.
Immediately a thought came to my mind: what if I want to create an internetwork of VirtualBox virtual machines and QEMU virtual machines?
The default NAT networking in VirtualBox does not serve this purpose. The only useful networking option I found was the “Host Interface”. I went through several documents on the VirtualBox website and realised that I needed to use bridging in order to do this. All the documents explained beautifully how to accomplish bridging with the Ethernet interface on the host machine, but this is not what I wanted. I wanted the virtual machines to be in an isolated network, connected together using a VDE switch as I was doing with QEMU. Searching on Google for quite some time did not help either, so I decided to give it a try.
Following is what I did to accomplish this.
As suggested on the VirtualBox website, in order to use bridging I needed the uml-utilities and bridge-utils packages:
$ sudo apt-get install uml-utilities bridge-utils
Then I loaded the tun/tap module, as well as the kqemu module for QEMU:
$ sudo modprobe kqemu
$ sudo modprobe tun
Next I wanted to make the “tun” device writable by ordinary users, so I put myself into the “vboxusers” group and gave the group write permission on the device:
$ sudo chgrp vboxusers /dev/net/tun
$ sudo chmod g+rw /dev/net/tun
Then I created a couple of persistent tap interfaces owned by an ordinary user:
$ sudo tunctl -t tap0 -u ajitabhp
$ sudo tunctl -t tap1 -u ajitabhp
Next I started the VDE switch connected to the tap0 interface and made its control file world-writable:
$ vde_switch -tap tap0 -daemon
$ chmod 666 /tmp/vde.ctl
A bridge then needs to be created:
$ sudo brctl addbr br0
All interfaces connected to the network segments being bridged need to be put into promiscuous mode:
$ sudo ifconfig tap0 0.0.0.0 promisc
Add the tap0 interface to the bridge and start the dnsmasq server to listen on the br0 interface:
$ sudo brctl addif br0 tap0
$ sudo /usr/sbin/dnsmasq --log-queries --user=named --dhcp-leasefile=/var/lib/misc/vbox-dhcpd.leases --dhcp-range=10.111.111.100,10.111.111.199,255.255.255.0,10.255.255.255,8h --interface=br0 --domain=virtual.lan --addn-hosts=/etc/my-host
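The file passed via --addn-hosts is simply an additional hosts file read by dnsmasq; a hypothetical /etc/my-host giving the virtual machines fixed names could look like this:

10.111.111.1   netbsd.virtual.lan netbsd
10.111.111.2   winxp.virtual.lan winxp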
Next, configure the br0 interface with an IP address. This br0 interface will work as the gateway for the virtual machines:
$ sudo ifconfig br0 10.111.111.254 netmask 255.255.255.0 up
Now add the tap1 interface to the bridge and bring it up.
$ sudo brctl addif br0 tap1
$ sudo ifconfig tap1 up
Enable IP forwarding on the host machine and set up masquerading:
$ sudo su -
# echo "1" > /proc/sys/net/ipv4/ip_forward
# exit
$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
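The echo above only enables forwarding until the next reboot; to make it persistent on a Debian-style host like this one, the usual approach is via sysctl:

$ echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p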
Following is how the various interfaces look after finishing this setup.
$ brctl show br0
bridge name bridge id STP enabled interfaces
br0 8000.ae7729b64a80 no tap0
tap1
$ ifconfig
br0 Link encap:Ethernet HWaddr 6E:EA:39:83:8C:D4
inet addr:10.111.111.254 Bcast:10.111.111.255 Mask:255.255.255.0
inet6 addr: fe80::6cea:39ff:fe83:8cd4/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:106 errors:0 dropped:0 overruns:0 frame:0
TX packets:82 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:10696 (10.4 KiB) TX bytes:10165 (9.9 KiB)
eth0 Link encap:Ethernet HWaddr 00:12:F0:28:6E:C3
inet addr:192.168.0.176 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::212:f0ff:fe28:6ec3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:426 errors:32 dropped:32 overruns:0 frame:0
TX packets:327 errors:0 dropped:0 overruns:0 carrier:1
collisions:0 txqueuelen:1000
RX bytes:356289115 (339.7 MiB) TX bytes:18085137 (17.2 MiB)
Interrupt:11 Base address:0xa000 Memory:d0200000-d0200fff
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:7425 errors:0 dropped:0 overruns:0 frame:0
TX packets:7425 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:610482 (596.1 KiB) TX bytes:610482 (596.1 KiB)
tap0 Link encap:Ethernet HWaddr 6E:EA:39:83:8C:D4
inet6 addr: fe80::6cea:39ff:fe83:8cd4/64 Scope:Link
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:81 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:10808 (10.5 KiB)
tap1 Link encap:Ethernet HWaddr B2:03:C1:BE:1E:4E
inet6 addr: fe80::b003:c1ff:febe:1e4e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:106 errors:0 dropped:0 overruns:0 frame:0
TX packets:46 errors:0 dropped:6 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:12180 (11.8 KiB) TX bytes:4978 (4.8 KiB)
$ sudo iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE 0 -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Now start the QEMU virtual machines:
$ vdeqemu -net vde,vlan=0 -net nic,vlan=0,macaddr=52:54:00:00:EE:01 -m 256 -hda ~/qemu/netbsd.img 1>~/logs/qemu-logs/netbsd.log 2>~/logs/qemu-logs/netbsd.error &
In VirtualBox, attach the network interface of each virtual machine to “Host Interface” and then select tap1 as the interface name.
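Once a QEMU guest and a VirtualBox guest are both up, connectivity can be verified from inside either guest; the addresses below are hypothetical leases from the DHCP range configured earlier:

guest$ ping -c 3 10.111.111.254    (the br0 gateway on the host)
guest$ ping -c 3 10.111.111.101    (the other virtual machine)
guest$ ping -c 3 192.168.0.1       (a machine on the outer LAN, through masquerading)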
Yum (Yellowdog Updater, Modified) is a popular high-level package management tool for RPM (Red Hat Package Manager) packages. RPM packages are used by several popular distributions based on Red Hat Linux, such as Red Hat Enterprise Linux (RHEL), CentOS, Fedora Core, SuSE, openSUSE, Mandriva, etc. CentOS is built from Red Hat Enterprise sources and hence can share the packages with RHEL and Fedora Core. SuSE and Mandrake RPMs have a slightly different format; as a result they are somewhat incompatible with the rest of the distributions.
Irrespective of the differences in the RPM file format, each of these distributions is capable of maintaining a yum repository for package management. In my experience, infrastructures based on Red Hat tend to have a mix of RHEL, Fedora Core and CentOS servers, and the RPMs can be shared among these versions with a little care in keeping the dependencies in mind.
With the release of Red Hat Enterprise Linux 5, Red Hat has formally dumped the ancient package manager up2date and started using yum as the official package manager. Fedora and CentOS had been using yum for quite long anyway. Even with yum there is a difference in the repository structure between older and newer versions. The newer tool to maintain the repository metadata is called ‘createrepo’ and the repository format it creates is repomd. For creating and maintaining an old-style yum repository, a tool called yum-arch is used; yum-arch is now deprecated. The following table briefly presents each tool and the type of repository it supports:
| Tool | Package | Provides |
|---|---|---|
| genbase | apt | apt support |
| yum-arch | yum | yum support |
| createrepo | createrepo | repomd support (new yum and new apt) |
To the best of my knowledge, the up2date program that ships with RHEL 4 Update 5 does not support the newer yum repositories created using createrepo. So if you have such servers in your setup, you are better off creating both the old-style repo using the yum-arch command and the new-style repo using the createrepo command. Both sets of headers can co-exist: createrepo stores the new headers in a directory called repodata, whereas yum-arch stores the old headers in a directory called headers.
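For example, to carry both metadata formats side by side in the tree used later in this document:

$ yum-arch /var/www/html/yum/CentOS/5/i386/os/      (writes the old headers/ directory)
$ createrepo /var/www/html/yum/CentOS/5/i386/os/    (writes the new repodata/ directory)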
I have had the opportunity to set up local yum repositories at my workplace and also for several clients.
This document describes a repository server on CentOS 5 for CentOS 5 clients. In between I have also mentioned some points worth noting if you have RHEL 4 or older servers in your setup, or if you want to use an RHEL 4 server as the repository server.
It's not necessary to have a CentOS, Fedora Core or RHEL server as the repository server. I have very recently set up a Debian Etch server to do the same task; starting from Debian Etch and Ubuntu Feisty, the createrepo command is available as a Deb package.
In order to create a yum repository server for your organisation, the following software is required: a web server (httpd) and the createrepo package, which is available with CentOS, Fedora Core and RHEL 5. For RHEL 4 it needs to be installed from external sources; one good source is the Dag Wieers RPM repository. On CentOS, do the following to install the dependencies:
# yum install httpd createrepo
On an RHEL 4 (or RHEL 3) server we need to download the createrepo, python-urlgrabber, sqlite, python-sqlite, python-elementtree and yum packages from the Dag Wieers repository, as they are not available in the official repository. To install these packages on RHEL 4, the following commands can be used:
$ sudo up2date -i httpd
$ wget http://dag.wieers.com/rpm/packages/python-urlgrabber/python-urlgrabber-2.9.7-1.2.el4.rf.noarch.rpm
$ wget http://dag.wieers.com/rpm/packages/createrepo/createrepo-0.4.6-1.el4.rf.noarch.rpm
$ wget http://dag.wieers.com/rpm/packages/yum/yum-2.4.2-0.4.el4.rf.noarch.rpm
$ wget http://dag.wieers.com/rpm/packages/python-elementtree/python-elementtree-1.2.6-7.el4.rf.x86_64.rpm
$ wget http://dag.wieers.com/rpm/packages/python-sqlite/python-sqlite-1.0.1-1.2.el4.rf.x86_64.rpm
$ wget http://dag.wieers.com/rpm/packages/sqlite/sqlite-2.8.17-1.el4.rf.x86_64.rpm
$ sudo rpm -Uvh *.rpm
$ rm -f *.rpm
On the client machines which will be configured to use this repository, we need yum and its dependencies (python-elementtree, python-sqlite and sqlite) installed. On CentOS, Fedora Core and RHEL 5 these should already be installed. On RHEL 4 and older they can be installed from the Dag Wieers RPM repository. Although up2date from RHEL 4 Update 2 onwards supports yum and apt repositories, I have seen problems with it quite often, and in any case it is abandoned by Red Hat from RHEL 5 onwards.
Following is how we can install yum on an RHEL 4 (x86_64) machine:
$ wget http://dag.wieers.com/rpm/packages/sqlite/sqlite-2.8.17-1.el4.rf.x86_64.rpm
$ wget http://dag.wieers.com/rpm/packages/python-sqlite/python-sqlite-1.0.1-1.2.el4.rf.x86_64.rpm
$ wget http://dag.wieers.com/rpm/packages/python-elementtree/python-elementtree-1.2.6-7.el4.rf.x86_64.rpm
$ wget http://dag.wieers.com/rpm/packages/yum/yum-2.4.2-0.4.el4.rf.noarch.rpm
$ sudo rpm -Uvh sqlite-2.8.17-1.el4.rf.x86_64.rpm python-sqlite-1.0.1-1.2.el4.rf.x86_64.rpm python-elementtree-1.2.6-7.el4.rf.x86_64.rpm yum-2.4.2-0.4.el4.rf.noarch.rpm
I normally create local repositories over the HTTP protocol, so I have created the directory tree under the web root:
$ mkdir -p /var/www/html/yum/CentOS/5/i386/os
$ mkdir -p /var/www/html/yum/CentOS/5/i386/extras
$ mkdir -p /var/www/html/yum/CentOS/5/i386/updates
Next copy all the RPMs from the distribution CDs as follows:
$ cd /var/www/html/yum/CentOS/5/i386/os
$ for num_cd in 1 2 3 4 5 6; do read -p "Insert CD number $num_cd and press Enter when ready.. " key_press; cp -ar /media/cdrom/* /var/www/html/yum/CentOS/5/i386/os/ && eject; done
In case you have the ISO files, you can mount them as follows and copy the files:
$ mkdir temp_mnt
$ for num in 1 2 3 4 5 6; do mount -o loop CentOS5-i386-disc$num.iso temp_mnt; cp -ra temp_mnt/* /var/www/html/yum/CentOS/5/i386/os/; umount temp_mnt; done
Since we will be generating our own headers, we need to remove the repodata directory from the copied tree. In CentOS 5 the repodata directory is in the root of the distribution CD:
$ rm -rf /var/www/html/yum/CentOS/5/i386/os/repodata
Please note that on RHEL 5 CDs there are multiple repodata directories, one per channel; they can be removed as follows:
$ rm -rf /var/www/html/yum/RHEL/5/i386/os/Cluster/repodata
$ rm -rf /var/www/html/yum/RHEL/5/i386/os/ClusterStorage/repodata
$ rm -rf /var/www/html/yum/RHEL/5/i386/os/Server/repodata
$ rm -rf /var/www/html/yum/RHEL/5/i386/os/VT/repodata
The reason we need to remove this repodata directory and generate our own is that we copied this tree from CD, and hence the base URL for all the RPMs in the headers is stored as a media: URL.
createrepo is a Bourne shell script which in turn just calls the underlying Python program /usr/share/createrepo/genpkgmetadata.py. To create a yum repository of all the packages from the base operating system, we execute the createrepo command on the directory which we want turned into a repository, as follows:
$ createrepo /var/www/html/yum/CentOS/5/i386/os/
The createrepo command will create all the required headers in the repodata directory. Normally we won't add any new RPM file to the base distribution, so we won't need to generate these headers again. But if for some reason we copy a new RPM file into the base repository, or replace an existing RPM file, then we need to regenerate the headers with the createrepo command.
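For instance, with a hypothetical rebuilt package dropped into the CentOS/ directory of the tree:

$ cp mypackage-1.0-2.el5.i386.rpm /var/www/html/yum/CentOS/5/i386/os/CentOS/
$ createrepo /var/www/html/yum/CentOS/5/i386/os/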
On the client machine, remove the information for the existing standard distribution repositories. The configuration for repositories will be present either in the /etc/yum.conf file or in the /etc/yum.repos.d directory. If it's the latter, remove all the files from the /etc/yum.repos.d directory; if it's the former, delete all lines defining the default distribution repositories.
The IP address for my local yum server is 192.168.23.43.
The following is an example for a CentOS yum client; change it appropriately for a Fedora Core or RHEL yum client. Usually the repositories can also be mixed, but in some cases that may lead to dependency problems.
$ sudo rm -f /etc/yum.repos.d/*
$ sudo vi /etc/yum.repos.d/centos-base.repo

[base]
name=CentOS-$releasever - Base
baseurl=http://192.168.23.43/yum/CentOS/$releasever/$basearch/os/
gpgcheck=1
gpgkey=http://192.168.23.43/yum/CentOS/$releasever/$basearch/os/RPM-GPG-KEY-CentOS-5

# released updates
#[updates]
#name=CentOS-$releasever - Updates
#baseurl=http://192.168.23.43/yum/CentOS/$releasever/$basearch/updates/
#gpgcheck=1
#gpgkey=http://192.168.23.43/yum/CentOS/$releasever/$basearch/os/RPM-GPG-KEY-CentOS-5
Don't bother about the second section containing the updates repository; we have not configured it yet and will be doing so shortly. That is why it has been commented out.
Now to start using this new base repository, do the following:
$ sudo yum clean all
$ sudo yum update
Now any package in the repository can be installed using yum install, and so on.
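For example (the package names are just illustrations; anything present in the os/ tree will work):

$ sudo yum install postfix
$ yum search dhcp
$ yum info httpd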
After the base repository has been set up, it can only be used for installing new packages. But what about the security fixes/patches released by the distribution providers for existing packages? We need an updates repository for applying such packages. There is no hard and fast rule that we must have a separate updates repository: if you prefer to have only one repository containing updated/patched versions of the RPM packages, you can do that as well. All you need to do is replace the concerned RPM file in the base repository with a patched/updated RPM file and then re-run the createrepo command to regenerate the headers. After that, a yum update on the client side will update the concerned RPM.
However, it's always advisable to keep the updated packages in a separate repository for better management, and not to touch the base repository after setup. This is the standard approach and this is what we are going to do next.
To start with we need a source from where we can get the updated packages. Choosing one of the mirrors of the distribution in question and continuously syncing the updates from it is a good idea.
I synchronise with my chosen mirrors for CentOS and Fedora Core as follows:
$ rsync rsync://mirrors.kernel.org/centos/5/updates/i386/RPMS/*.rpm /var/www/html/yum/CentOS/5/i386/updates/
$ rsync rsync://mirrors.kernel.org/centos/5/addons/i386/RPMS/*.rpm /var/www/html/yum/CentOS/5/i386/extras/
$ rsync rsync://mirrors.kernel.org/centos/5/centosplus/i386/RPMS/*.rpm /var/www/html/yum/CentOS/5/i386/extras/
$ rsync rsync://mirrors.kernel.org/centos/5/extras/i386/RPMS/*.rpm /var/www/html/yum/CentOS/5/i386/extras/
$ rsync rsync://mirrors.kernel.org/centos/5/fasttrack/i386/RPMS/*.rpm /var/www/html/yum/CentOS/5/i386/extras/
This rsync job can be run as a periodic cron job. The frequency depends on how critical updates are for you; usually once a day is good enough. After the initial sync we can convert this collection of RPMs into a repository using the createrepo utility:
$ createrepo /var/www/html/yum/CentOS/5/i386/updates/
$ createrepo /var/www/html/yum/CentOS/5/i386/extras/
Now we have the updates repository ready, so we can uncomment the commented out section in /etc/yum.repos.d/centos-base.repo file so that it now looks as follows:
$ sudo vi /etc/yum.repos.d/centos-base.repo

[base]
name=CentOS-$releasever - Base
baseurl=http://192.168.23.43/yum/CentOS/$releasever/$basearch/os/
gpgcheck=1
gpgkey=http://192.168.23.43/yum/CentOS/$releasever/$basearch/os/RPM-GPG-KEY-CentOS-5

# released updates
[updates]
name=CentOS-$releasever - Updates
baseurl=http://192.168.23.43/yum/CentOS/$releasever/$basearch/updates/
gpgcheck=1
gpgkey=http://192.168.23.43/yum/CentOS/$releasever/$basearch/os/RPM-GPG-KEY-CentOS-5

# Other CentOS repositories
[extras]
name=CentOS-$releasever - Extras
baseurl=http://192.168.23.43/yum/CentOS/$releasever/$basearch/extras/
gpgcheck=1
gpgkey=http://192.168.23.43/yum/CentOS/$releasever/$basearch/os/RPM-GPG-KEY-CentOS-5
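With the updates repository enabled, a quick sanity check on the client is to clean the cache and ask yum what it would update:

$ sudo yum clean all
$ yum check-update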
In some cases we have found that the company firewall blocks all outgoing connections, and the only connections available are HTTP, and that too via a proxy server. In such cases we have found that a program called lftp is extremely helpful. RPM packages of lftp are available from the Dag Wieers repository. After installing it we can create a cron job from the following quick script:
$ vi ~/bin/lftp_centos_repo_sync.sh

#!/bin/sh
export http_proxy=http://localproxy.local:8080
export HTTP_PROXY=$http_proxy
cd /var/www/html/yum/CentOS/5/i386/updates
lftp -c "mget http://mirrors.kernel.org/centos/5/updates/i386/RPMS/*.rpm"
createrepo /var/www/html/yum/CentOS/5/i386/updates
cd /var/www/html/yum/CentOS/5/i386/extras
lftp -c "mget http://mirrors.kernel.org/centos/5/fasttrack/i386/RPMS/*.rpm"
lftp -c "mget http://mirrors.kernel.org/centos/5/addons/i386/RPMS/*.rpm"
lftp -c "mget http://mirrors.kernel.org/centos/5/centosplus/i386/RPMS/*.rpm"
lftp -c "mget http://mirrors.kernel.org/centos/5/extras/i386/RPMS/*.rpm"
createrepo /var/www/html/yum/CentOS/5/i386/extras

$ chmod +x ~/bin/lftp_centos_repo_sync.sh
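The script can then be scheduled; a hypothetical crontab entry running it every night at 03:00 would be:

$ crontab -e
0 3 * * * $HOME/bin/lftp_centos_repo_sync.sh >/dev/null 2>&1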
Sometimes it is desirable to add a repository to the local yum configuration but restrict it to certain packages only. This can be done by adding an includepkgs line to the repository config as follows:
$ cat /etc/yum.repos.d/dag-wieers.repo

[dag]
name=Dag-CentOS-$releasever
baseurl=http://192.168.23.43/yum/RHEL/$releasever/$basearch/dag/
gpgcheck=1
gpgkey=http://192.168.23.43/yum/RHEL/$releasever/$basearch/dag/RPM-GPG-KEY.dag.txt
includepkgs=clamav clamav-devel clamav-db unrar
This will make sure that only listed packages are downloaded/updated from the local dag repository.
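A quick way to verify the effect, using the repo id defined above:

$ yum --disablerepo='*' --enablerepo=dag list available

Only the packages named in includepkgs should show up.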
Very recently I was asked to set up Nagios to monitor Lotus Notes/Domino servers. There were around 500 servers across the globe. It was an all-Windows shop, and the current monitoring was being done using GSX, HP Systems Insight Manager and IBM Director. The client wanted a comprehensive solution so that they would have a single monitoring interface to look at, and after an initial discussion they decided to go ahead with Nagios.
This document looks at monitoring Lotus Notes/Domino servers using SNMP through Nagios. I have provided some of the required OIDs and their initial warning and critical threshold values in tabular format; there are many more interesting OIDs listed in the domino.mib file. I have also attached the Nagios command definition file and service definition files at the end of the document. In order to use certain checks, some plugins are required, which can be downloaded from http://www.barbich.net/websvn/wsvn/nagios/nagios/plugins/check_lotus_state.pl.
Note – I recently found that the required plugins are not available on the original site anymore, so I have made my copy available with this document. You can download the scripts from the link at the bottom of the document.
To start with, I asked the Windows administrators to install the Lotus/Domino SNMP agent on all servers, and after that I got hold of a copy of the domino.mib file, which is located in C:\system32.
Next I listed all the interesting parameters from the domino.mib file and started querying a set of test servers to find out whether each value was being returned. Most of these checks are only valid on the active node, which is important to know if the Domino servers are in an HA cluster (active-standby pair); if there is only one Domino server, then all the checks apply.
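To spot-check a single OID by hand, a query along these lines can be used (the hostname and community string are hypothetical; "enterprises" corresponds to .1.3.6.1.4.1):

$ snmpget -v1 -c public domino01.example.com .1.3.6.1.4.1.334.72.1.1.4.1.0

The following is the OID list and what each OID means.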
Monitoring Checks on Active Node

| Nagios Service Check | OID | Description | Thresholds (w = warning, c = critical) |
|---|---|---|---|
| dead-mail | enterprises.334.72.1.1.4.1.0 | Number of dead (undeliverable) mail messages | w 80, c 100 |
| routing-failures | enterprises.334.72.1.1.4.3.0 | Total number of routing failures since the server started | w 100, c 150 |
| pending-routing | enterprises.334.72.1.1.4.6.0 | Number of mail messages waiting to be routed | w 10, c 20 |
| pending-local | enterprises.334.72.1.1.4.7.0 | Number of pending mail messages awaiting local delivery | w 10, c 20 |
| average-hops | enterprises.334.72.1.1.4.10.0 | Average number of server hops for mail delivery | w 10, c 15 |
| max-mail-delivery-time | enterprises.334.72.1.1.4.12.0 | Maximum time for mail delivery in seconds | w 300, c 600 |
| router-unable-to-transfer | enterprises.334.72.1.1.4.19.0 | Number of mail messages the router was unable to transfer | w 80, c 100 |
| mail-held-in-queue | enterprises.334.72.1.1.4.21.0 | Number of mail messages in the message queue on hold | w 80, c 100 |
| mails-pending | enterprises.334.72.1.1.4.31.0 | Number of mail messages pending | w 80, c 100 |
| mailbox-dns-pending | enterprises.334.72.1.1.4.34.0 | Number of mail messages in MAIL.BOX waiting for DNS | w 10, c 20 |
| databases-in-cache | enterprises.334.72.1.1.10.15.0 | The number of databases currently in the cache. Administrators should monitor this number to see whether it approaches the NSF_DBCACHE_MAXENTRIES setting. If it does, the cache is under pressure; if this situation occurs frequently, the administrator should increase the setting for NSF_DBCACHE_MAXENTRIES | w 80, c 100 |
| database-cache-hits | enterprises.334.72.1.1.10.17.0 | The number of times an lnDBCacheInitialDbOpen is satisfied by finding a database in the cache. A high hits-to-opens ratio indicates the database cache is working effectively, since most users are opening databases in the cache without having to wait for the usual time required by an initial (non-cache) open. If the ratio is low (in other words, more users are having to wait for databases not in the cache to open), the administrator can increase NSF_DBCACHE_MAXENTRIES | w, c |
| database-cache-overcrowding | enterprises.334.72.1.1.10.21.0 | The number of times a database is not placed into the cache when it is closed because lnDBCacheCurrentEntries equals or exceeds lnDBCacheMaxEntries*1.5. This number should stay low; if it begins to rise, you should increase the NSF_DbCache_Maxentries setting | w 10, c 20 |
| replicator-status | enterprises.334.72.1.1.6.1.3.0 | Status of the Replicator task | |
| router-status | enterprises.334.72.1.1.6.1.4.0 | Status of the Router task | |
| replication-failed | enterprises.334.72.1.1.5.4.0 | Number of replications that generated an error | |
| server-availability-index | enterprises.334.72.1.1.6.3.19.0 | Current percentage index of the server's availability, in the range 0-100: zero indicates no available resources, 100 indicates the server is completely available | |
Interesting OIDs to Plot for Trend Analysis

| OID | Description |
|---|---|
| enterprises.334.72.1.1.4.2.0 | Number of messages received by the router |
| enterprises.334.72.1.1.4.4.0 | Total number of mail messages routed since the server started |
| enterprises.334.72.1.1.4.5.0 | Number of messages the router attempted to transfer |
| enterprises.334.72.1.1.4.8.0 | Notes server's mail domain |
| enterprises.334.72.1.1.4.11.0 | Average size of mail messages delivered in bytes |
| enterprises.334.72.1.1.4.13.0 | Maximum number of server hops for mail delivery |
| enterprises.334.72.1.1.4.14.0 | Maximum size of mail delivered in bytes |
| enterprises.334.72.1.1.4.15.0 | Minimum time for mail delivery in seconds |
| enterprises.334.72.1.1.4.16.0 | Minimum number of server hops for mail delivery |
| enterprises.334.72.1.1.4.17.0 | Minimum size of mail delivered in bytes |
| enterprises.334.72.1.1.4.18.0 | Total mail transferred in kilobytes |
| enterprises.334.72.1.1.4.20.0 | Count of actual mail items delivered (may differ from the delivered count, which counts individual messages) |
| enterprises.334.72.1.1.4.26.0 | Peak transfer rate |
| enterprises.334.72.1.1.4.27.0 | Peak number of messages transferred |
| enterprises.334.72.1.1.4.32.0 | Number of mail messages moved from MAIL.BOX via SMTP |
| enterprises.334.72.1.1.15.1.24.0 | Cache command hit rate |
| enterprises.334.72.1.1.15.1.26.0 | Cache database hit rate |
| enterprises.334.72.1.1.11.6.0 | Hourly access denials |
| enterprises.334.72.1.1.15.1.13.0 | Requests per 5 minutes |
| enterprises.334.72.1.1.11.9.0 | Unsuccessful runs |
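The attached definition files are referenced above; since they may not always be to hand, here is a minimal sketch of how one such check could be wired up using the stock check_snmp plugin. The host name domino01, the community string public and the thresholds are placeholders:

define command{
        command_name    check_domino_snmp
        command_line    $USER1$/check_snmp -H $HOSTADDRESS$ -C $ARG1$ -o $ARG2$ -w $ARG3$ -c $ARG4$
        }

define service{
        use                     generic-service
        host_name               domino01
        service_description     dead-mail
        check_command           check_domino_snmp!public!.1.3.6.1.4.1.334.72.1.1.4.1.0!80!100
        }

The OID here is simply the numeric form of enterprises.334.72.1.1.4.1.0 from the dead-mail row in the first table.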