Using clstat from a non-cluster node
clstat is a binary contained in the PowerHA cluster.es.client.rte LPP fileset. It uses the clinfoES daemon (via SNMP) to query and report on the status of a PowerHA cluster. The clinfoES daemon must be up and running before you start clstat.
You can run clstat from any node in a cluster to determine the status of the nodes and the health of the cluster. It is also possible to run clstat from another AIX system that is not a member of the cluster. This provides a way of monitoring your PowerHA cluster(s) from outside the actual cluster. Having a separate system monitor your cluster, in isolation, can be beneficial when problems exist with any of the nodes or with the entire cluster, as you are not relying on a healthy cluster node to run clstat.
To use clstat from a non-cluster node, you need to perform the following steps:
1. Install the PowerHA client fileset (plus requisites) on another AIX system that is not part of the HA cluster.
2. Populate /usr/es/sbin/cluster/etc/clhosts with the (persistent/boot) IP labels of each cluster node.
3. Start the clinfoES daemon (startsrc -s clinfoES).
4. Run clstat -a.
The clstat utility can also be used to monitor more than one cluster. If clstat is invoked with the -i option, it reports on all clusters it can reach.
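For example, to monitor a second cluster from the same client, you simply add the IP labels of that cluster's nodes to the same clhosts file, restart clinfoES (it reads clhosts when it starts) and invoke clstat in multi-cluster mode. A minimal sketch, assuming a second, hypothetical cluster with nodes at 10.1.2.50 and 10.1.2.51:

root@yoda / # echo "10.1.2.50" >> /usr/es/sbin/cluster/etc/clhosts
root@yoda / # echo "10.1.2.51" >> /usr/es/sbin/cluster/etc/clhosts
root@yoda / # stopsrc -s clinfoES ; startsrc -s clinfoES
root@yoda / # clstat -i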
In the example below, I have a three-node cluster that I would like to monitor (with clstat) from another, non-cluster AIX system called yoda.
The AIX host, yoda, is running AIX 7.2 TL5 SP1. The PowerHA cluster nodes are all running PowerHA 7.2.5 SP1.
root@yoda / # oslevel -s
7200-05-01-2038
I installed cluster.es.client.rte from the PowerHA v7.2.5 base installation media (an ISO file, which I downloaded from the IBM ESS website). Installing this fileset automatically “pulls in” and installs several requisite filesets, i.e. cluster.es.migcheck, cluster.es.client.utils and cluster.es.client.lib.
root@yoda /tmp/cg # loopmount -i POWERHA_SYSTEMMIRROR_V7.2.5_STD.iso -m /mnt -o "-V cdrfs -o ro"
root@yoda /tmp/cg # cd /mnt
root@yoda /mnt # ls -ltr
total 16
drwxrwxr-x    3 4000     4000           2048 Nov 24 2020 usr
drwxrwxr-x    2 4000     4000           2048 Nov 24 2020 smui_server
drwxrwxr-x    3 4000     4000           2048 Nov 24 2020 installp
-rw-rw-r--    1 4000     4000             42 Nov 24 2020 .Version
root@yoda /mnt # smit installp
...blah...
File:
cluster.es.client.rte

+-----------------------------------------------------------------------------+
                    Pre-installation Verification...
+-----------------------------------------------------------------------------+
Verifying selections...done
Verifying requisites...done
Results...

SUCCESSES
---------
  Filesets listed in this section passed pre-installation verification
  and will be installed.

  -- Filesets are listed in the order in which they will be installed.
  -- The reason for installing each fileset is indicated with a keyword
     in parentheses and explained by a "Success Key" following this list.

  cluster.es.migcheck 7.2.5.0                    (Requisite)
    PowerHA SystemMirror Migration support
  cluster.es.client.rte 7.2.5.0                  (Selected)
    PowerHA SystemMirror Client Runtime
  cluster.es.client.utils 7.2.5.0                (Requisite)
    PowerHA SystemMirror Client Utilities
  cluster.es.client.lib 7.2.5.0                  (Requisite)
    PowerHA SystemMirror Client Libraries
...
+-----------------------------------------------------------------------------+
                                Summaries:
+-----------------------------------------------------------------------------+

Installation Summary
--------------------
Name                        Level           Part        Event       Result
-------------------------------------------------------------------------------
cluster.es.migcheck         7.2.5.0         USR         APPLY       SUCCESS
cluster.es.migcheck         7.2.5.0         ROOT        APPLY       SUCCESS
cluster.es.client.rte       7.2.5.0         USR         APPLY       SUCCESS
cluster.es.client.utils     7.2.5.0         USR         APPLY       SUCCESS
cluster.es.client.lib       7.2.5.0         USR         APPLY       SUCCESS
cluster.es.client.rte       7.2.5.0         ROOT        APPLY       SUCCESS
cluster.es.client.lib       7.2.5.0         ROOT        APPLY       SUCCESS

File /etc/inittab has been modified.
File /etc/services has been modified.
One or more of the files listed in /etc/check_config.files have changed.
See /var/adm/ras/config.diff for details.
root@yoda / # lslpp -L cluster*
  Fileset                      Level  State  Type  Description (Uninstaller)
  ----------------------------------------------------------------------------
  cluster.es.client.lib      7.2.5.0    C     F    PowerHA SystemMirror Client
                                                   Libraries
  cluster.es.client.rte      7.2.5.0    C     F    PowerHA SystemMirror Client
                                                   Runtime
  cluster.es.client.utils    7.2.5.0    C     F    PowerHA SystemMirror Client
                                                   Utilities
  cluster.es.migcheck        7.2.5.0    C     F    PowerHA SystemMirror
                                                   Migration support
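As an aside, if you prefer the command line to SMIT, the same fileset (and its requisites) can be installed directly with installp. A sketch, assuming the installp images sit under /mnt/installp/ppc on the mounted media (adjust the path to suit your ISO layout):

root@yoda /mnt # installp -agXY -d /mnt/installp/ppc cluster.es.client.rte

Here -a applies the filesets, -g automatically installs requisites, -X extends file systems if needed and -Y agrees to the software license.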
After the fileset was installed, I added the (boot) IP addresses of each of the cluster nodes to the /usr/es/sbin/cluster/etc/clhosts file on yoda.
root@yoda /usr/es/sbin/cluster/etc # tail clhosts
# process locally, it will continue indefinitely to attempt communication
# periodically ONLY to the local IP address.  For this reason, it is
# adviseable for the user to replace the loopback address with all HACMP for
# AIX boot-time and/or service IP labels/addresses accessible through logical
# connections with this node, just as on an HACMP for AIX client node.  The
# loopback address is provided only as a convenience.
10.1.1.161
10.1.1.208
10.1.1.149
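Before starting clinfoES, it is worth a quick sanity check that each address in clhosts is actually reachable from the client. A minimal sketch (the loop simply pings each non-comment entry once):

root@yoda / # for h in $(grep -v "^#" /usr/es/sbin/cluster/etc/clhosts)
> do
>   ping -c 1 $h >/dev/null 2>&1 && echo "$h ok" || echo "$h UNREACHABLE"
> done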
Next, I started the clinfoES daemon.
root@yoda / # startsrc -s clinfoES
0513-059 The clinfoES Subsystem has been started. Subsystem PID is 10682714.
root@yoda / # lssrc -s clinfoES
Subsystem         Group            PID          Status
 clinfoES         cluster          10682714     active
I was then able to monitor the cluster status from yoda with clstat.
root@yoda / # clstat -a

                clstat - HACMP Cluster Status Monitor
                -------------------------------------

Cluster: cgaix_cluster (1854696660)
Mon Aug 23 19:52:12 CDT 2021
                State: UP               Nodes: 3
                SubState: STABLE

        Node: aixcg2            State: UP
           Interface: aixcg2 (0)                Address: 10.1.1.208
                                                State:   UP

        Node: aixcg1            State: UP
           Interface: aixcg1 (0)                Address: 10.1.1.161
                                                State:   UP
           Interface: hasvc (0)                 Address: 10.1.1.238
                                                State:   UP
           Resource Group: RG1                  State:  On line

        Node: aixcg3            State: UP
           Interface: aixcg3 (0)                Address: 10.1.1.149
                                                State:   UP
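If you would rather capture this status from a shell script or cron job than watch the refreshing display, clstat also offers a run-once mode. A sketch, assuming the -o (once) flag is available at your PowerHA level (check the clstat man page); it prints the ASCII status a single time and exits, and /tmp/clstat.out is just an arbitrary file for illustration:

root@yoda / # clstat -o > /tmp/clstat.out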
To ensure that clinfoES is started automatically at boot, I placed a new entry in /etc/inittab to start the service.
root@yoda / # mkitab "clinfoES:2:once:/usr/bin/startsrc -s clinfoES >/dev/console 2>&1"
root@yoda / # lsitab -a | tail -1
clinfoES:2:once:/usr/bin/startsrc -s clinfoES >/dev/console 2>&1
root@yoda / # alog -ot console | grep clinfoES
0 Mon Aug 23 19:28:43 CDT 2021 0513-059 The clinfoES Subsystem has been started. Subsystem PID is 9896250.