Using clstat from a non-cluster node


clstat is a binary contained in the PowerHA LPP fileset. It uses the clinfoES daemon as an API (via SNMP) to query, and report on, the status of a PowerHA cluster. The clinfoES daemon must be up and running before you start clstat.


You can run clstat from any node in a cluster to determine the status of the nodes and the health of the cluster. It is also possible to run clstat from another AIX system that is not a member of the cluster. This provides a way to monitor your PowerHA cluster(s) from outside the cluster itself. Monitoring from an independent system is beneficial when problems exist with any of the nodes, or with the entire cluster, because you are not relying on a healthy cluster node to run clstat.


To use clstat from a non-cluster node, you need to perform the following steps:


1.     Install the PowerHA client fileset (plus its requisites) on an AIX system that is not part of the HA cluster.

2.     Populate /usr/es/sbin/cluster/etc/clhosts with the (persistent/boot) IP labels of each cluster node.

3.     Start the clinfoES daemon (startsrc -s clinfoES).

4.     Run clstat -a.
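

Step 2 can be sketched as a short script. The node labels below are the ones used in the example later in this article, and a temporary file stands in for /usr/es/sbin/cluster/etc/clhosts so the sketch runs anywhere; on a real client node you would append to the actual clhosts file:

```shell
# Sketch of step 2: clhosts takes one IP label (or address) per line.
# A temp file stands in for /usr/es/sbin/cluster/etc/clhosts.
clhosts=$(mktemp)

for node in aixcg1 aixcg2 aixcg3; do
    echo "$node" >> "$clhosts"
done

cat "$clhosts"
```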


The clstat utility can also be used to monitor more than one cluster. If clstat is invoked with the -i option, it reports on all clusters it can reach.


In the example below, I have a three-node cluster that I would like to monitor (with clstat) from a non-cluster AIX system called yoda.


The AIX host, yoda, is running AIX 7.2 TL5 SP1. The PowerHA cluster nodes are all running PowerHA 7.2.5 SP1.


root@yoda / # oslevel -s



I installed the PowerHA client fileset from the PowerHA v7.2.5 base installation media (an ISO file, which I downloaded from the IBM ESS website). This fileset will automatically “pull in” and install some other requisite filesets.


root@yoda /tmp/cg # loopmount -i POWERHA_SYSTEMMIRROR_V7.2.5_STD.iso -m /mnt -o "-V cdrfs -o ro"

root@yoda /tmp/cg # cd /mnt

root@yoda /mnt # ls -ltr

total 16

drwxrwxr-x    3 4000     4000           2048 Nov 24 2020  usr

drwxrwxr-x    2 4000     4000           2048 Nov 24 2020  smui_server

drwxrwxr-x    3 4000     4000           2048 Nov 24 2020  installp

-rw-rw-r--    1 4000     4000             42 Nov 24 2020  .Version

root@yoda /mnt # smit installp


                    Pre-installation Verification...


Verifying selections...done

Verifying requisites...done


  Filesets listed in this section passed pre-installation verification

  and will be installed.

    -- Filesets are listed in the order in which they will be installed.

    -- The reason for installing each fileset is indicated with a keyword

       in parentheses and explained by a "Success Key" following this list.

    PowerHA SystemMirror Migration support (Selected)

    PowerHA SystemMirror Client Runtime (Requisite)

    PowerHA SystemMirror Client Utilities (Requisite)

    PowerHA SystemMirror Client Libraries (Requisite)


Installation Summary


Name                        Level           Part        Event       Result

-------------------------------------------------------------------------------

                                            USR         APPLY       SUCCESS

                                            ROOT        APPLY       SUCCESS

                                            USR         APPLY       SUCCESS

                                            USR         APPLY       SUCCESS

                                            USR         APPLY       SUCCESS

                                            ROOT        APPLY       SUCCESS

                                            ROOT        APPLY       SUCCESS


File /etc/inittab has been modified.

File /etc/services has been modified.


One or more of the files listed in /etc/check_config.files have changed.

        See /var/adm/ras/config.diff for details.


root@yoda / # lslpp -L cluster*

  Fileset                      Level  State  Type  Description (Uninstaller)

  ----------------------------------------------------------------------------

                                        C     F    PowerHA SystemMirror Client

                                                   Libraries

                                        C     F    PowerHA SystemMirror Client

                                                   Runtime

                                        C     F    PowerHA SystemMirror Client

                                                   Utilities

                                        C     F    PowerHA SystemMirror Migration



After the fileset was installed, we added the (boot) IP addresses of each of the cluster nodes to the /usr/es/sbin/cluster/etc/clhosts file on yoda.


root@yoda /usr/es/sbin/cluster/etc # tail clhosts

#  process locally, it will continue indefinitely to attempt communication

#  periodically ONLY to the local IP address.  For this reason, it is

#  adviseable for the user to replace the loopback address with all HACMP for

#  AIX boot-time and/or service IP labels/addresses accessible through logical

#  connections with this node, just as on an HACMP for AIX client node.  The

#  loopback address is provided only as a convenience.


Next, we started the clinfoES daemon.


root@yoda / # startsrc -s clinfoES

0513-059 The clinfoES Subsystem has been started. Subsystem PID is 10682714.


root@yoda / # lssrc -s clinfoES

Subsystem         Group            PID          Status

 clinfoES         cluster          10682714     active
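

Since clstat is only as good as the clinfoES daemon feeding it, a small wrapper can verify the daemon is active before the display is trusted. This is a sketch only: the sample line below stands in for live output, which on AIX would be captured with something like lssrc_line=$(lssrc -s clinfoES | tail -1):

```shell
# Check the lssrc status column (last field of the subsystem line).
# The sample line stands in for real `lssrc -s clinfoES` output.
lssrc_line=' clinfoES         cluster          10682714     active'
status=$(echo "$lssrc_line" | awk '{print $NF}')

if [ "$status" = "active" ]; then
    echo "clinfoES is active; clstat output can be trusted"
else
    echo "clinfoES is not running; start it with: startsrc -s clinfoES" >&2
fi
```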


We were then able to monitor the cluster status from yoda with clstat.


root@yoda / # clstat -a


               clstat - HACMP Cluster Status Monitor



Cluster: cgaix_cluster  (1854696660)

Mon Aug 23 19:52:12 CDT 2021

                State: UP               Nodes: 3

                SubState: STABLE



        Node: aixcg2            State: UP

           Interface: aixcg2 (0)                Address:

                                                State:   UP


        Node: aixcg1             State: UP

           Interface: aixcg1 (0)                Address:

                                                State:   UP

           Interface: hasvc (0)                 Address:

                                                State:   UP

           Resource Group: RG1                  State:  On line


        Node: aixcg3            State: UP

           Interface: aixcg3 (0)                Address:

                                                State:   UP


To ensure that clinfoES is started automatically at boot, we placed a new entry in /etc/inittab to start the service.


root@yoda / # mkitab "clinfoES:2:once:/usr/bin/startsrc -s clinfoES >/dev/console 2>&1"

root@yoda / # lsitab -a | tail -1

clinfoES:2:once:/usr/bin/startsrc -s clinfoES >/dev/console 2>&1
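

An /etc/inittab record has the form identifier:runlevel:action:command, so the entry above can be sanity-checked by splitting it on colons. The parsing below is a quick illustration, run against the entry held as a string:

```shell
# Split an inittab record into its fields (identifier:runlevel:action:command).
entry='clinfoES:2:once:/usr/bin/startsrc -s clinfoES >/dev/console 2>&1'

ident=$(echo "$entry" | awk -F: '{print $1}')
runlvl=$(echo "$entry" | awk -F: '{print $2}')
action=$(echo "$entry" | awk -F: '{print $3}')

echo "id=$ident runlevel=$runlvl action=$action"
```

The once action runs the command a single time at boot and does not respawn it; after that, the SRC (startsrc/stopsrc) manages the daemon.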


root@yoda / # alog -ot console | grep clinfoES

         0 Mon Aug 23 19:28:43 CDT 2021 0513-059 The clinfoES Subsystem has been started. Subsystem PID is 9896250.