In this post I’ll demonstrate how to
create a Cluster Aware AIX (CAA) cluster on AIX 7.1. I’ll create a simple
two-node CAA cluster. I’ll then check the cluster status and, finally, remove
the cluster.
Before I start, I make sure that each node has an entry in /etc/hosts on both systems.
node1[/] > grep node /etc/hosts
172.29.134.169    node1
172.29.152.160    node2
node2[/] > grep node /etc/hosts
172.29.134.169    node1
172.29.152.160    node2
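If the entries were missing, they could be added with the hostent command (using the same addresses as above; this is just a suggested step I didn't need to run here):
node1[/] > hostent -a 172.29.134.169 -h node1
node1[/] > hostent -a 172.29.152.160 -h node2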
I verify that cluster services are not active on either node.
node1[/tmp/cg] > lscluster -i
Cluster services are not active.
node2[/] > lscluster -i
Cluster services are not active.
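It's also worth confirming that the cluster communications daemon (clcomd) is active on both nodes before running mkcluster, as cluster creation relies on it to reach the remote node. A quick check (a suggested extra step, output not captured here):
node1[/] > lssrc -s clcomd
node2[/] > lssrc -s clcomd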
I’ll now create a cluster named
mycluster. My CAA repository disk is named hdisk1.
node1[/] > mkcluster -n mycluster -m node1,node2 -r hdisk1
My two-node CAA cluster is now operational.
node1[/] > lscluster -c
Cluster query for cluster mycluster returns:
Cluster uuid: 2a476f06-be50-11e0-9781-66da93d16b18
Number of nodes in cluster = 2
Cluster id for node node2 is 1
Primary IP address for node node2 is 172.29.152.160
Cluster id for node node1 is 2
Primary IP address for node node1 is 172.29.134.169
Number of disks in cluster = 0
Multicast address for cluster is 228.29.134.169
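For more detail than lscluster -c provides, lscluster -m reports per-node information and lscluster -d reports the cluster storage. For example (output not shown here):
node1[/] > lscluster -m
node1[/] > lscluster -d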
I can immediately execute commands on
both nodes using the clcmd utility.
node1[/] > clcmd uptime
-------------------------------
NODE node1
-------------------------------
11:16PM   up  1:16,  1 user,  load average: 1.10, 1.18, 1.36
-------------------------------
NODE node2
-------------------------------
11:16PM   up 45 mins,  1 user,  load average: 0.97, 0.75, 0.64
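Because clcmd runs the given command on every node in the cluster, it's also a handy way to confirm that both nodes are at the same AIX level, for example (output not captured here):
node1[/] > clcmd oslevel -s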
Prior to cluster creation, the disks
on my system were named as follows:
node1[/] > lspv
hdisk1          00f6048892b78f9f                    None
hdisk0          00f6050a2cd79ef8                    rootvg          active
After cluster creation, hdisk1 is
renamed to caa_private0 and is assigned to the CAA repository volume group
named caavg_private.
node1[/] > lspv
caa_private0    00f6048892b78f9f                    caavg_private   active
hdisk0          00f6050a2cd79ef8                    rootvg          active
node1[/] > lsvg caavg_private
VOLUME GROUP:       caavg_private            VG IDENTIFIER:  00f6048800004c0000000131930c1af3
VG STATE:           active                   PP SIZE:        64 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      799 (51136 megabytes)
MAX LVs:            256                      FREE PPs:       784 (50176 megabytes)
LVs:                6                        USED PPs:       15 (960 megabytes)
OPEN LVs:           2                        QUORUM:         2 (Enabled)
TOTAL PVs:          1                        VG DESCRIPTORS: 2
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         1                        AUTO ON:        yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:        32
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
PV RESTRICTION:     none
node1[/] > lsvg -l caavg_private
caavg_private:
LV NAME             TYPE       LPs     PPs     PVs    LV STATE      MOUNT POINT
caalv_private1      boot       1       1       1      closed/syncd  N/A
caalv_private2      boot       1       1       1      closed/syncd  N/A
caalv_private3      boot       4       4       1      open/syncd    N/A
fslv00              jfs2       4       4       1      open/syncd    /clrepos_private1
fslv01              jfs2       4       4       1      closed/syncd  /clrepos_private2
powerha_crlv        boot       1       1       1      closed/syncd  N/A
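To confirm which physical volume backs the repository volume group, lsvg -p can be used (a quick extra check, output not captured here):
node1[/] > lsvg -p caavg_private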
node1[/tmp/cg] > df -g
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4           0.25      0.07   74%    10118    37% /
/dev/hd2           2.50      0.19   93%    45192    48% /usr
/dev/hd9var        0.38      0.02   95%     5995    47% /var
/dev/hd3           0.50      0.38   24%      187     1% /tmp
/dev/hd1           0.12      0.12    1%       11     1% /home
/dev/hd11admin     0.12      0.12    1%        5     1% /admin
/proc                 -         -    -         -      - /proc
/dev/hd10opt       0.38      0.13   65%     8021    21% /opt
/dev/livedump      0.25      0.25    1%        4     1% /var/adm/ras/livedump
/aha                  -         -    -        37     1% /aha
/dev/fslv00        0.25      0.25    2%       15     1% /clrepos_private1
Several services are started during cluster creation. In particular, you will notice the following subsystems active (lssrc -a output):
 cld              caa              7340104      active
 solidhac         caa              8912986      active
 solid            caa              8519794      active
 clconfd          caa              9306188      active
 cthags           cthags           9830450      active
 ctrmc            rsct             3342554      active
 IBM.DRM          rsct_rm          8716462      active
 IBM.ServiceRM    rsct_rm          9240724      active
 IBM.StorageRM    rsct_rm          8061146      active
 IBM.MgmtDomainRM rsct_rm          8454200      active
node1[/tmp/cg] > lssrc -g caa
Subsystem         Group            PID          Status
 clcomd           caa              6553810      active
 cld              caa              6619252      active
 solidhac         caa              8650856      active
 solid            caa              8781824      active
 clconfd          caa              10092598     active
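Before I tear the cluster down, note that nodes and shared disks can be added to (or removed from) an existing cluster with chcluster. For example, to add a hypothetical third node (node3) and a hypothetical shared disk (hdisk2); I haven't run these commands against this particular cluster:
node1[/] > chcluster -n mycluster -m +node3
node1[/] > chcluster -n mycluster -d +hdisk2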
Now I will remove the cluster.
node1[/] > rmcluster -n mycluster
rmcluster: Removed cluster shared disks are automatically renamed to names such
as hdisk10, [hdisk11, ...] on all cluster nodes. However, this cannot take
place while a disk is busy or on a node which is down or not reachable. If any
disks cannot be renamed now, you must manually rename them by removing them
from the ODM database and then running the cfgmgr command to recreate them
with default names. For example:
        rmdev -l cldisk1 -d
        rmdev -l cldisk2 -d
        cfgmgr
CAA cluster services are no longer
active.
node1[/] > lscluster -c
Cluster services are not active.
The caa_private0 disk is renamed back to hdisk1, and the caavg_private volume group is also removed.
node1[/] > lspv
hdisk1          00f6048892b78f9f                    None
hdisk0          00f6050a2cd79ef8                    rootvg          active
node1[/] > df
Filesystem    512-blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4          524288    139136   74%    10097    37% /
/dev/hd2         5242880    405088   93%    45192    48% /usr
/dev/hd9var       786432     88040   89%     5986    35% /var
/dev/hd3         1048576    806528   24%      186     1% /tmp
/dev/hd1          262144    261352    1%       11     1% /home
/dev/hd11admin    262144    261416    1%        5     1% /admin
/proc                  -         -    -         -      - /proc
/dev/hd10opt      786432    279912   65%     8021    21% /opt
/dev/livedump     524288    523552    1%        4     1% /var/adm/ras/livedump
/aha                   -         -    -        32     1% /aha
node1[/] >