Flicking through the latest AIX enhancements

After reading about the latest AIX updates, here’s what I found so far.
There appears to be some new integration between NIM and the VIOS. The nim command now has an updateios operation, e.g. nim -o updateios, so you can now update your VIO servers from NIM. This is nice.
On my lab NIM master I checked the nim man page and found the following new information:
NIM [/] # oslevel -s
6100-07-02-1150

NIM [/] # man nim
...
updateios    Performs software customization and maintenance on a virtual
             input output server (VIOS) management server that is of the
             vios or ivm type.
updateios

1   To install fixes or to update a VIOS with the vioserver1 NIM object name to
    the latest maintenance level, type:

        nim -o updateios -a lpp_source=lpp_source1 vioserver1

    The updates are stored in the lpp_source resource lpp_source1.
    Note: Running the updateios operation from NIM runs a preview of the
    installation unless the preview attribute is set to no.

2   To reject fixes for a VIOS with the vioserver1 NIM object name, type:

        nim -o updateios -a updateios_flags=-reject vioserver1

3   To clean up partially installed updates for a VIOS with the vioserver1 NIM
    object name, type:

        nim -o updateios -a updateios_flags=-cleanup vioserver1

4   To commit updates for a VIOS with the vioserver1 NIM object name, type:

        nim -o updateios -a updateios_flags=-commit vioserver1

5   To remove a specific update, such as update1, for a VIOS with the vioserver1
    NIM object name, type:

        nim -o updateios -a updateios_flags=-remove -a filesets=update1 vioserver1

6   To remove updates for a VIOS with the vioserver1 NIM object name by using an
    installp_bundle bundle1, where bundle1 contains the updates to be removed, type:

        nim -o updateios -a updateios_flags=-remove -a installp_bundle=bundle1 vioserver1
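For a real (non-preview) update I’d expect to string the attributes together along these lines. Just a sketch using my own object names (vios1 and an lpp_source called vios_updates), with the preview attribute from the note above plus the standard NIM accept_licenses attribute:

NIM [/] # nim -o updateios -a lpp_source=vios_updates \
          -a accept_licenses=yes -a preview=no vios1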
===
There’s also mention of a new resource type, specifically for VIOS mksysb images. This resource type is called ios_mksysb:

...
ios_mksysb    Represents a backup image taken from a VIOS management server
              that is of the vios or ivm type.
26  To define an ios_mksysb resource, such as ios_mksysb1, and create the
    ios_mksysb image of the VIOS client vios1 during the resource definition,
    where the image is located in /exp..., type:

        nim -o define -t ios_mksysb -a server=master \
            -a location=... -a mk_image=yes ios_mksysb1
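Filling in the blanks myself (the location path and the source attribute below are my guesses, not the man page text), a complete definition would presumably look something like this, with NIM creating the backup from the vios1 client and storing the image at the given location on the master:

NIM [/] # nim -o define -t ios_mksysb -a server=master \
          -a location=/export/mksysb/vios1.mksysb \
          -a source=vios1 -a mk_image=yes ios_mksysb1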
This is all starting to come together now, since the introduction of the new “management” object class, vios, with AIX 6.1 TL3.
# smit nim
    Manage Control Objects
        Define a Management Object
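Once a VIOS management object has been defined, the usual lsnim queries should show it, e.g. (vios1 here is just a made-up object name):

NIM [/] # lsnim -t vios
NIM [/] # lsnim -l vios1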
Next I thought I’d take a look at the TCP Fast Loopback option. This new option should help reduce TCP/IP (CPU) overhead when two TCP communication endpoints reside in the same LPAR. This could be useful where a database and an application run in the same LPAR, e.g. SAP and Oracle together, or where two or more WPARs in the same LPAR need to communicate with each other over TCP/IP.
I turned on this new feature on my AIX 7.1 LPAR.
AIX7[/] # oslevel -s
7100-01-02-1150
AIX7[/] # netstat -p tcp | grep fastpath
        0 fastpath loopback connection
        0 fastpath loopback sent packet (0 byte)
        0 fastpath loopback received packet (0 byte)
AIX7[/] # no -p -o tcp_fastlo=1
Setting tcp_fastlo to 1
Setting tcp_fastlo to 1 in nextboot file
Change to tunable tcp_fastlo, will only be effective for future connections
AIX7[/] # no -a | grep tcp_fast
                  tcp_fastlo = 1
                  tcp_...
Initially I did not see any traffic via the fastpath.
AIX7[/] # netstat -s -p tcp | grep fastpath
        0 fastpath loopback connection
        0 fastpath loopback sent packet (0 byte)
        0 fastpath loopback received packet (0 byte)
So I created two WPARs in the same LPAR and started transferring files between them via FTP.
AIX7[/] # lswpar -N
Name    Interface  Address(6)       Mask/Prefix     Broadcast
--------------------------------------------------------------
wpar1   en0        172.29.152.235   255.255.192.0   172.29.191.255
wpar2   en0        172.29.152.236   255.255.192.0   172.29.191.255
And sure enough I started to see some traffic.
AIX7[/] # netstat -s -p tcp | grep fastpath
     7072 fastpath loopback connection
   171102 fastpath loopback sent packet (5374011130 byte)
   171102 fastpath loopback received packet (5374011130 byte)
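Worth noting: there’s a second, related tunable, tcp_fastlo_crosswpar, which as I understand it governs fast loopback for traffic crossing WPAR boundaries. I haven’t dug into how it interacts with shared-network WPARs like these two, but it’s inspected and set the same way as tcp_fastlo:

AIX7[/] # no -o tcp_fastlo_crosswpar
AIX7[/] # no -p -o tcp_fastlo_crosswpar=1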
Next stop: JFS2 Remount Support. Apparently you can now change certain mount options dynamically. These particular options can influence (among other things) filesystem caching.
NIM [/] # oslevel -s
6100-07-02-1150
So I started with a standard JFS2 filesystem, without any additional mount options.
NIM [/] # mount | grep cg
         /dev/cglv        /cg              jfs2   Jan 18 21:14 rw,log=/dev/hd8
Then I dynamically remounted it with the rbr option. This option will prevent user data pages from being cached after a file is read from this filesystem.
NIM [/] # mount -o remount,rbr /cg
NIM [/] # mount | grep cg
         /dev/cglv        /cg              jfs2   Jan 18 21:14 rw,rbr,log=/dev/hd8
Still can’t dynamically remount a filesystem with CIO, however. But that’s OK.
NIM [/] # mount -o remount,cio /cg
mount: cio is not valid with the remount option.
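So if I really want CIO on this filesystem, the old two-step approach still applies; nothing new here, just a plain unmount and mount:

NIM [/] # umount /cg
NIM [/] # mount -o cio /cg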
According to the presentation, there are several options that can now be changed dynamically, e.g. atime.
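For example, assuming atime/noatime really is on the remountable list (I haven’t tried this one myself), switching off access-time updates on the fly should be as simple as:

NIM [/] # mount -o remount,noatime /cg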
By the way, just to make sure, I tried changing the same mount option, dynamically, on an AIX 6.1 TL6 system, and it failed as expected. I’d have to umount and mount the filesystem to do this on TL6 (or lower).
# oslevel -s
6100-06-04-1112
# mount -o remount,rbr /cg
mount: remo...
OK, let’s look at the new LVM Infinite Retry capability. It’s designed to improve system availability by allowing LVM to recover from transient failures of storage devices. Sounds interesting!
AIX7[/] # oslevel -s
7100-01-02-1150
The man page for mkvg states the following:
-O y / n    Enables the infinite retry option of the logical volume.
            n    The infinite retry option of the logical volume is not
                 enabled. The failing I/O of the logical volume is not
                 retried. This is the default value.
            y    The infinite retry option of the logical volume is enabled.
                 The failed I/O request is retried until it is successful.
I think “logical volume” should be “volume group”. But anyway, I get the idea.
So let’s create a new VG with infinite retry enabled.
AIX7[/] # mkvg -O y -S -y cgvg hdisk6
cgvg
AIX7[/] # lsvg cgvg
VOLUME GROUP:       cgvg                     VG IDENTIFIER: 00f6...
VG STATE:           active
VG PERMISSION:      read/write
MAX LVs:            256
LVs:                0
OPEN LVs:           0
TOTAL PVs:          1
STALE PVs:          0
ACTIVE PVs:         1
MAX PPs per VG:     3276...
LTG size (Dynamic): 128 kilobyte(s)
HOT SPARE:          no
MIRROR POOL STRICT: off
PV RESTRICTION:     none
AIX7[/] #
Now, let’s disable it.
AIX7[/] # chvg -On cgvg
AIX7[/] # lsvg cgvg
VOLUME GROUP:       cgvg
VG STATE:           active
VG PERMISSION:      read/write
MAX LVs:            256
LVs:                0
OPEN LVs:           0
TOTAL PVs:          1
STALE PVs:          0
ACTIVE PVs:         1
MAX PPs per VG:     3276...
LTG size (Dynamic): 128 kilobyte(s)          AUTO SYNC: no
HOT SPARE:          no
MIRROR POOL STRICT: off
PV RESTRICTION:     none
AIX7[/] #
The man page for mklv states the following:
-O y / n    Enables the infinite retry option of the logical volume.
            n    The infinite retry option of the logical volume is not
                 enabled. The failing I/O of the logical volume is not
                 retried. This is the default value.
            y    The infinite retry option of the logical volume is enabled.
                 The failed I/O request is retried until it is successful.
Stand by, creating a logical volume with infinite retry enabled.
AIX7[/] # mklv -tjfs2 -Oy -y mylv cgvg 100
mylv
AIX7[/] # lslv mylv
LOGICAL VOLUME:     mylv
LV IDENTIFIER:      00f6...
VG STATE:           active
TYPE:               jfs2
MAX LPs:            512
COPIES:             1
LPs:                100
STALE PPs:          0
INTER-POLICY:       minimum
INTRA-POLICY:       middle
MOUNT POINT:        N/A
DEVICE UID:         0                        DEVICE PERMISSIONS: 432
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?:     NO
INFINITE RETRY:     yes
DEVICESUBTYPE:      DS_LVZ
COPY 1 MIRROR POOL: None
COPY 2 MIRROR POOL: None
COPY 3 MIRROR POOL: None
AIX7[/] #
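A quick way to check just that one attribute is to grep the lslv output shown above:

AIX7[/] # lslv mylv | grep -i "INFINITE RETRY"
INFINITE RETRY:     yes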
Can I disable it, once I have a mounted filesystem on the LV? No. You must unmount the filesystem first. OK, no problem.
AIX7[/] # chlv -On mylv
0516-012 lchangelv: Logical volume must be closed.  If the logical volume
        contains a filesystem, the umount command will close the LV device.
0516-704 chlv: Unable to change logical volume mylv.
AIX7[/] # umount /myfs
AIX7[/] # chlv -On mylv
AIX7[/] # mount /myfs
AIX7[/] # lslv mylv
LOGICAL VOLUME:     mylv
LV IDENTIFIER:      00f6...
VG STATE:           active
TYPE:               jfs2
MAX LPs:            512
COPIES:             1
LPs:                100
STALE PPs:          0
INTER-POLICY:       minimum
INTRA-POLICY:       middle                   UPPER BOUND: 1024
MOUNT POINT:        /myfs
DEVICE UID:         0                        DEVICE PERMISSIONS: 432
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?:     NO
INFINITE RETRY:     no
DEVICESUBTYPE:      DS_LVZ
COPY 1 MIRROR POOL: None
COPY 2 MIRROR POOL: None
COPY 3 MIRROR POOL: None
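And flipping infinite retry back on later is just the reverse, with the same unmount requirement (chlv -O y being the counterpart of the -O n used above):

AIX7[/] # umount /myfs
AIX7[/] # chlv -Oy mylv
AIX7[/] # mount /myfs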
And last, but not least, let’s take a brief look at Active System Optimiser (ASO). To be honest, I’m still not entirely sure how ASO works. But I have no doubt that more information will be available from IBM soon. According to the presentation material, ASO can “increase system performance by autonomously tuning system configuration”. Wow, cool! It focuses on optimizing cache and memory affinity. Hmmm, interesting. How the heck does it do that!? Only works with POWER7 and AIX 7.1.
So can I enable this on my p7 LPAR? Let’s give it a try!
AIX7[/] # oslevel -s
7100-01-02-1150
AIX7[/var/log/aso] # asoo -a
aso_active = 0
AIX7[/var/log/aso] # asoo -p -o aso_active=1
Setting aso_active to 1 in nextboot file
Setting aso_active to 1
AIX7[/var/log/aso] # asoo -a
aso_active = 1
Is the aso daemon running already? Nope.
AIX7[/] # ps -ef | grep aso
AIX7[/] # lssrc -a | grep aso
 aso                                                          inoperative
Can I start it now? Nope. Well, startsrc reports that the subsystem started, but it doesn’t stay up.
AIX7[/var/log] # startsrc -s aso
0513-059 The aso Subsystem has been started. Subsystem PID is 7209122.
AIX7[/var/log] # lssrc -a | grep aso
 aso                                                          inoperative
Let’s check the ASO log file. Oh no, my VLP LPAR doesn’t have enough CPU entitlement for ASO to start. Oh well.
AIX7[/var/log] # cd /var/log/aso
AIX7[/var/log/aso] # ls -ltr
total 16
-rw-r--r--    1 root     system         1143 Jan 18 22:14 aso_process.log
-rw-r--r--    1 root     system         1143 Jan 18 22:14 aso.log
AIX7[/var/log/aso] # cat aso.log
Jan 18 22:14:00 l488pp011_pub aso:notice aso[7209122]: /var/run/aso locked with pid 7209122
Jan 18 22:14:00 l488pp011_pub aso:notice aso[7209122]: [STOP] Maximum system entitlement is 0.5 CPUs, but must be at least 2.0 CPUs for ASO to operate. Stopping.
Jan 18 22:14:00 l488pp011_pub aso:notice aso[7209122]: [STOP] Unsupported partition configuration detected; ASO will not run.
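If I ever get this LPAR bumped up to 2.0 processing units I’ll try again; the obvious checks to confirm ASO is actually alive would just be the standard SRC and tunables commands plus the log above, nothing ASO-specific beyond asoo itself:

AIX7[/] # lssrc -s aso
AIX7[/] # asoo -o aso_active
AIX7[/] # tail /var/log/aso/aso.log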