Migrating to AIX 7.1 with nimadm

Why should I migrate?

Before I discuss how to migrate to a newer version of the IBM AIX® operating system, let’s review why you should consider migrating at all. One of the most important reasons to migrate is support. If you are running an older version of AIX, such as 5.3 or earlier, you should already be aware that these versions are no longer supported by IBM. If problems arise on an older version of AIX you are most likely going to be on your own. IBM support will no longer be able to help you.

AIX 5.3 officially went out of support on April 30, 2012. IBM is offering extended support for a limited time, for a fee, which will help some customers in the interim while they migrate their systems to AIX 7.1 or 6.1. If you have IBM POWER7® hardware, you might want to consider AIX 5.2 and 5.3 Workload Partitions (WPARs). These allow you to continue running your legacy applications on AIX 5.2 or 5.3; however, they run within an AIX 7.1 WPAR. These special systems are known as versioned WPARs and are only supported on POWER7 hardware. You can find more information on versioned WPARs in the Resources section.

Of course, there are other pressing reasons to migrate as well. Newer releases of AIX include a multitude of new enhancements, improvements, features, and performance boosts. I encourage you to read the AIX 7.1 and 6.1 Differences Guides (IBM Redbooks® publications) to find out more; see the Resources section.

Migrating to AIX 7.1 with nimadm

I’ve discussed using nimadm to migrate to AIX 6.1 in the past. In this article, I’ll briefly cover that same process, this time migrating to AIX 7.1. Admittedly, the steps are almost identical, so I’ll refer you to my 2010 article as a starting point for migrating to 7.1. Here I’ll provide a brief guide to migrating both AIX 5.3 and 6.1 systems to 7.1, and I’ll also offer some general advice and tips.

The nimadm utility offers several advantages over a conventional migration. For example, a system administrator can use nimadm to create a copy of a NIM client’s rootvg (on a spare disk on the client, similar to a standard alternate disk installation with alt_disk_install) and migrate that disk to a newer version or release of AIX. All of this can be done without disruption to the client; no outage is required to perform the migration. After the migration is finished, the only downtime required is a scheduled reboot of the system.

Another advantage is that the actual migration process occurs on the NIM master, taking the load off the client logical partition (LPAR). This reduces the processing overhead on the LPAR and minimizes the performance impact to the running applications.

For customers with a large number of AIX systems, it is also important to note that the nimadm tool supports migrating several clients at once.

Just as I did in my previous article, I’ll assume you already have a NIM master in your environment. I’m also going to assume that your NIM master is already running AIX 7.1 with the latest technology level (TL) and service pack (SP) applied. If not, I recommend you refer to my previous article and the associated Resources section.

My NIM master is running AIX 7.1 TL1 SP4.

# oslevel -s
7100-01-04-1216
# lslpp -l bos.sysmgt.nim.master
  Fileset                      Level  State      Description
  ----------------------------------------------------------------------------
Path: /usr/lib/objrepos
  bos.sysmgt.nim.master     7.1.1.15  APPLIED    Network Install Manager -
                                                 Master Tools

I created new lpp_source and SPOT NIM resources for AIX 7.1 TL1 SP4.

# lsnim -t lpp_source
lpp_sourceaix710104     resources       lpp_source
# lsnim -t spot
spotaix710104     resources       spot
# lsnim -l lpp_sourceaix710104
lpp_sourceaix710104:
   class       = resources
   type        = lpp_source
   arch        = power
   Rstate      = ready for use
   prev_state  = unavailable for use
   location    = /export/lpp_source/lpp_sourceaix710104
   simages     = yes
   alloc_count = 0
   server      = master
# lsnim -l spotaix710104
spotaix710104:
   class         = resources
   type          = spot
   plat_defined  = chrp
   arch          = power
   Rstate        = ready for use
   prev_state    = verification is being performed
   location      = /export/spot/spotaix710104/usr
   version       = 7
   release       = 1
   mod           = 1
   oslevel_r     = 7100-01
   alloc_count   = 2
   server        = master
   if_supported  = chrp.64 ent
   Rstate_result = success

To create these resources, I downloaded the AIX 7.1 ISO images from the IBM Entitled Software Support website. I placed these images into a temporary directory and then used the loopmount command to temporarily mount them.

# ls -ltr
total 11301248
drwxr-xr-x    2 root     system          256 May 14 15:47 lost+found
-rw-r--r--    1 root     system   3361046528 May 14 16:22 NIMAIX71DVD1.iso
-rw-r--r--    1 root     system   2425192448 May 14 16:23 NIMAIX71DVD2.iso
# loopmount -i NIMAIX71DVD1.iso -o "-V cdrfs -o ro" -m /mnt/dvd1
# loopmount -i NIMAIX71DVD2.iso -o "-V cdrfs -o ro" -m /mnt/dvd2
# df | grep loop
/dev/loop0       6563484         0  100%  1640871   100% /mnt/dvd1
/dev/loop1       4735648         0  100%  1183912   100% /mnt/dvd2
# ls -ltr /mnt/dvd1 /mnt/dvd2
/mnt/dvd1:
total 84
drwxr-xr-x    3 4000     4000           2048 Sep 03 2010  ppc
-rw-r--r--    1 4000     4000            819 Sep 03 2010  README.aix
drwxr-xr-x    2 4000     4000           2048 Sep 03 2010  7100-00
drwxr-xr-x    3 4000     4000           2048 Sep 03 2010  root
-rw-r--r--    1 4000     4000          15081 Sep 03 2010  image.data
-rw-r--r--    1 4000     4000           6252 Sep 03 2010  bosinst.data
-rw-r--r--    1 4000     4000             16 Sep 03 2010  OSLEVEL
drwxrwxr-x    4 4000     4000           2048 Sep 03 2010  RPMS
drwxr-xr-x   11 4000     4000           2048 Sep 03 2010  usr
drwxr-xr-x    4 4000     4000           2048 Sep 03 2010  installp
-rw-rw-r--    1 4000     4000             42 Sep 03 2010  .Version
/mnt/dvd2:
total 16
drwxr-xr-x    3 4000     4000           2048 Sep 03 2010  ismp
drwxrwxr-x    3 4000     4000           2048 Sep 03 2010  usr
drwxrwxr-x    4 4000     4000           2048 Sep 03 2010  installp
-rw-rw-r--    1 4000     4000             42 Sep 03 2010  .Version

After they were mounted, I used smitty bffcreate to copy the contents of these images to my new AIX 7.1 lpp_source directory (/export/lpp_source/lpp_sourceaix710104).

Copy Software to Hard Disk for Future Installation

Type or select values in entry fields.
Press Enter AFTER making all desired changes.
                                                     [Entry Fields]
* INPUT device / directory for software               /mnt/dvd1
* SOFTWARE package to copy                           [all]        +
* DIRECTORY for storing software package      [/export/lpp_source/lpp_sourceaix710104]
  DIRECTORY for temporary storage during copying     [/tmp]
  EXTEND file systems if space needed?                yes         +
  Process multiple volumes?                           yes         +
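
If you prefer the command line over SMIT, the same copy can be performed directly with the bffcreate command. This is a minimal sketch using the paths from my environment; repeat it for the second DVD image:

# bffcreate -v -d /mnt/dvd1 -t /export/lpp_source/lpp_sourceaix710104 all
# bffcreate -v -d /mnt/dvd2 -t /export/lpp_source/lpp_sourceaix710104 all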

After the base filesets had been copied over, I defined the new lpp_source and SPOT within NIM. Then I downloaded the latest TL and SP for AIX 7.1 and updated the lpp_source and SPOT.
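
For reference, the equivalent command-line steps might look like the following sketch. The resource names match my environment; the /tmp/7100-01-04 directory holding the downloaded TL and SP filesets is hypothetical:

# nim -o define -t lpp_source -a server=master \
    -a location=/export/lpp_source/lpp_sourceaix710104 lpp_sourceaix710104
# nim -o define -t spot -a server=master -a source=lpp_sourceaix710104 \
    -a location=/export/spot spotaix710104
# nim -o update -a packages=all -a source=/tmp/7100-01-04 lpp_sourceaix710104
# nim -o cust -a lpp_source=lpp_sourceaix710104 -a fixes=update_all spotaix710104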

My NIM clients are running AIX 5.3 and AIX 6.1. We will migrate both of these systems to AIX 7.1 using nimadm. We will review the migration process for each system and check that the systems are appropriately tuned after the migration.

root@lparaix53[/] > oslevel -s
5300-12-04-1119
root@lparaix61[/] > oslevel -s
6100-06-04-1112

I have NIM client definitions for both systems already in place on my NIM master.

# lsnim -t standalone
lparaix53     machines       standalone
lparaix61     machines       standalone
# lsnim -l lparaix53
lparaix53:
   class          = machines
   type           = standalone
   connect        = nimsh
   platform       = chrp
   netboot_kernel = 64
   if1            = 172_29_154 lparaix53 0
   cable_type1    = N/A
   Cstate         = ready for a NIM operation
   prev_state     = not running
   Mstate         = currently running
   cpuid          = 00CDB5114C00
   Cstate_result  = reset
# lsnim -l lparaix61
lparaix61:
   class          = machines
   type           = standalone
   connect        = nimsh
   platform       = chrp
   netboot_kernel = 64
   if1            = 172_29_154 lparaix61 0
   cable_type1    = N/A
   Cstate         = ready for a NIM operation
   prev_state     = not running
   Mstate         = currently running
   cpuid          = 00C8E4244C00
   Cstate_result  = reset

Each NIM client has a spare disk available that we will use for the alternate disk migration.

root@lparaix53[/] > lspv
hdisk4          00f6050a2cd79ef8                    rootvg          active
hdisk5          00cdb511757d999e                    None
root@lparaix61[/] > lspv
hdisk4          00f6050a2cd79ef8                    rootvg          active
hdisk5          00c8e42485fabfd7                    None

The NIM master is configured with only three volume groups: rootvg, nimvg, and nimadmvg. The nimadmvg volume group will be used as a temporary cache location for client data during the 7.1 migrations. The volume group is currently empty.

# lsvg
rootvg
nimvg
nimadmvg
# lspv | grep adm
hdisk4          00c342c6395ff736                    nimadmvg        active
# lsvg -l nimadmvg
nimadmvg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT

Currently, nimadm requires rsh access to the NIM clients in order to function. Therefore, we ensure that the NIM master has rsh access to each of the clients and that NIM can communicate with each client successfully.
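
If rsh access is not already in place, one minimal (and temporary) approach is to add the NIM master to each client's /.rhosts file. The master hostname nimmaster below is hypothetical; remember to remove the entry once the migration is complete:

root@lparaix53[/] > echo "nimmaster root" >> /.rhosts
root@lparaix53[/] > chmod 600 /.rhosts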

# rsh lparaix53 date
Fri May 18 14:06:07 EETDT 2012
# rsh lparaix61 date
Fri May 18 14:06:40 EETDT 2012
# nim -o lslpp lparaix53 | grep -w "bos.rte "
  bos.rte                   5.3.0.60  COMMITTED  Base Operating System Runtime
  bos.rte                   5.3.0.60  COMMITTED  Base Operating System Runtime
# nim -o lslpp lparaix61 | grep -w "bos.rte "
  bos.rte                    6.1.0.0  COMMITTED  Base Operating System Runtime
  bos.rte                    6.1.0.0  COMMITTED  Base Operating System Runtime
  bos.rte                    6.1.1.0  COMMITTED  Base Operating System Runtime
  bos.rte                    6.1.2.0  COMMITTED  Base Operating System Runtime
  bos.rte                    6.1.3.0  COMMITTED  Base Operating System Runtime
  bos.rte                    6.1.4.0  COMMITTED  Base Operating System Runtime
  bos.rte                    6.1.6.0  COMMITTED  Base Operating System Runtime
  bos.rte                   6.1.6.15  COMMITTED  Base Operating System Runtime

We will now initiate a migration for both systems. Starting with the 5.3 system, we run the following nimadm command on the NIM master to start the alternate disk migration process.

NIMADM operation for AIX 5.3 system:

# nimadm -j nimadmvg -c lparaix53 -s spotaix710104 -l lpp_sourceaix710104 -d "hdisk5" -Y
Initializing the NIM master.
Initializing NIM client lparaix53.
Verifying alt_disk_migration eligibility.
Initializing log: /var/adm/ras/alt_mig/lparaix53_alt_mig.log
Starting Alternate Disk Migration.
+-----------------------------------------------------------------------------+
Executing nimadm phase 1.
+-----------------------------------------------------------------------------+
Cloning altinst_rootvg on client, Phase 1.
Client alt_disk_install command: alt_disk_copy -j -M 7.1 -P1 -d "hdisk5"
Calling mkszfile to create new /image.data file.
Checking disk sizes.
Creating cloned rootvg volume group and associated logical volumes.
Creating logical volume alt_hd5
Creating logical volume alt_hd6
Creating logical volume alt_hd8
...ETC...

In a new secure shell (SSH) session on the NIM master we initiate a second nimadm operation to migrate our AIX 6.1 NIM client.

NIMADM operation for AIX 6.1 system:

# nimadm -j nimadmvg -c lparaix61 -s spotaix710104 -l lpp_sourceaix710104 -d "hdisk5" -Y
Initializing the NIM master.
Initializing NIM client lparaix61.
Verifying alt_disk_migration eligibility.
Initializing log: /var/adm/ras/alt_mig/lparaix61_alt_mig.log
Starting Alternate Disk Migration.
+-----------------------------------------------------------------------------+
Executing nimadm phase 1.
+-----------------------------------------------------------------------------+
Cloning altinst_rootvg on client, Phase 1.
Client alt_disk_install command: alt_disk_copy -j -M 7.1 -P1 -d "hdisk5"
Calling mkszfile to create new /image.data file.
Checking disk sizes.
LOGICAL_VOLUME= hd11admin
FS_LV= /dev/hd11admin
Creating cloned rootvg volume group and associated logical volumes.
Creating logical volume alt_hd5
Creating logical volume alt_hd6
Creating logical volume alt_hd8
...ETC...

At this point, the nimadm process is copying data from the clients to the cache file systems on the NIM master and performing the migration on the master itself. After the migration process for a client is complete, the data is copied back to the client’s alternate rootvg disk. If you are interested in learning more about each phase of the nimadm process, then again, I refer you to my original nimadm article on the IBM Developer® website.

We observe that both clients now have a new volume group named altinst_rootvg. This volume group contains a copy of the original rootvg, now migrated to AIX 7.1.

root@lparaix53[/tmp] > lspv
hdisk4          00f6050a2cd79ef8                    rootvg          active
hdisk5          00cdb511757d999e                    altinst_rootvg  active
root@lparaix61[/] > lspv
hdisk4          00f6050a2cd79ef8                    rootvg          active
hdisk5          00c8e42485fabfd7                    altinst_rootvg  active

As the migrated data is being copied from the NIM master to the client, we observe that the alternate rootvg file systems are temporarily mounted on each client to receive the data.

root@lparaix53[/tmp] > df | grep alt
/dev/alt_hd4           262144    261384    1%       10     1% /alt_inst
/dev/alt_hd11admin     262144    261448    1%        4     1% /alt_inst/admin
/dev/alt_hd1           262144    261448    1%        4     1% /alt_inst/home
/dev/alt_hd10opt      1048576   1047760    1%        4     1% /alt_inst/opt
/dev/alt_hd3           262144    247936    6%        5     1% /alt_inst/tmp
/dev/alt_hd2          7077888   7076152    1%        4     1% /alt_inst/usr
/dev/alt_hd9var        524288    523552    1%        4     1% /alt_inst/var
root@lparaix61[/] > df | grep alt
/dev/alt_hd4          1048576   1047696    1%       10     1% /alt_inst
/dev/alt_hd11admin     524288    523552    1%        4     1% /alt_inst/admin
/dev/alt_hd1           524288    523552    1%        4     1% /alt_inst/home
/dev/alt_hd10opt      1572864   1571968    1%        4     1% /alt_inst/opt
/dev/alt_hd3           524288    508256    4%        5     1% /alt_inst/tmp
/dev/alt_hd2          8388608   8386672    1%        4     1% /alt_inst/usr
/dev/alt_hd9var        524288    523552    1%        4     1% /alt_inst/var

On the NIM master, we discover temporary cache file system mounts for each of the NIM clients. These cache file systems are housed in the nimadmvg volume group.

# df -g
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4           0.38      0.16   58%    11761    22% /
/dev/hd2           6.00      1.95   68%    93348    17% /usr
/dev/hd9var        2.38      2.02   15%     7620     2% /var
/dev/hd3           0.50      0.49    3%      187     1% /tmp
/dev/hd1           0.12      0.10   24%       18     1% /home
/dev/hd11admin      0.12      0.12    1%        7     1% /admin
/proc                 -         -    -         -     -  /proc
/dev/hd10opt       2.62      2.01   24%    12726     3% /opt
/dev/livedump      0.25      0.25    1%        4     1% /var/adm/ras/livedump
/dev/cglv         25.00     19.61   22%        6     1% /cg
/dev/lppsrclv     25.00     13.99   45%     7269     1% /lppsrc
/dev/spotlv       25.25     23.96    6%    29122     1% /spot
/dev/loop0         3.13      0.00  100%  1640871   100% /mnt/dvd1
/dev/loop1         2.26      0.00  100%  1183912   100% /mnt/dvd2
/dev/lv00          0.25      0.16   37%     7441    12% /lparaix53_alt/alt_inst
/dev/lv01          0.25      0.24    4%       18     1% /lparaix53_alt/alt_inst/admin
/dev/lv02          0.25      0.24    4%       25     1% /lparaix53_alt/alt_inst/home
/dev/lv03          0.75      0.29   62%     6718     4% /lparaix53_alt/alt_inst/opt
/dev/lv04          0.25      0.20   21%      225     1% /lparaix53_alt/alt_inst/tmp
/dev/lv05          3.50      0.73   80%    80350     9% /lparaix53_alt/alt_inst/usr
/dev/lv06          0.25      0.15   42%     7792    12% /lparaix53_alt/alt_inst/var
/dev/lv07          0.25      0.03   89%    13533    21% /lparaix61_alt/alt_inst
/dev/lv08          0.25      0.24    4%       17     1% /lparaix61_alt/alt_inst/admin
/dev/lv09          0.25      0.24    4%       23     1% /lparaix61_alt/alt_inst/home
/dev/lv10          0.75      0.61   19%     6409     4% /lparaix61_alt/alt_inst/opt
/dev/lv11          0.25      0.24    6%      107     1% /lparaix61_alt/alt_inst/tmp
/dev/lv12          4.25      0.26   94%   104878    10% /lparaix61_alt/alt_inst/usr
/dev/lv13          0.25      0.12   54%     9917    16% /lparaix61_alt/alt_inst/var
/usr/lib                       6.00   1.95   68%  93348   17% /lparaix53_alt/alt_inst/usr/lib/alt_mig/usr/lib
/usr/ccs/lib                   6.00   1.95   68%  93348   17% /lparaix53_alt/alt_inst/usr/lib/alt_mig/usr/ccs/lib
/lparaix53_alt/alt_inst        0.25   0.16   37%   7441   12% /lparaix53_alt/alt_inst/usr/lpp/bos/inst_root
/lparaix53_alt/alt_inst/var    0.25   0.15   42%   7792   12% /lparaix53_alt/alt_inst/usr/lpp/bos/inst_root/var
/lparaix53_alt/alt_inst/tmp    0.25   0.20   21%    225    1% /lparaix53_alt/alt_inst/usr/lpp/bos/inst_root/tmp
/lparaix53_alt/alt_inst/home   0.25   0.24    4%     25    1% /lparaix53_alt/alt_inst/usr/lpp/bos/inst_root/home
/lparaix53_alt/alt_inst/admin  0.25   0.24    4%     18    1% /lparaix53_alt/alt_inst/usr/lpp/bos/inst_root/admin
# lsvg -l nimadmvg
nimadmvg:
LV NAME    TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
loglv00    jfs2log    1       1       1    open/syncd    N/A
lv00       jfs        1       1       1    open/syncd    /lparaix53_alt/alt_inst
lv01       jfs        1       1       1    open/syncd    /lparaix53_alt/alt_inst/admin
lv02       jfs        1       1       1    open/syncd    /lparaix53_alt/alt_inst/home
lv03       jfs        3       3       1    open/syncd    /lparaix53_alt/alt_inst/opt
lv04       jfs        1       1       1    open/syncd    /lparaix53_alt/alt_inst/tmp
lv05       jfs        14      14      1    open/syncd    /lparaix53_alt/alt_inst/usr
lv06       jfs        1       1       1    open/syncd    /lparaix53_alt/alt_inst/var
lv07       jfs        1       1       1    open/syncd    /lparaix61_alt/alt_inst
lv08       jfs        1       1       1    open/syncd    /lparaix61_alt/alt_inst/admin
lv09       jfs        1       1       1    open/syncd    /lparaix61_alt/alt_inst/home
lv10       jfs        3       3       1    open/syncd    /lparaix61_alt/alt_inst/opt
lv11       jfs        1       1       1    open/syncd    /lparaix61_alt/alt_inst/tmp
lv12       jfs        17      17      1    open/syncd    /lparaix61_alt/alt_inst/usr
lv13       jfs        1       1       1    open/syncd    /lparaix61_alt/alt_inst/var

Each of my migrations took around 60 minutes to complete. By using nimadm, that was an hour of downtime I was able to avoid.

We can review the associated nimadm log files for each client to verify that the migration process was successful. You can also monitor a migration in progress by tailing each of the log files on the NIM master.

# cd /var/adm/ras/alt_mig
# ls -ltr *.log
total 7136
-rw-r--r--    1 root     system       420858 May 18 14:33 lparaix61_alt_mig.log
-rw-r--r--    1 root     system       429109 May 18 14:33 lparaix53_alt_mig.log
# tail -f lparaix53_alt_mig.log
 All rights reserved.
 US Government Users Restricted Rights - Use, duplication or disclosure
 restricted by GSA ADP Schedule Contract with IBM Corp.
. . . . . << End of copyright notice for bos.help.msg.en_US >>. . . .
Filesets processed:  566 of 2038  (Total time:  14 mins 7 secs).
installp:  APPLYING software for:
        bos.help.msg.en_US.com 7.1.1.0
# tail -f lparaix61_alt_mig.log
 All rights reserved.
 US Government Users Restricted Rights - Use, duplication or disclosure
 restricted by GSA ADP Schedule Contract with IBM Corp.
. . . . . << End of copyright notice for bos.msg.Ja_JP >>. . . .
Filesets processed:  418 of 2059  (Total time:  13 mins 17 secs).
installp:  APPLYING software for:
        bos.msg.JA_JP.rte 7.1.1.0

After the migration process is finished, the NIM master unmounts the cache file systems on the master and clients. We observe that the correct oslevel, 7100-01, is reported at the end of the migration.

lparaix53:

install_all_updates: Checking for recommended maintenance level 7100-01.
install_all_updates: Executing /usr/bin/oslevel -rf, Result = 7100-01
install_all_updates: Verification completed.
install_all_updates: Log file is /var/adm/ras/install_all_updates.log
install_all_updates: Result = SUCCESS
Known Recommended Maintenance Levels
...
Expanding /alt_inst/usr client filesystem.
Filesystem size changed to 8650752
Adjusting size for /var
Syncing cache data to client ...
+-----------------------------------------------------------------------------+
Executing nimadm phase 10.
+-----------------------------------------------------------------------------+
Unmounting client mounts on the NIM master.
forced unmount of /lparaix53_alt/alt_inst/var
forced unmount of /lparaix53_alt/alt_inst/usr
forced unmount of /lparaix53_alt/alt_inst/tmp
forced unmount of /lparaix53_alt/alt_inst/opt
forced unmount of /lparaix53_alt/alt_inst/home
forced unmount of /lparaix53_alt/alt_inst/admin
forced unmount of /lparaix53_alt/alt_inst
Removing nimadm cache file systems.
Removing cache file system /lparaix53_alt/alt_inst
Removing cache file system /lparaix53_alt/alt_inst/admin
Removing cache file system /lparaix53_alt/alt_inst/home
Removing cache file system /lparaix53_alt/alt_inst/opt
Removing cache file system /lparaix53_alt/alt_inst/tmp
Removing cache file system /lparaix53_alt/alt_inst/usr
Removing cache file system /lparaix53_alt/alt_inst/var
+-----------------------------------------------------------------------------+
Executing nimadm phase 11.
+-----------------------------------------------------------------------------+
Cloning altinst_rootvg on client, Phase 3.
Client alt_disk_install command: alt_disk_copy -j -M 7.1 -P3 -d "hdisk5"
## Phase 3 ###################
Verifying altinst_rootvg...
Modifying ODM on cloned disk.
forced unmount of /alt_inst/var
forced unmount of /alt_inst/usr
forced unmount of /alt_inst/tmp
forced unmount of /alt_inst/opt
forced unmount of /alt_inst/home
forced unmount of /alt_inst/admin
forced unmount of /alt_inst
Changing logical volume names in volume group descriptor area.
Fixing LV control blocks...
Fixing file system superblocks...
Bootlist is set to the boot disk: hdisk5 blv=hd5
+-----------------------------------------------------------------------------+
Executing nimadm phase 12.
+-----------------------------------------------------------------------------+
Cleaning up alt_disk_migration on the NIM master.
Cleaning up alt_disk_migration on client lparaix53.

lparaix61:

install_all_updates: Checking for recommended maintenance level 7100-01.
install_all_updates: Executing /usr/bin/oslevel -rf, Result = 7100-01
install_all_updates: Verification completed.
install_all_updates: Log file is /var/adm/ras/install_all_updates.log
install_all_updates: Result = SUCCESS
Known Recommended Maintenance Levels
...
Expanding /alt_inst/usr client filesystem.
Filesystem size changed to 8650752
Adjusting size for /var
Syncing cache data to client ...
+-----------------------------------------------------------------------------+
Executing nimadm phase 10.
+-----------------------------------------------------------------------------+
Unmounting client mounts on the NIM master.
forced unmount of /lparaix61_alt/alt_inst/var
forced unmount of /lparaix61_alt/alt_inst/usr
forced unmount of /lparaix61_alt/alt_inst/tmp
forced unmount of /lparaix61_alt/alt_inst/opt
forced unmount of /lparaix61_alt/alt_inst/home
forced unmount of /lparaix61_alt/alt_inst/admin
forced unmount of /lparaix61_alt/alt_inst
Removing cache file system /lparaix61_alt/alt_inst/admin
Removing cache file system /lparaix61_alt/alt_inst/home
Removing cache file system /lparaix61_alt/alt_inst/opt
Removing cache file system /lparaix61_alt/alt_inst/tmp
Removing cache file system /lparaix61_alt/alt_inst/usr
Removing cache file system /lparaix61_alt/alt_inst/var
+-----------------------------------------------------------------------------+
Executing nimadm phase 11.
+-----------------------------------------------------------------------------+
Cloning altinst_rootvg on client, Phase 3.
Client alt_disk_install command: alt_disk_copy -j -M 7.1 -P3 -d "hdisk5"
## Phase 3 ###################
Verifying altinst_rootvg...
Modifying ODM on cloned disk.
forced unmount of /alt_inst/var
forced unmount of /alt_inst/var
forced unmount of /alt_inst/usr
forced unmount of /alt_inst/usr
forced unmount of /alt_inst/tmp
forced unmount of /alt_inst/tmp
forced unmount of /alt_inst/opt
forced unmount of /alt_inst/opt
forced unmount of /alt_inst/home
forced unmount of /alt_inst/home
forced unmount of /alt_inst/admin
forced unmount of /alt_inst/admin
forced unmount of /alt_inst
forced unmount of /alt_inst
Changing logical volume names in volume group descriptor area.
Fixing LV control blocks...
Fixing file system superblocks...
Bootlist is set to the boot disk: hdisk5 blv=hd5
+-----------------------------------------------------------------------------+
Executing nimadm phase 12.
+-----------------------------------------------------------------------------+
Cleaning up alt_disk_migration on the NIM master.
Cleaning up alt_disk_migration on client lparaix61.

At this point, we restart each of the NIM clients into AIX 7.1. You’ll observe that the bootlist for each client was modified by nimadm so that the alternate rootvg is now the only disk in the boot list. With the clients successfully restarted on 7.1, the migration is complete.
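
Before the reboot, it is worth confirming the boot list yourself, and after the restart, checking the new oslevel. A sketch for the 5.3 client; the bootlist output should match the "Bootlist is set to the boot disk" message shown earlier, and oslevel should now report the level of the lpp_source we migrated to:

root@lparaix53[/] > bootlist -m normal -o
hdisk5 blv=hd5
root@lparaix53[/] > shutdown -Fr
... after the reboot ...
root@lparaix53[/] > oslevel -s
7100-01-04-1216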

AIX tunables post migration

After an AIX migration, I usually like to run the tuncheck command to verify that the current tunable parameters are valid. One area that can indicate a tuning problem is the AIX error report. If you see the following messages in the errpt output, you might want to verify that the current settings are valid:

IDENTIFIER TIMESTAMP  T C RESOURCE_NAME  DESCRIPTION
D221BD55   0523115112 I O perftune       RESTRICTED TUNABLES MODIFIED AT REBOOT
---------------------------------------------------------------------------
LABEL:          TUNE_RESTRICTED
IDENTIFIER:     D221BD55
Date/Time:       Wed May 23 11:51:16 EET 2012
Sequence Number: 676
Machine Id:      00C342C64C00
Node Id:         lparaix53
Class:           O
Type:            INFO
WPAR:            Global
Resource Name:   perftune
Description
RESTRICTED TUNABLES MODIFIED AT REBOOT
Probable Causes
SYSTEM TUNING
User Causes
TUNABLE PARAMETER OF TYPE RESTRICTED HAS BEEN MODIFIED
        Recommended Actions
        REVIEW TUNABLE LISTS IN DETAILED DATA
Detail Data
LIST OF TUNABLE COMMANDS CONTROLLING MODIFIED RESTRICTED TUNABLES AT REBOOT,
SEE FILE /etc/tunables/lastboot.log
vmo

In the output above, you’ll notice that we are advised to check /etc/tunables/lastboot.log for a modified restricted vmo tuning parameter. At this point, I usually like to run the tuncheck command against the current /etc/tunables/nextboot file and review its output. As you can see in the example below, we are warned that several restricted tunables are not set to their default values. These values might not be appropriate for your newly migrated AIX 7.1 (or 6.1) system. Settings that worked well with 5.3 are most likely no longer appropriate with 7.1.

# tuncheck -p -f /etc/tunables/nextboot
Warning: restricted tunable lrubucket is not at default value
Warning: restricted tunable strict_maxperm is not at default value
Warning: unknown parameter lru_file_repage in stanza vmo
Warning: restricted tunable maxperm% is not at default value
Warning: restricted tunable maxclient% is not at default value
Checking successful

Based on the output above, the tuning for this newly migrated 7.1 system appears to be inappropriate. Unless we have a valid reason to keep them (one that has been verified by IBM AIX support), we should set these tunables to their default AIX 7.1 settings.
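
To quickly spot which vmo tunables differ from their defaults, you can compare the current and default columns of the spreadsheet-format output. A sketch (vmo -x prints comma-separated fields starting with name, current value, and default value; -F forces restricted tunables to be displayed):

# vmo -F -x | awk -F, '$2 != $3 {print $1": current="$2", default="$3}'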

You can reset individual tunables to their defaults using the -d flag with the corresponding tuning command. For example, to set the maxperm% tunable to its default, you would run the following vmo command:

# vmo -p -d maxperm%
Modification to restricted tunable maxperm%, confirmation required yes/no yes
Setting maxperm% to 90 in nextboot file
Setting maxperm% to 90
Warning: a restricted tunable has been modified

If you want to set all of the vmo tunables back to their defaults, you would run the following vmo command with the -D option:

# vmo -r -D
Setting maxfree to 1088 in nextboot file
Setting minfree to 960 in nextboot file
Setting minperm% to 3 in nextboot file
Modification to restricted tunable maxperm%, confirmation required yes/no yes
Setting maxperm% to 90 in nextboot file
Modification to restricted tunable strict_maxperm, confirmation required yes/no yes
Setting strict_maxperm to 0 in nextboot file
Setting maxpin% to 80 in nextboot file
...etc...
Warning: some changes will take effect only after a bosboot and a reboot
Run bosboot now? yes/no yes
bosboot: Boot image is 47016 512 byte blocks.

Note that setting all parameters to their defaults requires the bosboot command to be run and a reboot of the system for the changes to take effect.
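
If you answered no to the bosboot prompt, you can run it manually before the scheduled reboot. For example:

# bosboot -ad /dev/ipldevice
# shutdown -Fr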

The tundefault command can also reset tuning to the default parameters. The command launches all of the tuning commands (ioo, vmo, schedo, no, nfso, and raso) with the -D flag, which resets all of the AIX tunable parameters to their default values. The -r flag defers the reset to the next reboot. This clears the stanza(s) in the /etc/tunables/nextboot file, runs bosboot if necessary, and warns that a reboot is needed.

# tundefault -r

After the tunables have been reset, re-run the tuncheck command and ensure it runs without errors:

# tuncheck -p -f /etc/tunables/nextboot
Checking successful

You can verify when a system’s tunables were last checked (with the tuncheck command) by reviewing the info stanza in the /etc/tunables/nextboot file. For example, the nextboot file (below) was last checked on 7 June 2012, and the system was running AIX 7.1.

# IBM_PROLOG_BEGIN_TAG
# This is an automatically generated prolog.
#
# bos520 src/bos/usr/sbin/perf/tune/nextboot 1.1
#
# Licensed Materials - Property of IBM
#
# (C) COPYRIGHT International Business Machines Corp. 2002
# All Rights Reserved
#
# US Government Users Restricted Rights - Use, duplication or
# disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
#
# IBM_PROLOG_END_TAG
info:
        AIX_level = "7.1.1.1"
        Kernel_type = "MP64"
        Last_validation = "2012-06-07 12:29:54 EETDT (current, reboot)"
vmo:
nfso:
        portcheck = "1"
        nfs_use_reserved_ports = "1"
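
To pull out just the validation stamp, a simple grep will do:

# grep Last_validation /etc/tunables/nextboot
        Last_validation = "2012-06-07 12:29:54 EETDT (current, reboot)"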

Unless you’ve permanently set restricted tunables in your /etc/tunables/nextboot file, the migration will change the system’s default tuning to match the newer version of AIX. For example, we observed the following tuning changes on our AIX 5.3 system after migrating to 7.1.

  • The maxperm default value changed from 80 to 90:

    maxperm% 80 80 80 1 100 % memory D
    maxperm% 90 90 90 1 100 % memory D
    
  • The minperm default value changed from 20 to 3:

    minperm% 20 20 20 1 100 % memory D
    minperm% 3 3 3 1 100 % memory D
    
  • Note that with AIX 7.1, lru_file_repage is hardcoded to 0 and removed from the list of vmo tunables. Please refer to the following document for more information.

    Oracle Architecture and Tuning on AIX v2.20

  • As expected, we noted a number of new tunables with AIX 7.1. Some examples are shown below.

    vmm_klock_mode 2 2 2 0 3 numeric B
    j2_inodeCacheSize 200 200 200 1 1000 D
    

To learn more about these new tunables, we can run the corresponding tuning utility with the -h flag to obtain usage information.

# vmo -h vmm_klock_mode

Help for tunable vmm_klock_mode:

Purpose:

Kernel locking prevents paging out kernel data. This improves system performance in many cases. If set to 0, kernel locking is disabled. If set to 1, kernel locking is enabled automatically if Active Memory Expansion (AME) feature is also enabled. In this mode, only a subset of kernel memory is locked. If set to 2, kernel locking is enabled regardless of AME and all of kernel data is eligible for locking. If set to 3, only the kernel stacks of processes are locked in memory. Enabling kernel locking has the most positive impact on performance of systems that do paging but not enough to page out kernel data or on systems that do not do paging activity at all. Note that 1, 2, and 3 are only advisory. If a system runs low on free memory and performs extensive paging activity, kernel locking is rendered ineffective by paging out kernel data. Kernel locking only has an impact on pageable page-sizes in the system.

Values:

Default: 2

Range: 0 - 3

Type: Bosboot

Unit: numeric

Tuning:

If processes are being delayed waiting for compressed memory to become available, increase ame_minfree_mem to improve response time. Note, this must be at least 257kb less than ame_maxfree_mem.

  • We also noticed a new subsystem named aso. The Active System Optimizer (ASO) is a new AIX 7.1 feature (on POWER7 only) that autonomously tunes the allocation of system resources to improve performance.

    # lsitab -a | grep aso
    aso:23456789:once:/usr/bin/startsrc -s aso -e "NORMAL_RESPAWN_NOLOG"
    
  • New network (no) tuning options also appeared after the migration. The example below relates to TCP tuning for loopback network access.

    # no -a | grep tcp_fast
                   tcp_fastlo = 0
         tcp_fastlo_crosswpar = 0
    
    # no -h tcp_fastlo
    

Help for tunable tcp_fastlo:

Purpose:

Specifies whether TCP fastpath loopback is enabled (1) or disabled (0).

Values:

Default: 0

Range: 0 - 1

Type: Connect

Unit: boolean

Tuning:

This option allows the TCP loopback traffic to shortcut the entire TCP/IP stack (protocol and interface) in order to achieve better performances.

# no -h tcp_fastlo_crosswpar

Help for tunable tcp_fastlo_crosswpar:

Purpose:

Specifies whether TCP fastpath loopback between WPARs of a system is allowed (1) or forbidden (0).

Values:

Default: 0

Range: 0 - 1

Type: Connect

Unit: boolean

Tuning:

This option is valid only if TCP fastpath loopback is enabled (with tcp_fastlo option).
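
If, after testing, you decide to enable TCP fastpath loopback, the tunables can be set persistently with the no command. This is an example only; evaluate the change on a test system first:

# no -p -o tcp_fastlo=1
# no -p -o tcp_fastlo_crosswpar=1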

I highly recommend that you refer to the IBM AIX Version 7.1 Differences Guide (an IBM Redbooks publication) for more information on the new features of the AIX 7.1 OS.

Considerations when migrating to AIX 7.1 (or 6.1)

  • Do you have JFS file systems in rootvg? If you do, be aware that nimadm does not convert rootvg file systems from JFS to JFS2. I have requested that IBM include this feature in nimadm in the future. Starting with AIX 6.1 TL4, the alt_disk_copy utility has a new -T flag to convert rootvg file systems to JFS2 during the cloning process. Unfortunately, nimadm does not currently call this flag with alt_disk_copy.

    If you are already running AIX 6.1 TL4 or higher, you can use alt_disk_copy -T to convert the rootvg file systems to JFS2 first and then use nimadm to migrate to AIX 7.1.

    If you are on AIX 5.3, the alt_disk_copy command does not have the -T flag. In this case, you might want to migrate to AIX 7.1 (or 6.1) with nimadm first and then use the alt_disk_copy -T command to convert the rootvg file systems to JFS2, as shown in the sketch below.
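
    On a 6.1 TL4 (or later) system, the conversion step might look like the following sketch. Here hdisk5 is the spare disk from my environment, and the -B flag prevents the boot list from being changed:

    # alt_disk_copy -d hdisk5 -T -B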

  • You might need to upgrade your MPIO device driver software. For example, if you are using IBM SDDPCM, then you will need to uninstall the previous version of SDDPCM and then install the correct version of the software for your new version of AIX. You might be able to use pre and post-migration scripts to include this update as part of the overall AIX migration process. Please refer to the following link for an example.

    Using a post migration script with nimadm
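
    As an illustration, a post-migration script can be defined as a NIM script resource and passed to nimadm with its -z flag. The script name and location below are hypothetical:

    # nim -o define -t script -a server=master \
        -a location=/export/scripts/post_mig.sh post_mig_script
    # nimadm -j nimadmvg -c lparaix53 -s spotaix710104 \
        -l lpp_sourceaix710104 -d hdisk5 -z post_mig_script -Y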

  • As discussed in my previous nimadm article, multibos is not supported in nimadm environments. Before you start a nimadm migration make sure you have removed any old standby BOS instance and that your rootvg logical volumes are not using any bos_ LV names.

  • During our tests, we found that even though we removed the standby instance (multibos -R), the nimadm process failed with the following error:

    +-----------------------------------------------------------------------------+
    Executing nimadm phase 11.
    +-----------------------------------------------------------------------------+
    Cloning altinst_rootvg on client, Phase 3.
    Client alt_disk_install command:
     alt_disk_copy -j -M 7.1 -P3 -d "hdisk1"
    ## Phase 3 ###################
    Verifying altinst_rootvg...
    alt_disk_copy: 0505-218 ATTENTION: init_multibos()
    returned an unexpected result.
    Cleaning up.
    forced unmount of /alt_inst/var/log
    forced unmount of /alt_inst/var/log
    forced unmount of /alt_inst/var
    forced unmount of /alt_inst/var
    forced unmount of /alt_inst/usr/local
    forced unmount of /alt_inst/usr/local
    forced unmount of /alt_inst/usr
    forced unmount of /alt_inst/usr
    forced unmount of /alt_inst/tmp
    forced unmount of /alt_inst/tmp
    forced unmount of /alt_inst/opt
    forced unmount of /alt_inst/opt
    forced unmount of /alt_inst/home
    forced unmount of /alt_inst/home
    forced unmount of /alt_inst/admin
    forced unmount of /alt_inst/admin
    forced unmount of /alt_inst
    forced unmount of /alt_inst
    0505-187 nimadm: Error cloning altinst_rootvg on client.
    Cleaning up alt_disk_migration on the NIM master.
    Cleaning up alt_disk_migration on client lpar1.
    Client alt_disk_install command: alt_disk_install -M 7.1 -X
    Bootlist is set to the boot disk: hdisk0 blv=hd5
    

    We also found the init_multibos error in the /var/adm/ras/alt_disk_inst.log file on the NIM client:

    Tue Nov 22 14:48:12 EETDT 2011
    cmd: /ALT_MIG_SPOT/sbin/alt_disk_copy -j -M 7.1 -P3 -d hdisk1
    Verifying altinst_rootvg...
    alt_disk_copy: 0505-218 ATTENTION: init_multibos() returned an unexpected result.
    Cleaning up.
    

    Given that the error appeared to be related to init_multibos, we assumed the failure was due to some multibos checks being performed by alt_disk_copy on the client. The client system did not have an existing multibos standby instance. So, we tried two things: first, we created a standby instance on the client (multibos -s -X) and retried the nimadm operation. This failed. Next, we removed the standby instance (multibos -R) and retried the nimadm operation. This worked, and the client then migrated to AIX 7.1 successfully. We retried the same operations (that is, create standby instance, remove standby instance, and nimadm) several times, and each worked as expected.

    Unfortunately, it appears that multibos -R may not clean up the /bos_inst directory. If this directory exists, the nimadm operation will most likely fail. The simple fix (in our case) was to remove the /bos_inst directory before attempting the AIX migration.

    # rm -r /bos_inst
    
  • If you plan on using your AIX 7.1 NIM master to migrate your AIX 5.3 clients to AIX 6.1, then make sure that you install the AIX 7.1 bos.alt_disk_install.rte fileset into the AIX 6.1 SPOT resource first. Failure to do so will result in your nimadm operation reporting the following error message:

    # nimadm -j nimadmvg -c lparaix53 -s spotaix610605_NEW \
        -l lpp_sourceaix610605_NEW -d hdisk2 -Y
    Initializing the NIM master.
    Initializing NIM client lparaix53.
    0042-001 nim: processing error encountered on "master":
       /usr/bin/lslpp: Fileset bos.alt_disk_install.rte not installed.
    0505-204 nimadm: SPOT spotaix610605_NEW does not have
    bos.alt_disk_install.rte installed.
    0505-205 nimadm: The level of bos.alt_disk_install.rte installed in SPOT
    spotaix610605_NEW (0.0.0.0) does not match the NIM master's level (7.1.1.0).
    Cleaning up alt_disk_migration on the NIM master.
    

    You must install the AIX 7.1 bos.alt_disk_install.rte fileset into your AIX 6.1 SPOT resource.

    # smit nim_res_op
    ....etc...
    > spotaix610605
    Customize a SPOT
    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.
                                                         [Entry Fields]
    * Resource Name                                       spotaix610605
    * Source of Install Images                           [lpp_sourceaix710104] +
      Fileset Names                                      [bos.alt_disk_install.rte]
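
    The same SPOT customization can be performed from the command line along these lines:

    # nim -o cust -a lpp_source=lpp_sourceaix710104 \
        -a filesets=bos.alt_disk_install.rte spotaix610605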
    

    You can verify the correct fileset is installed in your 6.1 SPOT using the following nim command:

    # nim -o showres spotaix610605 | grep bos.alt_disk_install.rte
      bos.alt_disk_install.rte  7.1.1.15    A     F    Alternate Disk Installation
    
  • Some administrators migrating from AIX 5.3 to 6.1 reported issues with certain device filesets left behind after the migration. They identified the issue when the lppchk command returned errors after migration. To resolve the problem, IBM support advised that certain filesets be uninstalled and that several entries be deleted from the ODM. Please refer to the following link for further information.

  • Another 6.1-related issue we discovered was that, after migrating, the sys0 maxuproc attribute was returned to its default value. This resulted in performance issues and application crashes. Check your maxuproc value before and after the migration and ensure that it has not changed (see the sketch below). We did not experience this issue with AIX 7.1 migrations. Please refer to the following blog post for more information.

    AIX 6.1 migration: iostat and maxuproc change to their defaults?
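
    A quick before-and-after check might look like this; 16384 is just an example value, so use whatever your applications require:

    # lsattr -El sys0 -a maxuproc
    # chdev -l sys0 -a maxuproc=16384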

  • Read the latest AIX 7.1 installation tips document when planning your migrations. It can help you resolve and/or avoid issues before, during, or after the upgrade. For example:

    7100-01 Installation

    When applying the 7100-01 Technology Level with Service Pack 1 included, you will have to run smitty update_all a second time to update bos.aso and mcr.rte. Until this is done, the oslevel command will not indicate the correct level.

  • I recommend you migrate to the latest TL and SP for AIX 7.1 (or 6.1) to avoid known issues. For example, SP3 for 7.1 TL1 is vulnerable to an issue where the netstat -f command can crash an LPAR. SP4 for 7.1 TL1 contains APAR IV09942, which resolves this problem. We hit this problem during our migrations: one of our system monitoring tools was regularly running the netstat command, which resulted in a number of unwanted system outages on some systems.
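
    You can check whether a given APAR is already installed with the instfix command, for example:

    # instfix -ik IV09942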

  • Customers using IBM DB2® Versions 8.2, 9.1, 9.5, or 9.7 should apply the following ifixes when applying SP4 for AIX 7.1 TL1 or AIX 6.1 TL7: APAR IV22132 for AIX 7.1 TL1 SP4 and APAR IV22062 for AIX 6.1 TL7 SP4. Please refer to the following website for further information:

    Known issues for DB2 for Linux, UNIX and Windows on AIX 5.2, 5.3, 6.1, and 7.1
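
    Interim fixes that are already installed on a system can be listed with the emgr command:

    # emgr -l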

  • Before applying AIX 7.1 TL1 SP4 to your system, make sure you have removed any legacy tuning from the system. During our tests, we discovered that if we left legacy AIX 5.2 or 5.3 tuning in place, the LPAR would hang during restart at “Setting tunable parameters”. Once the following entries were removed from our nextboot file, the system started OK. This issue only appeared with systems migrated to SP4; systems on AIX 7.1 TL1 SP3 did not exhibit the problem. However, we still removed the legacy tuning, as it was inappropriate for 7.1.

    info:
            AIX_level = "5.2.0.31"
            Kernel_type = "MP64"
            Last_validation = "2004-12-08 13:48:10 EETDT (current, reboot)"
    vmo:
            lrubucket = "262144"
            strict_maxclient = "1"
            strict_maxperm = "1"
            lru_file_repage = "0"
            minperm% = "2"
            maxperm% = "5"
            maxclient% = "5"
            maxfree = "1200"
    ioo:
            aio_maxreqs = "12288"
            j2_maxPageReadAhead = "512"
            maxpgahead = "8"
    no:
            use_isno = "0"
            udp_sendspace = "65536"
            udp_recvspace = "65536"
            tcp_sendspace = "262144"
            tcp_recvspace = "262144"
            rfc132
    

    You might want to simply move the existing nextboot file “out of the way” prior to migrating, and then review the file post-migration. For example:

    # mv /etc/tunables/nextboot /etc/tunables/nextboot.old
    
  • For special considerations for SSH host keys and AIX migrations, refer to this blog.

Note: As of AIX Version 7.1, the 32-bit kernel has been deprecated. Therefore, 64-bit hardware is required to run AIX Version 7.1 (IBM POWER4™, POWER5™, POWER6™ or POWER7 systems only).

Help: Migrating to AIX Version 7.1

Resources

Chris Gibson
