I finally got access to a system where I could test RootVG WPARs a little more extensively than I have in the past.

If you are not sure what a RootVG WPAR is, I suggest you take a look at the following IBM documentation:

http://publib.boulder.ibm.com/infocenter/aix/v6r1/topic/com.ibm.aix.wpar/rootvg-wpars.htm

I performed the following test today, just to prove a theory. The result was interesting, so I thought I would share it with you on my blog.

First, I started with an AIX 7.1 (7100-00-01-1037) system that had one spare vscsi disk, hdisk5.

Global# lspv
hdisk4          00f6050a2cd79ef8                    rootvg          active
hdisk5          00f602736c1a2ad8                    None

I then created a volume group, logical volume, and file system on hdisk5.

Global# mkvg -fy cgvg hdisk5
cgvg
Global# mklv -t jfs2 -ycglv cgvg 1
cglv
Global# crfs -vjfs2 -Ayes -d cglv -m /cg
File system created successfully.
65328 kilobytes total disk space.
New File System size is 131072
Global# mount /cg
Global# df -m /cg
Filesystem    MB blocks      Free %Used    Iused %Iused Mounted on
/dev/cglv         64.00     63.67    1%        4     1% /cg

Global# lsvg -l cgvg
cgvg:
LV NAME             TYPE       LPs     PPs     PVs    LV STATE      MOUNT POINT
cglv                jfs2       1       1       1      open/syncd    /cg
loglv00             jfs2log    1       1       1      open/syncd    N/A

I then attempted to create a RootVG WPAR on hdisk5, expecting (and hoping) this would fail, as I still had an active file system in the volume group.

It failed, stating it was unable to overwrite hdisk5. Good.

Global# lswpar
Global# mkwpar -D rootvg=yes devname=hdisk5 -n rootvgwpar1
Creating workload partition's rootvg. Please wait...
mkwpar: 0960-621 Failed to create a workload partition's rootvg. Please use -O flag to overwrite hdisk5.
        If restoring a workload partition, target disks should be in available state.

So I tried the command again, this time with the -O flag, as suggested in the error message. It failed again, stating that it could not remove cgvg from hdisk5. This was also good. At this point, an experienced AIX admin would check for the existence of a volume group on hdisk5. However, a junior AIX admin might not! :)

Global# mkwpar -O -D rootvg=yes devname=hdisk5 -n rootvgwpar1
mkwpar: 0960-620 Failed to remove cgvg on disk hdisk5.

Global# lsvg -l cgvg
cgvg:
LV NAME             TYPE       LPs     PPs     PVs    LV STATE      MOUNT POINT
cglv                jfs2       1       1       1      open/syncd    /cg
loglv00             jfs2log    1       1       1      open/syncd    N/A

But what would happen if the file systems were already unmounted before running mkwpar with the -O flag?

So, to test my theory, I unmounted my file system so that the logical volumes in cgvg were all now closed.

Global# umount /cg
Global# lsvg -l cgvg
cgvg:
LV NAME             TYPE       LPs     PPs     PVs    LV STATE        MOUNT POINT
cglv                jfs2       1       1       1      closed/syncd    /cg
loglv00             jfs2log    1       1       1      closed/syncd    N/A
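The open/closed distinction in the LV STATE column turns out to be what matters here. As an aside, you could detect this state yourself before doing anything destructive. The following is a minimal sketch, not any IBM-supplied tooling: a hypothetical helper that parses captured `lsvg -l` output and reports whether any logical volume is still open.

```shell
#!/bin/sh
# vg_has_open_lvs: hypothetical helper, not part of AIX. Returns 0 (true)
# if any LV in the supplied `lsvg -l` output is in an open state.
vg_has_open_lvs() {
    lsvg_output="$1"    # in real use you would capture: lsvg_output=$(lsvg -l cgvg)
    # Field 6 of each data row is the LV STATE (e.g. open/syncd, closed/syncd).
    printf '%s\n' "$lsvg_output" | awk '$6 ~ /^open/ { found = 1 } END { exit !found }'
}

# Example using the closed/syncd state captured above:
closed="cgvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
cglv jfs2 1 1 1 closed/syncd /cg
loglv00 jfs2log 1 1 1 closed/syncd N/A"

if vg_has_open_lvs "$closed"; then
    echo "open LVs present - mkwpar -O should refuse"
else
    echo "all LVs closed - mkwpar -O would destroy this VG"
fi
```

With all LVs closed, the helper reports that the volume group is in exactly the state where mkwpar -O will silently remove it.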

This time the mkwpar command (with -O) happily blew away my logical volume, file system and volume group! Whoops! :)

Global# mkwpar -O -D rootvg=yes devname=hdisk5 -n rootvgwpar1
Creating workload partition's rootvg. Please wait...
mkwpar: Creating file systems...
 /
 /admin
...etc...
devices.common.IBM.scsi.rte    7.1.0.0    ROOT    APPLY    SUCCESS
devices.fcp.disk.array.rte     7.1.0.0    ROOT    APPLY    SUCCESS
devices.fcp.disk.rte           7.1.0.0    ROOT    APPLY    SUCCESS
devices.fcp.tape.rte           7.1.0.0    ROOT    APPLY    SUCCESS
devices.scsi.disk.rte          7.1.0.0    ROOT    APPLY    SUCCESS
devices.tty.rte                7.1.0.0    ROOT    APPLY    SUCCESS
bos.mp64                       7.1.0.0    ROOT    APPLY    SUCCESS
bos.mp64                       7.1.0.1    ROOT    APPLY    SUCCESS
bos.net.tcp.client             7.1.0.0    ROOT    APPLY    SUCCESS
bos.net.tcp.client             7.1.0.1    ROOT    APPLY    SUCCESS
bos.perf.tune                  7.1.0.0    ROOT    APPLY    SUCCESS
perfagent.tools                7.1.0.0    ROOT    APPLY    SUCCESS
bos.net.nfs.client             7.1.0.0    ROOT    APPLY    SUCCESS
bos.wpars                      7.1.0.0    ROOT    APPLY    SUCCESS
bos.wpars                      7.1.0.1    ROOT    APPLY    SUCCESS
bos.net.ncs                    7.1.0.0    ROOT    APPLY    SUCCESS
wio.common                     7.1.0.0    ROOT    APPLY    SUCCESS
Finished populating scratch file systems.
Workload partition rootvgwpar1 created successfully.
mkwpar: 0960-390 To start the workload partition, execute the following as root: startwpar [-v] rootvgwpar1

Global# lsvg
rootvg
Global# lspv
hdisk4          00f6050a2cd79ef8                    rootvg          active
hdisk5          00f602736c1a2ad8                    None

Global# lswpar -D
Name         Device Name       Type     Virtual Device   RootVG   Status
-----------------------------------------------------------------------
rootvgwpar1  /dev/null         pseudo                             EXPORTED
rootvgwpar1  /dev/tty          pseudo                             EXPORTED
rootvgwpar1  /dev/console      pseudo                             EXPORTED
rootvgwpar1  /dev/zero         pseudo                             EXPORTED
rootvgwpar1  /dev/clone        pseudo                             EXPORTED
rootvgwpar1  /dev/sad          clone                              EXPORTED
rootvgwpar1  /dev/xti/tcp      clone                              EXPORTED
rootvgwpar1  /dev/xti/tcp6     clone                              EXPORTED
rootvgwpar1  /dev/xti/udp      clone                              EXPORTED
rootvgwpar1  /dev/xti/udp6     clone                              EXPORTED
rootvgwpar1  /dev/xti/unixdg   clone                              EXPORTED
rootvgwpar1  /dev/xti/unixst   clone                              EXPORTED
rootvgwpar1  /dev/error        pseudo                             EXPORTED
rootvgwpar1  /dev/errorctl     pseudo                             EXPORTED
rootvgwpar1  /dev/audit        pseudo                             EXPORTED
rootvgwpar1  /dev/nvram        pseudo                             EXPORTED
rootvgwpar1  /dev/kmem         pseudo                             EXPORTED
rootvgwpar1  hdisk5            disk     hdisk0           yes      EXPORTED

Global# startwpar -v rootvgwpar1
Starting workload partition rootvgwpar1.
Mounting all workload partition file systems.
Mounting /wpars/rootvgwpar1
Mounting /wpars/rootvgwpar1/etc/objrepos/wboot
Mounting /wpars/rootvgwpar1/opt
Mounting /wpars/rootvgwpar1/usr
Loading workload partition.
Exporting workload partition devices.
hdisk5 Defined
Exporting workload partition kernel extensions.
Starting workload partition subsystem cor_rootvgwpar1.
0513-059 The cor_rootvgwpar1 Subsystem has been started. Subsystem PID is 2818228.
Verifying workload partition startup.
Return Status = SUCCESS.

Global# lswpar
Name         State  Type  Hostname     Directory           RootVG WPAR
-----------------------------------------------------------------------
rootvgwpar1  A      S     rootvgwpar1  /wpars/rootvgwpar1  yes

Global# clogin rootvgwpar1
rootvgwpar1# lspv
hdisk0          00f602736c1a2ad8                    rootvg          active
rootvgwpar1# lsvg -l rootvg
rootvg:
LV NAME             TYPE       LPs     PPs     PVs    LV STATE      MOUNT POINT
hd4                 jfs2       2       2       1      open/syncd    /
hd11admin           jfs2       1       1       1      open/syncd    /admin
hd1                 jfs2       1       1       1      open/syncd    /home
hd3                 jfs2       2       2       1      open/syncd    /tmp
hd9var              jfs2       2       2       1      open/syncd    /var

This could be a trap for first-time users of RootVG WPARs, so look out! :)
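One way to avoid the trap is a pre-flight check that refuses to run mkwpar -O while lspv still shows a volume group on the target disk. The sketch below is entirely hypothetical (neither the function name nor the policy comes from IBM); it assumes lspv's usual column layout, where the third field is the volume group name or None.

```shell
#!/bin/sh
# check_disk_free: hypothetical pre-flight check before `mkwpar -O`.
# Succeeds only if the named hdisk shows no volume group in lspv output.
check_disk_free() {
    disk="$1"
    lspv_output="$2"   # in real use you would capture: lspv_output=$(lspv)
    vg=$(printf '%s\n' "$lspv_output" | awk -v d="$disk" '$1 == d { print $3 }')
    if [ -z "$vg" ]; then
        echo "ERROR: $disk not found in lspv output"
        return 2
    fi
    if [ "$vg" != "None" ]; then
        echo "ERROR: $disk still belongs to volume group $vg - not overwriting"
        return 1
    fi
    echo "OK: $disk has no volume group"
}

# Example using the lspv output captured earlier:
sample="hdisk4 00f6050a2cd79ef8 rootvg active
hdisk5 00f602736c1a2ad8 None"

check_disk_free hdisk5 "$sample"            # prints: OK: hdisk5 has no volume group
check_disk_free hdisk4 "$sample" || true    # refuses: hdisk4 still belongs to rootvg
```

Only when the check passes would a wrapper script go on to run mkwpar -O. Note this catches the VG's existence regardless of whether its logical volumes are open or closed, which is exactly the case mkwpar -O misses.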

Apparently this is working as designed: the -O flag is really only meant to be used by WPAR management tools such as the WPAR Manager.

The man page for mkwpar states:

-O This flag is used to force the overwrite of an existing volume group on the given set of devices specified with the -D rootvg=yes flag directive. If not specified, the overwrite value defaults to FALSE. This flag should only be specified once, for its setting will be applied to all devices specified with the -D rootvg=yes flag directive.

Please take care when using the -O flag with the mkwpar command.