Say you change the queue_depth on an hdisk with chdev -P. This updates the device's ODM information only, not its running configuration. The new value will take effect the next time I reboot the system. So now I have a different queue_depth in the ODM compared to the device's current running configuration (in the kernel).

What if I forget that I've made this change to the ODM and the system isn't rebooted for many months? Then someone complains of an I/O performance issue. I check the queue_depths and find they appear to be set appropriately, but I still see queue-full conditions on my hdisks. Have I rebooted since changing the values?

How do I know if the ODM matches the device's running configuration?

For example, I start with a queue_depth of 3, which is confirmed by looking at lsattr (ODM) and kdb (running config) output:

# lsattr -El hdisk6 -a queue_depth

queue_depth 3 Queue DEPTH True

# echo scsidisk hdisk6 | kdb | grep queue_depth

ushort queue_depth = 0x3;    <-- in hex.

Now I change the queue_depth using chdev -P, i.e. updating only the ODM.

# chdev -l hdisk6 -a queue_depth=256 -P

hdisk6 changed

# lsattr -El hdisk6 -a queue_depth

queue_depth 256 Queue DEPTH True

kdb reports that the disk's running configuration still has a queue_depth of 3.

# echo scsidisk hdisk6 | kdb | grep queue_depth

ushort queue_depth = 0x3;
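
To make the change live without a reboot, the disk has to be free: chdev without -P on a busy disk typically fails with a device-busy error along these lines (the exact text may vary by AIX level):

# chdev -l hdisk6 -a queue_depth=256
Method error (/usr/lib/methods/chgdisk):
        0514-062 Cannot perform the requested function because the
                 specified device is busy.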

Now, if I vary off the VG and change the disk's queue_depth (this time without -P), both lsattr (ODM) and kdb (the running config) show the same value:

# umount /test

# varyoffvg testvg

# chdev -l hdisk6 -a queue_depth=256

hdisk6 changed

# varyonvg testvg

# mount /test

# lsattr -El hdisk6 -a queue_depth

queue_depth 256 Queue DEPTH True

# echo scsidisk hdisk6 | kdb | grep queue_depth

ushort queue_depth = 0x100;    <-- in hex; 0x100 hex = 256 decimal.

# echo "ibase=16 ; 100" | bc

256
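
As an aside, printf should be able to do the same conversion, since it evaluates numeric arguments as C constants:

# printf "%d\n" 0x100
256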

This is one way of checking whether the running configuration matches the ODM, i.e. whether your queue_depth changes have actually taken effect since you made them. I've tried this on AIX 6.1 and 7.1 only.
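
To automate the check across every disk, here's a rough ksh sketch. It assumes the kdb scsidisk output format shown above and must run as root; treat it as a starting point rather than a polished tool:

#!/usr/bin/ksh
# Compare the ODM queue_depth with the running (kernel) value for
# every disk. Assumes the kdb output format shown above; run as root.
for d in $(lsdev -Cc disk -F name)
do
    odm=$(lsattr -El $d -a queue_depth -F value)
    hex=$(echo scsidisk $d | kdb 2>/dev/null | \
          awk '/queue_depth/ {gsub(";","",$NF); print $NF; exit}')
    run=$(printf "%d" $hex)
    if [ "$odm" != "$run" ]
    then
        echo "$d: ODM=$odm running=$run    <-- mismatch"
    else
        echo "$d: ODM=$odm running=$run    OK"
    fi
done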

http://publib.boulder.ibm.com/infocenter/aix/v7r1/topic/com.ibm.aix.kdb/doc/kdb/kdb_pdf.pdf