Understanding viostat -adapter vfchost output

The first time I ran the 'viostat -adapter' command I expected to find non-zero values for kbps, tps, etc. for each vfchost adapter. However, the values were always zero, no matter how much traffic traversed the adapters:

$ viostat -adapter 1 1
[output truncated: the vadapter entries for each vfchost all reported zero values]

I wondered if this was expected behaviour. Was the output supposed to report the amount of pass-thru traffic per vfchost? In 2011, I posed this question on the IBM developerWorks PowerVM forum. One of the replies stated:
"viostat does not give statistics for NPIV devices. The vfchost adapter is just a passthru, it doesn't know what the commands it gets are." http
I appreciated someone taking the time to answer my question, but I was still curious. I tested the same command again (in 2013) on a recent VIOS level (2.2.2.1) and received the same result. It was time to get an official answer on this behaviour.
Here is the official response I received from IBM:
1. FC adapter stats in viostat/iostat do not include NPIV.

2. viostat & iostat are an aggregate of all the stats from the underlying disks, which of course NPIV doesn't have. There's really no way for the vfchost adapter to monitor I/O, since it doesn't know what the commands it gets are. It's just a passthru, passing the commands it gets from the client directly to the physical FC adapter.

3. You can run fcstat on the VIOS but that has the same issues/limitations mentioned above.

The intent here was that customers would use tools on the client to monitor this sort of thing.
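That last point works because, on the client LPAR, the NPIV virtual FC adapter is presented as an ordinary fcsX device, so the usual AIX tools do report its traffic there. As a quick sketch (fcs0 is just an assumed device name on the client, and the grep pattern matches the fcstat output on the levels I've seen):

# On the NPIV client LPAR (not the VIOS):
$ lsdev -Cc adapter | grep fcs                       # identify the virtual FC adapter(s)
$ fcstat fcs0 | grep -E "Input Bytes|Output Bytes"   # cumulative byte counters for the adapter
$ iostat -a 5 3                                      # adapter throughput report, 5 second interval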
To summarize the comments from Development: "viostat does NOT give statistics for NPIV devices."
This made sense, but I wondered why the tool hadn't been changed to exclude vfchost adapters from the output (to avoid customer confusion), since there's no valid reason to display any information for this type of adapter. I also understood that I/O was expected to be monitored at the client LPAR level. But I must say that an option for monitoring VFC I/O from a VIO server would be advantageous, i.e. a single-source view of all I/O activity for all VFC clients, particularly when there are several hundred partitions on a frame. The response was:
“…the way
the vfchost driver currently works is that it calls iostadd to register a
dkstat structure, resulting in the adapter being listed when viostat is
called. This is misleading, however,
since the vfchost driver does not actually track I/O. The commands coming from the client partition
are simply passed as-is to the physical FC adapter, and we don't know if a
particular command is an I/O command or not.
The iostadd call is left over from porting the code from the vscsi
driver, and Development agrees it should probably have been removed before
shipping the code. There has also been mention of a DCR #MR0413117456 (Title: FC adapter stats in viostat/iostat does not include NPIV) which you can follow up with Marketing to register your interest/track progress if that is something you're interested in pursuing.”
To monitor throughput on the real FC adapters in a VIOS when using NPIV (or a combination of NPIV and other I/O), there are three methods I am aware of:

1) $ nmon -> then press ^
2) nmon recordings run through the NMON analyzer
3) fcstat

The fcstat command only provides a point-in-time total since boot, so using it requires doing some math (see the sketch below).
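As a rough illustration of that math, the script below samples fcstat twice and turns the difference in the byte counters into MB/s. It's only a sketch: it assumes fcs0 is the physical adapter of interest, a 60 second interval, and that the 'Input Bytes'/'Output Bytes' lines appear as they do on my VIOS level (the last occurrence of each belongs to the FC SCSI Traffic Statistics section). Run it from the root shell (oem_setup_env) or adapt it for your environment.

#!/usr/bin/ksh
# Rough read/write throughput for one physical FC adapter on the VIOS,
# derived from two fcstat samples taken INTERVAL seconds apart.
# ADAPTER and INTERVAL are assumptions - adjust as required.
ADAPTER=fcs0
INTERVAL=60

# Take the last "Input Bytes"/"Output Bytes" value, i.e. the
# FC SCSI Traffic Statistics section rather than IP over FC.
in1=$(fcstat $ADAPTER | awk '/Input Bytes/  {v=$3} END {print v}')
out1=$(fcstat $ADAPTER | awk '/Output Bytes/ {v=$3} END {print v}')
sleep $INTERVAL
in2=$(fcstat $ADAPTER | awk '/Input Bytes/  {v=$3} END {print v}')
out2=$(fcstat $ADAPTER | awk '/Output Bytes/ {v=$3} END {print v}')

# awk does the arithmetic to avoid shell integer limits on the large counters
echo "$in1 $in2 $out1 $out2 $INTERVAL" | awk '{
    printf("Read : %.1f MB/s\n", ($2 - $1) / $5 / 1048576)
    printf("Write: %.1f MB/s\n", ($4 - $3) / $5 / 1048576) }'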
Hmmm, I tried 'fcstat -client' and it dumps core (segmentation fault). Apparently this is a known issue and will not be fixed until the next release?! My VIOS is running 2.2.2.2 with all the latest ifixes. Oh well, I'll wait for the next release and try again.
I kept looking for this option and even opened a PMR with IBM, but without any success. However, we recently updated the VIO server to version 2.2.2.2 and at least now I can see the data by using fcstat -client:

dev     hostname     inreqs      outreqs    ctrlreqs  inbytes         outbytes       DMA_errs  Elem_errs  Comm_errs
fcs0    g44ulpvio01  2183828752  384513025  25624392  60940090523140  3588210709504  0         11         0
        vioclh1      186311      7507823    197224    2694179376      36862984192    0         11         0
        vioclp1      164940      7284093    392638    7673730752      48371888128    0         11         0
        vioclp10     210594      14201982   441885    8396580552      81027809280    0         11         0
        vioclp12     111960528   70227399   3341929   5434777307656   892747074560   0         11         0
        vioclp13     54475       16135687   391802    4424845352      98602090496    0         11         0
        vioclp14     22558       7267168    595031    322337916       160177275904   0         11         0
        vioclp15     55048       7252357    395985    4538679008      47578615808    0         11         0
        vioclp16     1044655     6622777    390942    53065186320     43732930560    0         11         0
        vioclp17     37390       6395441    196246    3329114200      37409862656    0         11         0
        vioclp19     99236       15262406   392770    4561026384      107670882304   0         11         0
        vioclp2      143681      8817110    1190092   6399742452      82582437376    0         11         0
        vioclp20     29896       15432883   392497    2639645488      84494008320    0         11         0
        vioclp21     2265514744  39899348   537372    56702015403360  332776955904   0         11         0
        vioclp26     44364       16475904   594872    5305808032      87077830656    0         11         0
        vioclp4      40310691    32066164   9253079   2098612963024   634713994752   0         11         0
        vioclp5      6809        6586072    187389    89813348        29625901056    0         11         0
        vioclp7      56815       4231806    793321    2948891532      27508154880    0         11         0
        vioclp8      29766       16951587   197353    2620692352      106339822592   0         11         0
        vioclp9      42099       1876487    138244    919828604       13433035264    0         11         0
        viocls1      354613      6669017    517476    21941404624     42453999616    0         11         0
        viocls10     99604       15931965   2182505   12014862056     87720435712    0         11         0
        viocls12     15803       3094414    196435    248736968       15504397824    0         11         0
        viocls16     44796       1623694    196784    2254096944      11001445376    0         11         0
        viocls19     116891      37908553   419141    13558015120     223911700992   0         11         0
        viocls4      111826      650701     21847     1618506600      13125791232    0         11         0
        viocls6      22523767    10395860   2567815   3747829768456   77108234240    0         11         0
        viocls7      20797       903451     395875    1838671024      5462908928     0         11         0
        viocls8      0           0          16        0               0              0         11         0
fcs1    g44ulpvio01  540         0          89        185400          0              0         0          0
        vioclp4      0           0          26        0               0              0         0          0
That's informative, Chris. So the only way for us is to summarize the usage on all of the client LPARs and identify how much I/O is being transferred through a particular FC adapter at the VIOS. Another way that I can think of is to get the statistics from your SAN admin at the switch port level. It will be really quick if you are managing the SAN too :)
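On that note, one rough way to make the per-client counters in the fcstat -client listing above easier to compare is a quick awk pass that converts the byte columns to GB. This is only a sketch based on the column layout shown above, so the field positions may differ on other VIOS levels and should be adjusted as required:

$ fcstat -client | awk '
    # rows that start with the fcsX device carry 10 fields
    NF == 10 && $3 ~ /^[0-9]+$/ {
        printf "%-8s %-14s read %10.1f GB  written %10.1f GB\n", $1, $2, $6/2^30, $7/2^30 }
    # the remaining client rows omit the dev column (9 fields)
    NF == 9 && $2 ~ /^[0-9]+$/ {
        printf "%-8s %-14s read %10.1f GB  written %10.1f GB\n", "", $1, $5/2^30, $6/2^30 }'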