Setting Up A Network To Improve Live Partition Mobility Performance
I stumbled across this IBM support tech note today. It relates to controlling which network is used during a Live Partition Mobility operation: http

By being given the choice of which network to use, we can select network adapters that are better suited to this type of activity, i.e. adapters that are high-speed, underutilized and/or dedicated to non-application data. Using a "high speed" network may improve the performance of an LPM operation and reduce the overall time required to perform the memory copy between systems. The diagram in the tech note shows a dedicated Gigabit network between the source and destination VIOS.

I found the following point quite interesting:

"NOTE: Currently, transfer of data for 1 partition does not scale much beyond 1 Gbit speed; 10Gbit does not provide benefit for parallel migrations."

For those of us who were planning on using our 10Gb network adapters for LPM traffic (like I was), we may not need to bother at the moment. I wonder if this will change in the future?
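As an aside, if you want to steer the mobility traffic onto a particular network from the HMC command line, one approach is to name the MSP IP addresses that sit on that network when you start the migration. Here's a rough sketch, assuming your HMC level supports the source_msp_ipaddr and dest_msp_ipaddr attributes of migrlpar, and that 10.1.1.11 and 10.1.1.12 are addresses the source and destination VIOS hold on the dedicated LPM network (the CEC, LPAR and address values below are placeholders):

# Validate first, pinning the mover service partitions to the dedicated network
migrlpar -o v -m SourceCEC -t DestCEC -p aixlpar1 -i "source_msp_ipaddr=10.1.1.11,dest_msp_ipaddr=10.1.1.12"

# Then run the migration over the same pair of addresses
migrlpar -o m -m SourceCEC -t DestCEC -p aixlpar1 -i "source_msp_ipaddr=10.1.1.11,dest_msp_ipaddr=10.1.1.12"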
FYI - other areas of interest, with respect to LPM tunables. Live Partition Mobility: new with VIOS 2.2.2.0 is a pseudo device to support LPM tunables. The pseudo device vioslpm0 is created by default when VIOS version 2.2.2.0 or higher is installed. Device attributes for vioslpm0 can be used to control live partition mobility operations. Use the normal lsdev/chdev padmin commands to query/change attributes, for example:

$ lsdev -dev vioslpm0 -attr
$ chdev -dev vioslpm0 -attr cfg_msp_ops=5

The following attributes of the pseudo device can be modified by using the migrlpar command: num_active_migrations_configured and concurr_migration_perf_level. Run the following HMC command to modify the attribute values of the pseudo device; for example, to set the number of active migrations to 8, run:

migrlpar -o set -r lpar -m <CecName> -p <lparName> -i "num_active_migrations_configured=8"

The default value for this attribute is 4. To run the maximum number of supported partition mobility operations on the Virtual I/O Server (VIOS), this value must be set to the supported maximum number. To set the amount of resources allocated for each mobility operation to a value of 2, run the following command:

migrlpar -o set -r lpar -m <CecName> -p <lparName> -i "concurr_migration_perf_level=2"
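To see how these fit together end to end, here is a rough sequence based on the commands above, assuming a VIOS partition named vios1 on a managed system named p780-01 (both names are placeholders; adjust the values for your own environment and HMC/VIOS levels):

# On the VIOS, as padmin, check the current LPM tunables
$ lsdev -dev vioslpm0 -attr

# From the HMC, raise the number of active migrations and the per-migration resource level
migrlpar -o set -r lpar -m p780-01 -p vios1 -i "num_active_migrations_configured=8"
migrlpar -o set -r lpar -m p780-01 -p vios1 -i "concurr_migration_perf_level=2"

# Back on the VIOS, confirm the attributes reflect the new values
$ lsdev -dev vioslpm0 -attr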
Thanks for the feedback Chris. I'll set up a small lab to test these vasi0 attributes and their effect on LPM speeds. Regards, Patrick
Hi Patrick, the document is now over 3 years old! There have been many enhancements to LPM in that time! Thanks for providing your feedback; it's always good to hear about others' experience with this technology. I've asked for the document to be updated. I've not seen any advice for tuning vasi0. I have, however, seen recommendations for tuning the virtual Ethernet adapter associated with an SEA and on the VIOC. https://www.ibm.com/developerworks/community/blogs/cgaix/entry/tuning_virtual_ethernet_adapters_for_even_better_backup_performance?lang=en
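From memory, the tuning in that post revolves around the virtual Ethernet receive buffer attributes. As a rough illustration only, assuming ent4 is the virtual adapter in question and using AIX root syntax (on a VIOS you would use the padmin chdev equivalent); the values shown are examples, not recommendations:

# Check the current buffer settings on the virtual Ethernet adapter
lsattr -El ent4 | grep buf

# Raise the small and medium receive buffer pools; -P defers the change
# until the adapter is reconfigured or the LPAR is rebooted
chdev -l ent4 -a min_buf_small=4096 -a max_buf_small=4096 -P
chdev -l ent4 -a min_buf_medium=512 -a max_buf_medium=1024 -P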
Hi Chris, We have moved from 1Gb to 10Gb cards for our IVM/VIOS servers. LPM transfers now peak at around 290MB/sec compared to the previous 110MB/sec, and for a 12GB-of-memory LPM we're seeing aggregate rates of 160MB/sec compared to 60MB/sec. Overall 2.65 times faster. Not bad. But now I'm looking at tuning vasi0 to increase performance further:

root@vios # lsattr -El vasi0
medium_buf_max  256    Maximum Medium Buffers                 True
medium_buf_size 8192   Medium Buffer Size (in bytes)          False
rx_max_pkts     50     Maximum Received Packets Per Interrupt True
small_buf_max   2048   Maximum Small Buffers                  True
small_buf_size  2048   Small Buffer Size (in bytes)           False
tx_buf_min      512    Maximum Transmit Buffers               True
tx_buf_size     16384  Transmit Buffer Size (in bytes)        False

Has anybody had a chance to play around with this? Patrick
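If anyone does experiment, note that only the attributes flagged True in the lsattr output above are changeable. A minimal sketch of how a change might be applied from oem_setup_env, with a purely illustrative value and best tried on a non-production VIOS first:

# Change a settable vasi0 attribute; -P defers it until the device is
# reconfigured or the VIOS is rebooted
chdev -l vasi0 -a rx_max_pkts=64 -P

# Query the attribute
lsattr -El vasi0 -a rx_max_pkts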