

Comments (23)

JJVasquez commented Feb 19 2017 Conversation Permalink

Hi Chris, in general, can it be considered good practice to set the minimum and maximum values of each parameter to the maximum value allowed?
for example:
min_buf_control 128 max_buf_control 128
min_buf_huge 128 max_buf_huge 128
min_buf_large 256 max_buf_large 256
min_buf_medium 2048 max_buf_medium 2048
min_buf_small 4096 max_buf_small 4096
min_buf_tiny 4096 max_buf_tiny 4096

I'd appreciate your answer.
Thank you very much.
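For context, a sketch of how attribute changes like these are usually applied on an AIX client LPAR (ent0 is a hypothetical adapter name; -P stages the change in the ODM for the next reboot, since a virtual Ethernet adapter in use cannot normally be modified live):

```shell
# Hypothetical sketch: raise the min/max small-buffer counts on a client LPAR.
# ent0 is assumed; -P defers the change until the next reboot.
chdev -l ent0 -a min_buf_small=4096 -a max_buf_small=4096 -P

# Verify the staged values:
lsattr -El ent0 -a min_buf_small -a max_buf_small
```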

cggibbo commented Feb 22 2017 Conversation Permalink

IBM support will advise you to tune these values only when there are resource shortages. However, I've seen many customers set these values to their maximums on all of their VIOS and VIO clients. They have not had any problems. You could change these values and carefully monitor your system for any negative impact and undo the changes if necessary.
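One simple way to monitor for that kind of impact on a client LPAR is to watch the resource counters in the adapter statistics (ent0 is an assumed adapter name):

```shell
# Hypothetical check on an AIX client LPAR: non-zero "No Resource Errors"
# or hypervisor send/receive failures suggest a buffer shortage.
entstat -d ent0 | grep -i -E "no resource|hypervisor|max allocated"
```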

KarlM5 commented Apr 15 2016 Comment Permalink

To answer my own question, the per-buffer sizes are:
Tiny: 512 bytes, Small: 2 KB, Medium: 16 KB, Large: 32 KB, Huge: 64 KB
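Putting those sizes together with the maximum counts quoted earlier in the thread gives a rough upper bound on the extra memory. This is a hedged worked example only (the control pool is omitted because its buffer size isn't given here):

```shell
# Rough per-adapter memory footprint if every buffer pool is raised to
# its maximum. Counts are the maximums quoted in this thread; sizes are
# from KarlM5's list above. The control pool is omitted (size unknown).
awk 'BEGIN {
  size["tiny"]   = 512;   count["tiny"]   = 4096;
  size["small"]  = 2048;  count["small"]  = 4096;
  size["medium"] = 16384; count["medium"] = 2048;
  size["large"]  = 32768; count["large"]  = 256;
  size["huge"]   = 65536; count["huge"]   = 128;
  total = 0;
  for (p in size) {
    mib = size[p] * count[p] / 1048576;
    printf "%-7s %4d x %5d bytes = %4.0f MiB\n", p, count[p], size[p], mib;
    total += mib;
  }
  printf "total per adapter approx %.0f MiB\n", total;
}'
```

With these numbers the total comes to roughly 58 MiB per virtual adapter, which helps answer the memory-footprint questions below.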

KarlM5 commented Apr 14 2016 Comment Permalink

Hi Chris, do you know what the actual size of the buffers are so that I can have a view on how much additional memory is being consumed by increasing the minimum?

uncle_ziba commented Dec 25 2015 Conversation Permalink

We have maxed both min and max for each of the buffers and that helped with getting rid of the "no resource" errors.
This needed to be done on both the VIO SEA virtual adapters as well as on the client lpars.

I'm curious about the additional memory footprint required for these buffers.
Is there any harm by setting these attributes to the max values on every LPAR?

cggibbo commented Jan 19 2016 Conversation Permalink

IBM support will advise you to tune these values only when there are resource shortages. However, I've seen many customers set these values to their maximums on all of their VIOS and VIO clients. They have not had any problems. You could change these values and carefully monitor your system for any negative impact and undo the changes if necessary.

TimBatten commented Nov 10 2015 Comment Permalink

Thanks Chris! This fixed my TSM VIOC's backup issue to EMC Data Domain over NFS. It started crawling at 3MB/sec or less after a Data Domain upgrade. 'netstat -v' on the VIOS showed dropped packets, no resource errors, and Hypervisor send/receive failures; also, I had reached max buffers for Small. After finding this post, I changed the value of max_buf_small from 2048 to 4096 for my VIO servers. Now I'm running ~150MB/sec sustained. This will go in the tool box.
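A sketch of the equivalent change from the VIOS restricted (padmin) shell (ent0 is an assumed adapter name; -perm defers the change until the next reboot):

```shell
# Hypothetical sketch, run as padmin on the VIOS:
chdev -dev ent0 -attr max_buf_small=4096 -perm

# Confirm the staged value:
lsdev -dev ent0 -attr max_buf_small
```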

Jame5.H commented Feb 26 2015 Comment Permalink

Thanks for the quick response, Chris. Yes, I fully understand; whilst I was waiting for your reply I did some more research (and even tested it) and confirmed that I can't go above 4096 max buffers. I have looked at the VIO servers and they don't exhibit the issue, and most of the tuning recommendations I've come across have already been applied on the VIOS and VIOC. We will keep looking into our options, as we have more CPU/memory coming soon, and in the meantime IBM is working on a PMR. Cheers.

cggibbo commented Feb 26 2015 Comment Permalink

Hi James, 4096 is the max you can set for min/max on max_buf_small:

# lsattr -Rl ent0 -a max_buf_small
512...4096 (+1)
# lsattr -Rl ent0 -a min_buf_small
512...4096 (+1)

There's some good information on this in the AIX Performance FAQ, under section 9.11.2, Virtual Ethernet Adapter Buffers:

"A buffer shortage can have multiple reasons. One reason would be that the LPAR does not get sufficient CPU resources because the system is heavily utilized or the LPAR significantly over commits its entitlement. Another possibility would be that the number of virtual Ethernet buffers currently allocated might be too small for the amount of network traffic through the virtual Ethernet. For systems with a heavy network load it is recommended setting the minimum buffers to the same value as the maximum buffers. This will prevent any dynamic buffer allocation and the LPAR will always have all buffers available. Note: It is recommended to first check the CPU usage of the LPAR before making any virtual Ethernet buffer tuning."

http://www-03.ibm.com/systems/power/software/aix/whitepapers/perf_faq.html

The above may apply to both the client partition and the VIOS virtual Ethernet adapters, so you should also check the entX buffers for the Shared Ethernet Adapter trunk adapters on the VIOS.

Jame5.H commented Feb 25 2015 Comment Permalink

Hi Chris, we too have a PMR open with IBM, for an LPAR running TSM 7.1.1.100 on AIX 7.1.3.4. I stumbled across this issue whilst waiting for IBM to analyse the 'snaps' sent to them, and I have a quick question. My TSM LPAR currently has ent0 max_buf_small=4096, and the "Max Allocated" history has reached that 4096; all the other buffer sizes appear OK. What should be the next natural step to test? Do I just double it to 8192 and monitor? Should I go up in 1K/2K increments? Is there a general rule of thumb for how much to increase the max buffer sizes? Cheers, James.

cggibbo commented Feb 18 2015 Comment Permalink

Hi Alberto, yes, you may very well need to perform similar tuning for the virtual Ethernet adapters on the VIOS. Use entstat to check for similar buffer shortages and consider increasing the min values where appropriate.
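From the VIOS side, a minimal sketch of that check (run as padmin; ent5 is a hypothetical trunk adapter name, so substitute the real one):

```shell
# Hypothetical sketch, run as padmin on the VIOS:
lsdev -type sea    # identify the SEA device(s) and their trunk adapters
entstat -all ent5 | grep -i -E "no resource|hypervisor|buffers"
```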

AlbertoPre commented Feb 18 2015 Comment Permalink

Hi Chris, I have a similar problem. On the client LPAR I see the same errors as you describe, but on the VIO server there are a huge number of Packets Dropped and Hypervisor Send Failures, though no "No Resource Errors" or Hypervisor Receive Failures. Do you think I should increase the buffers on both the VIO server and the client? Many thanks, Alberto