For the last few months I have been struggling with the speed of virtual Ethernet adapters. Simply buying 10Gbit adapters and configuring them in a Shared Ethernet Adapter (SEA) will not lift your network speed to network heaven. Below are a few examples of how the speed/bandwidth of a VLAN changes depending on the configuration.
Maybe some of you know about the bandwidth limitation in PHYP. If you use a 10Gbit physical adapter and configure a Shared Ethernet Adapter, you will not get 10Gbit throughput to the external network with the default settings. You will hit a limitation at around 1.5–2 Gbit/s. Nigel documented this very well a few years ago here.
Well, many people told me ‘this is old, it’s not true anymore, the latest firmware already boosts the speed’. Unfortunately, the limitation is still partially true.
In September I had a chance to visit the Austin labs and talk to the lead VIOS network engineer. I asked explicitly: ‘Can I use 10Gbit over virtual Ethernet adapters?’ She kindly explained to me that to achieve high speed over virtual Ethernet adapters you must:
- tune multiple network parameters
- have enough CPU in the VIOS and the LPAR to handle large blocks of data
- have very specific workload
If you want to know which parameters directly affect the speed, try to find Alexander Paul’s presentation from the last IBM Enterprise conference (Network Performance Optimization for Virtualized IBM Power Systems with IBM AIX).
Here are three examples showing how the speed changes in a real environment depending on the configuration. In my examples I used the interface for Live Partition Mobility (LPM) traffic.
For all scenarios, most of the parameters were set according to the best-practice recommendations (such as large send, large receive, plenty of spare CPU cycles to handle the extra load, etc.). Most of these parameters are described by Nigel here.
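As a rough illustration of what tuning the large send/receive parameters looks like, here is a minimal sketch. The device names (ent4 for the SEA, en0 for the LPAR interface) are hypothetical — adjust them to your own environment, and check the attribute names against your VIOS and AIX levels before applying anything.

```shell
# On the VIO server (as padmin): enable large send / large receive on the SEA.
# ent4 is a hypothetical SEA device name.
chdev -dev ent4 -attr largesend=1 large_receive=yes

# On the AIX LPAR (as root): allow large send over the virtual Ethernet adapter.
# en0 is a hypothetical interface name.
chdev -l en0 -a mtu_bypass=on

# Verify the SEA attributes took effect (on the VIOS).
lsdev -dev ent4 -attr largesend large_receive
```

These offload settings move segmentation work out of the TCP stack, which is exactly the kind of per-packet overhead that throttles virtual Ethernet throughput.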
The first graph shows the speed of the SEA port which handles the ‘LPM’ VLAN. In this setup Jumbo Frames are not enabled, and the MTU is set to the default (1500).

As you can see, the maximum is 266451.5 KB/s, which is about 1.99 Gbit/s (roughly 20% of the physical card’s bandwidth).
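If you want to reproduce the conversion from the KB/s values in the graphs to Gbit/s, the arithmetic below is a sketch of one plausible convention (1 KB = 1024 bytes, 1 Gbit = 2^30 bits); the exact result varies slightly depending on whether you treat KB and Gbit as powers of two or of ten, which is why the first figure comes out near 2 rather than exactly 1.99.

```python
def kbps_to_gbitps(kb_per_s: float) -> float:
    """Convert a KB/s reading (1 KB = 1024 bytes) to Gbit/s (1 Gbit = 2**30 bits)."""
    return kb_per_s * 1024 * 8 / 2**30

# The three maxima from the graphs:
for kbps in (266451.5, 799962.4, 1153172.5):
    print(f"{kbps:>10.1f} KB/s = {kbps_to_gbitps(kbps):.2f} Gbit/s")
```

With this convention the three readings come out at roughly 2.0, 6.1 and 8.8 Gbit/s, matching the percentages quoted against the 10Gbit card.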
The second graph is from a test where Jumbo Frames were enabled (MTU 9000), still using a virtual Ethernet adapter over the SEA.

As you can see, the maximum is 799962.4 KB/s, which is 6.1 Gbit/s (61% of the physical card). By enabling Jumbo Frames and a higher MTU, we tripled the speed.
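For reference, enabling Jumbo Frames end to end might look roughly like this. Again, the device names (ent0 for the physical 10Gbit port, ent4 for the SEA, en0 for the LPAR interface) are assumptions for illustration only.

```shell
# On the VIO server (as padmin): enable jumbo frames on the physical
# adapter and on the SEA. ent0/ent4 are hypothetical device names.
chdev -dev ent0 -attr jumbo_frames=yes -perm
chdev -dev ent4 -attr jumbo_frames=yes

# On the AIX LPAR (as root): raise the interface MTU to 9000.
# en0 is a hypothetical interface name.
chdev -l en0 -a mtu=9000
```

Note that every switch port along the path must also allow jumbo frames, otherwise oversized frames will simply be dropped.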
In the last example no SEA is used; the 10Gbit physical adapter is dedicated to LPM traffic only.

As you can see, the maximum is 1153172.5 KB/s, which is 8.8 Gbit/s (88% of the physical adapter).
Hi Bart, I’ve found your article really helpful although I still have one question left.
If I understand what you’re saying, your first test used a 10Gbps adapter through the VIOS with no tuning whatsoever, right? If that’s so, those bandwidth numbers are far better than what I’m currently getting with pure virtual Ethernet adapters (also untuned), which is around 0.5Gbps right now.
So it’s safe to infer that the mere use of a 10Gbps adapter brings better bandwidth, right? I know it’s not the best, but it’s better than not using the adapter at all.
Thanks for your article.
Hi there,
Glad to hear that you’ve found it useful.
I did a lot of tuning of different VIO and SEA parameters. There was some improvement in bandwidth, but nothing significant. The biggest boost came from enabling Jumbo Frames, but the key message was: no matter what you change, there is a limit, and you can never achieve a higher speed due to the PHYP limitation.
Remember that this post was written about two years ago (maybe more); perhaps IBM has since come up with newer firmware that slightly improves the speed, but back then it was technically impossible.
On POWER8 this speed has improved a lot; in my last test it was almost possible to reach 10 Gbps, so big progress there.