NIC Bonding Fun


New Member
May 4, 2016
Hey people,

I was testing bonding with 4-port Dell Broadcom 5719 NICs. Something weird happens when I bond the 4-port NICs in round-robin mode, and I want to share it.

My setup: two NICs connected directly, port to port, with patch cables.
I tested on 3 different server machines with these kernels:
Debian 9, kernel 4.9.0-7-amd64
Proxmox, kernel 4.15.18-8-pve
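For reference, a round-robin bond on Debian can be set up in /etc/network/interfaces roughly like this (a sketch; the interface names eth0–eth3 and the address are placeholders for whatever your ports are actually called):

```
auto bond0
iface bond0 inet static
    address 10.0.0.1/24
    # the four ports of the quad-port NIC
    bond-slaves eth0 eth1 eth2 eth3
    # round-robin: packets are striped across all slaves in turn
    bond-mode balance-rr
    # link monitoring interval in ms
    bond-miimon 100
```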

When I tested with iperf, it ran at full speed in only one direction.

[ 5] local port 50942 connected with port 5001
[ ID] Interval Transfer Bandwidth
[ 5] 0.0- 1.0 sec 445 MBytes 3.73 Gbits/sec
[ 5] 0.0-10.0 sec 4.38 GBytes 3.76 Gbits/sec
[ 4] local port 5001 connected with port 39798
[ 4] 0.0- 1.0 sec 253 MBytes 2.12 Gbits/sec
[ 4] 0.0-10.0 sec 2.51 GBytes 2.15 Gbits/sec
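As a sanity check on those numbers (my arithmetic, not iperf output): 445 MBytes/s is about 3.56 Gbit/s, so the fast direction is close to the ~4 Gbit/s ceiling of four bonded 1GbE ports, while the slow direction manages only a bit more than half of that:

```shell
# Convert the observed 445 MBytes/s to Gbit/s: multiply by 8 bits
# per byte, divide by 1000 to go from Mbit to Gbit.
mbytes=445
gbits=$(awk -v m="$mbytes" 'BEGIN { printf "%.2f", m * 8 / 1000 }')
echo "$gbits Gbit/s"   # prints 3.56 Gbit/s
```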

Then I tried two machines with Debian 9 (kernel 4.9.0-7-amd64) installed.
Those two worked well in both directions, around 445 MBytes/s.

Then I installed Proxmox on one of those Debian machines to see if it's a Proxmox problem.
After the first restart, now booting the Proxmox kernel, it was again slow in one direction.
So it runs at 445 MBytes/s in both directions on the Debian kernel but slows down with Proxmox's.
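If anyone wants to compare setups, the kernel's view of the bond can be inspected like this (assuming the bond is named bond0; these commands need the bond actually up, so this is just a sketch):

```
# Show the bonding mode and the state of each slave port
cat /proc/net/bonding/bond0

# Confirm which kernel is currently running (Debian vs. Proxmox)
uname -r

# Per-port packet counters, to check whether traffic is really
# being striped round-robin across all four ports
ip -s link show eth0
```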

Bonding 4-port NICs is kind of crazy, but I'm wondering: has anybody experienced something like this and managed to solve it?

