Bond setup

aasami

Hi! I'm having some trouble setting up a bond on a BCM57504 NIC connected to an S5224F-ON switch.
Network configuration:
Code:
auto lo
iface lo inet loopback

auto enp129s0f0np0
iface enp129s0f0np0 inet manual

auto enp129s0f1np1
iface enp129s0f1np1 inet manual

auto enp129s0f2np2
iface enp129s0f2np2 inet manual

auto enp129s0f3np3
iface enp129s0f3np3 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves enp129s0f0np0 enp129s0f1np1 enp129s0f2np2 enp129s0f3np3
        bond-miimon 100
        bond-mode 802.3ad

auto vmbr410
iface vmbr410 inet static
        address 10.1.0.211/16
        bridge-ports bond0.410
        bridge-stp off
        bridge-fd 0
#10.1.0.0/16

The bond seems to be established correctly:
Code:
júl 20 13:19:00 sm1 kernel: bnxt_en 0000:81:00.0 enp129s0f0np0: NIC Link is Up, 25000 Mbps full duplex, Flow control: none
júl 20 13:19:00 sm1 kernel: bnxt_en 0000:81:00.0 enp129s0f0np0: FEC autoneg off encoding: None
júl 20 13:19:00 sm1 kernel: bond0: (slave enp129s0f0np0): Enslaving as a backup interface with an up link
júl 20 13:19:00 sm1 kernel: bnxt_en 0000:81:00.1 enp129s0f1np1: NIC Link is Up, 25000 Mbps full duplex, Flow control: none
júl 20 13:19:00 sm1 kernel: bnxt_en 0000:81:00.1 enp129s0f1np1: FEC autoneg off encoding: None
júl 20 13:19:00 sm1 kernel: bond0: (slave enp129s0f1np1): Enslaving as a backup interface with an up link
júl 20 13:19:00 sm1 kernel: bnxt_en 0000:81:00.2 enp129s0f2np2: NIC Link is Up, 25000 Mbps full duplex, Flow control: none
júl 20 13:19:00 sm1 kernel: bnxt_en 0000:81:00.2 enp129s0f2np2: FEC autoneg off encoding: None
júl 20 13:19:00 sm1 kernel: bond0: (slave enp129s0f2np2): Enslaving as a backup interface with an up link
júl 20 13:19:00 sm1 kernel: bnxt_en 0000:81:00.3 enp129s0f3np3: NIC Link is Up, 25000 Mbps full duplex, Flow control: none
júl 20 13:19:00 sm1 kernel: bnxt_en 0000:81:00.3 enp129s0f3np3: FEC autoneg off encoding: None
júl 20 13:19:00 sm1 kernel: bond0: (slave enp129s0f3np3): Enslaving as a backup interface with an up link
júl 20 13:19:00 sm1 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready
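
A quick way to double-check that all four ports really joined the same LACP aggregator, and which transmit hash policy is active, is the bond's proc file (generic command, not output from my logs):
Code:
cat /proc/net/bonding/bond0
# look for "Bonding Mode: IEEE 802.3ad Dynamic link aggregation",
# the "Transmit Hash Policy" line, and a matching "Aggregator ID"
# on all four slaves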

But only one port is used:
Code:
[sm1@08:36 ~]✝ iperf3 -s -p 9999
-----------------------------------------------------------
Server listening on 9999 (test #1)
-----------------------------------------------------------
Accepted connection from 10.1.0.212, port 53622
[  5] local 10.1.0.211 port 9999 connected to 10.1.0.212 port 53632
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  2.73 GBytes  23.4 Gbits/sec                 
[  5]   1.00-2.00   sec  2.73 GBytes  23.5 Gbits/sec                 
[  5]   2.00-3.00   sec  2.73 GBytes  23.5 Gbits/sec                 
[  5]   3.00-4.00   sec  2.73 GBytes  23.5 Gbits/sec                 
[  5]   4.00-5.00   sec  2.73 GBytes  23.5 Gbits/sec                 
[  5]   5.00-6.00   sec  2.73 GBytes  23.5 Gbits/sec                 
[  5]   6.00-7.00   sec  2.73 GBytes  23.5 Gbits/sec                 
[  5]   7.00-8.00   sec  2.73 GBytes  23.5 Gbits/sec                 
[  5]   8.00-9.00   sec  2.73 GBytes  23.5 Gbits/sec                 
[  5]   9.00-10.00  sec  2.73 GBytes  23.5 Gbits/sec                 
[  5]  10.00-10.00  sec   636 KBytes  22.7 Gbits/sec                 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  27.3 GBytes  23.5 Gbits/sec                  receiver
-----------------------------------------------------------
Server listening on 9999 (test #2)
-----------------------------------------------------------
^Ciperf3: interrupt - the server has terminated
[sm1@08:39 ~]✝
It might be that the switch is set up wrong, but that's my colleague's task.
Any help on setting up the switch or PVE to solve this is appreciated. Thank you very much.
 
I think you would have to test multiple streams to different hosts because of the hashing algorithm, and even then, with the 2nd or 3rd stream, there's a 50/50 chance it will just be sent down the same link.
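
For context, with the bonding driver's default layer2 transmit hash policy the outgoing port is derived from the MAC addresses only, so every flow between the same two hosts lands on the same physical link no matter how many streams you open. Roughly (simplified from the kernel bonding documentation):
Code:
# bond-xmit-hash-policy layer2 (the default), simplified:
#   hash  = src_MAC XOR dst_MAC
#   slave = hash mod number_of_slaves
# -> one host pair == one port, regardless of stream count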
 
Using 8 parallel streams gives the same speed unfortunately:
Code:
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   239 MBytes  2.01 Gbits/sec    0    721 KBytes      
[  7]   0.00-1.00   sec   248 MBytes  2.08 Gbits/sec    0    721 KBytes      
[  9]   0.00-1.00   sec   224 MBytes  1.88 Gbits/sec    0    638 KBytes      
[ 11]   0.00-1.00   sec   466 MBytes  3.91 Gbits/sec    0   1.19 MBytes      
[ 13]   0.00-1.00   sec   466 MBytes  3.91 Gbits/sec    0   1.19 MBytes      
[ 15]   0.00-1.00   sec   233 MBytes  1.95 Gbits/sec    0    714 KBytes      
[ 17]   0.00-1.00   sec   473 MBytes  3.97 Gbits/sec    0   1.23 MBytes      
[ 19]   0.00-1.00   sec   470 MBytes  3.94 Gbits/sec    0   1.20 MBytes      
[SUM]   0.00-1.00   sec  2.75 GBytes  23.6 Gbits/sec    0

I've also tried it from several clients at once, with the same result:
Code:
[sm1@09:38 ~]✝ iperf3 -s -1 -D -p4321
[sm1@09:39 ~]✝ iperf3 -s -1 -D -p4322
[sm1@09:39 ~]✝ iperf3 -s -1 -D -p4323

[sm2@09:37 ~]✝ iperf3 -c 10.1.0.211 -p 4321 -i 0 -P 1 -t 30
Connecting to host 10.1.0.211, port 4321
[  5] local 10.1.0.212 port 35044 connected to 10.1.0.211 port 4321
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-30.00  sec  28.8 GBytes  8.23 Gbits/sec    0   3.13 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.00  sec  28.8 GBytes  8.23 Gbits/sec    0             sender
[  5]   0.00-30.00  sec  28.8 GBytes  8.23 Gbits/sec                  receiver

iperf Done.
[sm2@09:40 ~]✝

[sm3@09:22 ~]✝ iperf3 -c 10.1.0.211 -p 4322 -i 0 -P 1 -t 30
Connecting to host 10.1.0.211, port 4322
[  5] local 10.1.0.213 port 40516 connected to 10.1.0.211 port 4322
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-30.00  sec  27.9 GBytes  7.97 Gbits/sec    0   3.12 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.00  sec  27.9 GBytes  7.97 Gbits/sec    0             sender
[  5]   0.00-30.00  sec  27.9 GBytes  7.97 Gbits/sec                  receiver

iperf Done.
[sm3@09:40 ~]✝

[sm4@09:22 ~]✝ iperf3 -c 10.1.0.211 -p 4323 -i 0 -P 1 -t 30
Connecting to host 10.1.0.211, port 4323
[  5] local 10.1.0.214 port 46768 connected to 10.1.0.211 port 4323
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-30.00  sec  28.8 GBytes  8.24 Gbits/sec    0   3.39 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.00  sec  28.8 GBytes  8.24 Gbits/sec    0             sender
[  5]   0.00-30.00  sec  28.8 GBytes  8.24 Gbits/sec                  receiver

iperf Done.
[sm4@09:40 ~]✝
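
One thing that could also be checked during such a run (a suggestion, not output from the tests above) is the per-slave traffic counters, to see on which physical port the flows actually arrive:
Code:
# repeat while iperf3 is running and watch which RX/TX counters grow
ip -s link show enp129s0f0np0
ip -s link show enp129s0f1np1
ip -s link show enp129s0f2np2
ip -s link show enp129s0f3np3
# if only one counter increases, the hash policy (host and/or
# switch side) is pinning all flows onto a single port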
 
Using 8 parallel streams gives the same speed unfortunately: [...]
No, test streams from multiple hosts - not 1 to 1.
 
I don't think it's a 50/50 chance to use the same link. When one link is fully saturated, it should use another one; otherwise it wouldn't make any sense to use a bond (802.3ad) if there is no gain in performance.
 
You should also match the switch's hashing algorithm with your host's bond config.
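
The policy currently in effect on the node can be read from the bond's proc file (assuming the bond is named bond0 as in the first post):
Code:
grep "Transmit Hash Policy" /proc/net/bonding/bond0
# without an explicit bond-xmit-hash-policy this typically shows
# "layer2 (0)", i.e. MAC-address-based hashing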
 
Use the layer3+4 hash policy (on the Proxmox node for outgoing traffic, and on your physical switch for incoming traffic).

This is the only hash policy able to balance multiple connections between two IPs, since it hashes on src IP / dst IP / src port / dst port.
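
As a sketch of the node side, based on the interfaces file from the first post (the bond-xmit-hash-policy line is the only addition):
Code:
auto bond0
iface bond0 inet manual
        bond-slaves enp129s0f0np0 enp129s0f1np1 enp129s0f2np2 enp129s0f3np3
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

Apply it with ifreload -a (ifupdown2) or a reboot, then re-check the "Transmit Hash Policy" line in /proc/net/bonding/bond0. Keep in mind 802.3ad still balances per connection, so a single stream stays at 25 Gbps; the gain only shows up with several concurrent flows. The equivalent L4-based load-balancing setting also has to be enabled on the S5224F-ON port-channel for traffic towards the node; the exact command depends on the switch OS.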
 
