Bond Performance Issues

daNutz

New Member
Mar 24, 2023
Hi,

I've set up a bond for a VM I'm running on a host. The overall VM, host & network connectivity is as below:

  1. VM > Bridge (vmbr1)
  2. vmbr1 > bond0 (LACP (802.3ad) L2 Hash)
  3. bond0 > ens1f0 & ens1f1 (2 x 1 Gb NIC)
  4. ens1f0 & ens1f1 > USW-Pro Ports 7 & 8 Aggregated
  5. USW-Pro > UDM-SE via 10Gb SFP+
  6. UDM-SE > Internet via 2.5 Gb Ethernet
  7. Internet: 1.6 Gigabit

Right now I'm unable to get speeds beyond 1 Gigabit on LAN or the Internet. I've tried different hash policies in Proxmox to no avail. What silly mistake have I made?

Host - Proxmox v8

auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto ens1f0
iface ens1f0 inet manual

auto ens1f1
iface ens1f1 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves ens1f0 ens1f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.0.8.16/24
        gateway 10.0.8.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
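
A quick way to sanity-check the bond on the host (mode, negotiated LACP state, hash policy and per-slave status) is:

# Bond mode, transmit hash policy, LACP partner details and per-slave state
cat /proc/net/bonding/bond0

# Kernel view of the bond and the interfaces enslaved to it
ip -d link show bond0
ip link show master bond0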


VM (Ubuntu/Docker) - vmbr1 > Bond0

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 8e:34:c7:86:d1:2a brd ff:ff:ff:ff:ff:ff
    altname enp0s18
    inet 10.0.9.41/24 brd 10.0.9.255 scope global ens18
       valid_lft forever preferred_lft forever
    inet6 fe80::8c34:c7ff:fe86:d12a/64 scope link
       valid_lft forever preferred_lft forever
3: br-aca697806c2c: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:b2:3d:4b:72 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.1/16 brd 172.19.255.255 scope global br-aca697806c2c
       valid_lft forever preferred_lft forever
    inet6 fe80::42:b2ff:fe3d:4b72/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:e1:2e:94:e4 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:e1ff:fe2e:94e4/64 scope link
       valid_lft forever preferred_lft forever


Iperf3 Tests

Host > UDM-SE iperf3


root@proxmox-02:~# iperf3 -c 10.0.0.1 -p 5201
Connecting to host 10.0.0.1, port 5201
[  5] local 10.0.8.16 port 47480 connected to 10.0.0.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   112 MBytes   941 Mbits/sec    0   1.04 MBytes
[  5]   1.00-2.00   sec   111 MBytes   933 Mbits/sec    0   1.32 MBytes
[  5]   2.00-3.00   sec   112 MBytes   944 Mbits/sec    0   1.32 MBytes
[  5]   3.00-4.00   sec   112 MBytes   944 Mbits/sec    0   1.39 MBytes
[  5]   4.00-5.00   sec   112 MBytes   944 Mbits/sec    0   1.46 MBytes
[  5]   5.00-6.00   sec   111 MBytes   933 Mbits/sec    0   1.59 MBytes
[  5]   6.00-7.00   sec   112 MBytes   944 Mbits/sec    0   1.59 MBytes
[  5]   7.00-8.00   sec   112 MBytes   944 Mbits/sec    0   1.59 MBytes
[  5]   8.00-9.00   sec   112 MBytes   944 Mbits/sec    0   1.59 MBytes
[  5]   9.00-10.00  sec   111 MBytes   933 Mbits/sec    0   1.59 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.09 GBytes   940 Mbits/sec    0   sender
[  5]   0.00-10.05  sec  1.09 GBytes   934 Mbits/sec        receiver

VM > UDM-SE

docker:~$ iperf3 -c 10.0.0.1 -b 2G    (10.0.0.1 is the UDM-SE)
Connecting to host 10.0.0.1, port 5201
[  5] local 10.0.9.41 port 53086 connected to 10.0.0.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   114 MBytes   954 Mbits/sec    0   2.84 MBytes
[  5]   1.00-2.00   sec   112 MBytes   938 Mbits/sec    0   2.84 MBytes
[  5]   2.00-3.00   sec   112 MBytes   940 Mbits/sec    0   2.84 MBytes
[  5]   3.00-4.00   sec   112 MBytes   938 Mbits/sec    0   2.84 MBytes
[  5]   4.00-5.00   sec   112 MBytes   940 Mbits/sec    0   2.84 MBytes
[  5]   5.00-6.00   sec   112 MBytes   938 Mbits/sec    0   2.84 MBytes
[  5]   6.00-7.00   sec   112 MBytes   940 Mbits/sec    0   2.84 MBytes
[  5]   7.00-8.00   sec   112 MBytes   939 Mbits/sec    0   2.84 MBytes
[  5]   8.00-9.00   sec   112 MBytes   939 Mbits/sec    0   2.84 MBytes
[  5]   9.00-10.00  sec   112 MBytes   940 Mbits/sec    0   2.84 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.09 GBytes   941 Mbits/sec    0   sender
[  5]   0.00-10.05  sec  1.09 GBytes   934 Mbits/sec        receiver
 
bond-xmit-hash-policy layer2+3
Try this: "bond_xmit_hash_policy layer3+4" (underscores instead of hyphens)
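
A minimal sketch of how the bond stanza in /etc/network/interfaces would look with that change (assuming the rest of your config stays as posted, only the hash-policy line differs):

auto bond0
iface bond0 inet manual
        bond-slaves ens1f0 ens1f1
        bond-miimon 100
        bond-mode 802.3ad
        bond_xmit_hash_policy layer3+4

Apply it with "ifreload -a" (Proxmox 8 uses ifupdown2) or a reboot.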

And make sure it is running in L3+4:
root@prox1 ~ # cat /proc/net/bonding/bond0 | grep "Transmit Hash"
Transmit Hash Policy: layer3+4 (1)

Make sure your switch is capable of L3+4 hashing. Generally, Juniper or Arista switches support this.

But also keep in mind that a single connection is usually limited by the maximum bandwidth of one link. LACP doesn't change that; it just expands the maximum possible aggregate bandwidth and enables higher availability if it terminates on two devices.
 
Try this: "bond_xmit_hash_policy layer3+4" (underscores instead of hyphens)

And make sure it is running in L3+4:
root@prox1 ~ # cat /proc/net/bonding/bond0 | grep "Transmit Hash"
Transmit Hash Policy: layer3+4 (1)

Make sure your switch is capable of L3+4 hashing. Generally, Juniper or Arista switches support this.

But also keep in mind that a single connection is usually limited by the maximum bandwidth of one link. LACP doesn't change that; it just expands the maximum possible aggregate bandwidth and enables higher availability if it terminates on two devices.
I've done what you've said and there's no change...

Are you saying you cannot aggregate two 1 Gb NICs to achieve an aggregated speed of 2 Gb?
 
Are you saying you cannot aggregate two 1 Gb NICs to achieve an aggregated speed of 2 Gb?
Yes. When I first tried it, I also found out, upon further reading and to my disappointment, that link aggregation does not allow higher 'point to point' bandwidth.

What it does allow, depending on hashing, is > 1 Gb/s aggregate bandwidth between this system and multiple other systems. And it provides failover, of course. And more complexity ;-)
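
If you want to see the aggregate effect, one rough way is to run iperf3 against two different servers at the same time; the two target names below are just placeholders for any two hosts with different IPs:

# Two flows to two different destinations; with a suitable hash policy they
# can land on different bond slaves and sum to more than 1 Gb/s
iperf3 -c <server-1> -t 30 &
iperf3 -c <server-2> -t 30 &
wait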
 
Have you made sure that the host is running LACP (802.3ad) with Layer 3+4 hashing in combination with the switch, and verified that?

Could you try "iperf3 -c 10.0.0.1 -P 8 -R"?

Are you saying you cannot aggregate two 1 Gb NICs to achieve an aggregated speed of 2 Gb?
You have a total bandwidth of 2x 1 GbE, not a single 2 GbE link. However, a single connection is usually not split between two interfaces but runs over one.
If you have several different connections with different destinations and sources, then the host will definitely be able to make full use of the 2x 1 GbE.

If you have the requirement that a single connection must be capable of at least 2 Gb/s, then you have to connect 2x 2.5 GbE interfaces instead; only then can a single connection reliably achieve 2 Gb/s.
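
You can also watch which physical NIC a given transfer actually uses by keeping an eye on the per-slave byte counters while iperf3 is running, for example:

# Byte counters of both bond slaves, refreshed every second
watch -n1 "grep -E 'ens1f0|ens1f1' /proc/net/dev"

With a single stream you will typically see only one of the two counters climbing.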
 
Have you made sure that the host is running LACP (802.3ad) with Layer 3+4 hashing in combination with the switch, and verified that?

Could you try "iperf3 -c 10.0.0.1 -P 8 -R"?


You have a total bandwidth of 2x 1 GbE, not a single 2 GbE link. However, a single connection is usually not split between two interfaces but runs over one.
If you have several different connections with different destinations and sources, then the host will definitely be able to make full use of the 2x 1 GbE.

If you have the requirement that a single connection must be capable of at least 2 Gb/s, then you have to connect 2x 2.5 GbE interfaces instead; only then can a single connection reliably achieve 2 Gb/s.
All I can configure on the USW-Pro is 802.xx, and my requirement is to max out this 1.69 Gb connection...

iperf3 -c 10.0.0.1 -P 8 -R
Connecting to host 10.0.0.1, port 5201
Reverse mode, remote host 10.0.0.1 is sending
[  5] local 10.0.9.41 port 40716 connected to 10.0.0.1 port 5201
[  7] local 10.0.9.41 port 40732 connected to 10.0.0.1 port 5201
[  9] local 10.0.9.41 port 40740 connected to 10.0.0.1 port 5201
[ 11] local 10.0.9.41 port 40752 connected to 10.0.0.1 port 5201
[ 13] local 10.0.9.41 port 40768 connected to 10.0.0.1 port 5201
[ 15] local 10.0.9.41 port 40776 connected to 10.0.0.1 port 5201
[ 17] local 10.0.9.41 port 40792 connected to 10.0.0.1 port 5201
[ 19] local 10.0.9.41 port 40802 connected to 10.0.0.1 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  15.5 MBytes   130 Mbits/sec
[  7]   0.00-1.00   sec  6.56 MBytes  55.0 Mbits/sec
[  9]   0.00-1.00   sec  8.27 MBytes  69.4 Mbits/sec
[ 11]   0.00-1.00   sec  15.3 MBytes   128 Mbits/sec
[ 13]   0.00-1.00   sec  5.19 MBytes  43.5 Mbits/sec
[ 15]   0.00-1.00   sec  22.0 MBytes   184 Mbits/sec
[ 17]   0.00-1.00   sec  21.5 MBytes   180 Mbits/sec
[ 19]   0.00-1.00   sec  17.6 MBytes   148 Mbits/sec
[SUM]   0.00-1.00   sec   112 MBytes   939 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   1.00-2.00   sec  12.3 MBytes   103 Mbits/sec
[  7]   1.00-2.00   sec  12.0 MBytes   101 Mbits/sec
[  9]   1.00-2.00   sec  1.18 MBytes  9.92 Mbits/sec
[ 11]   1.00-2.00   sec  8.35 MBytes  70.0 Mbits/sec
[ 13]   1.00-2.00   sec  4.84 MBytes  40.6 Mbits/sec
[ 15]   1.00-2.00   sec  11.2 MBytes  93.7 Mbits/sec
[ 17]   1.00-2.00   sec  23.6 MBytes   198 Mbits/sec
[ 19]   1.00-2.00   sec  38.6 MBytes   324 Mbits/sec
[SUM]   1.00-2.00   sec   112 MBytes   940 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   2.00-3.00   sec  16.2 MBytes   136 Mbits/sec
[  7]   2.00-3.00   sec  9.45 MBytes  79.3 Mbits/sec
[  9]   2.00-3.00   sec  4.76 MBytes  39.9 Mbits/sec
[ 11]   2.00-3.00   sec  8.68 MBytes  72.8 Mbits/sec
[ 13]   2.00-3.00   sec  11.9 MBytes  99.7 Mbits/sec
[ 15]   2.00-3.00   sec  19.1 MBytes   160 Mbits/sec
[ 17]   2.00-3.00   sec  20.9 MBytes   176 Mbits/sec
[ 19]   2.00-3.00   sec  20.8 MBytes   175 Mbits/sec
[SUM]   2.00-3.00   sec   112 MBytes   939 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   3.00-4.00   sec  20.3 MBytes   170 Mbits/sec
[  7]   3.00-4.00   sec  17.9 MBytes   150 Mbits/sec
[  9]   3.00-4.00   sec  10.8 MBytes  90.3 Mbits/sec
[ 11]   3.00-4.00   sec  17.4 MBytes   146 Mbits/sec
[ 13]   3.00-4.00   sec  9.07 MBytes  76.1 Mbits/sec
[ 15]   3.00-4.00   sec  12.1 MBytes   102 Mbits/sec
[ 17]   3.00-4.00   sec  11.8 MBytes  99.3 Mbits/sec
[ 19]   3.00-4.00   sec  12.6 MBytes   105 Mbits/sec
[SUM]   3.00-4.00   sec   112 MBytes   939 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   4.00-5.00   sec  26.1 MBytes   219 Mbits/sec
[  7]   4.00-5.00   sec  23.6 MBytes   198 Mbits/sec
[  9]   4.00-5.00   sec  11.7 MBytes  97.9 Mbits/sec
[ 11]   4.00-5.00   sec  2.80 MBytes  23.5 Mbits/sec
[ 13]   4.00-5.00   sec  5.00 MBytes  41.9 Mbits/sec
[ 15]   4.00-5.00   sec  4.73 MBytes  39.6 Mbits/sec
[ 17]   4.00-5.00   sec  11.9 MBytes  99.7 Mbits/sec
[ 19]   4.00-5.00   sec  26.2 MBytes   219 Mbits/sec
[SUM]   4.00-5.00   sec   112 MBytes   939 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   5.00-6.00   sec  27.3 MBytes   229 Mbits/sec
[  7]   5.00-6.00   sec  12.5 MBytes   105 Mbits/sec
[  9]   5.00-6.00   sec  23.2 MBytes   194 Mbits/sec
[ 11]   5.00-6.00   sec  7.82 MBytes  65.6 Mbits/sec
[ 13]   5.00-6.00   sec  3.16 MBytes  26.5 Mbits/sec
[ 15]   5.00-6.00   sec  14.0 MBytes   117 Mbits/sec
[ 17]   5.00-6.00   sec  12.8 MBytes   107 Mbits/sec
[ 19]   5.00-6.00   sec  11.2 MBytes  93.7 Mbits/sec
[SUM]   5.00-6.00   sec   112 MBytes   939 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   6.00-7.00   sec  28.1 MBytes   236 Mbits/sec
[  7]   6.00-7.00   sec  7.69 MBytes  64.5 Mbits/sec
[  9]   6.00-7.00   sec  3.32 MBytes  27.8 Mbits/sec
[ 11]   6.00-7.00   sec  11.3 MBytes  94.7 Mbits/sec
[ 13]   6.00-7.00   sec  19.0 MBytes   159 Mbits/sec
[ 15]   6.00-7.00   sec  6.29 MBytes  52.8 Mbits/sec
[ 17]   6.00-7.00   sec  12.8 MBytes   107 Mbits/sec
[ 19]   6.00-7.00   sec  23.5 MBytes   197 Mbits/sec
[SUM]   6.00-7.00   sec   112 MBytes   939 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   7.00-8.00   sec  18.2 MBytes   152 Mbits/sec
[  7]   7.00-8.00   sec  21.7 MBytes   182 Mbits/sec
[  9]   7.00-8.00   sec  9.03 MBytes  75.7 Mbits/sec
[ 11]   7.00-8.00   sec  22.2 MBytes   186 Mbits/sec
[ 13]   7.00-8.00   sec  10.1 MBytes  84.9 Mbits/sec
[ 15]   7.00-8.00   sec  3.62 MBytes  30.4 Mbits/sec
[ 17]   7.00-8.00   sec  21.0 MBytes   176 Mbits/sec
[ 19]   7.00-8.00   sec  6.09 MBytes  51.1 Mbits/sec
[SUM]   7.00-8.00   sec   112 MBytes   938 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   8.00-9.00   sec  17.8 MBytes   150 Mbits/sec
[  7]   8.00-9.00   sec  1.02 MBytes  8.56 Mbits/sec
[  9]   8.00-9.00   sec  19.6 MBytes   165 Mbits/sec
[ 11]   8.00-9.00   sec  15.9 MBytes   133 Mbits/sec
[ 13]   8.00-9.00   sec  6.18 MBytes  51.8 Mbits/sec
[ 15]   8.00-9.00   sec  11.7 MBytes  97.9 Mbits/sec
[ 17]   8.00-9.00   sec  12.4 MBytes   104 Mbits/sec
[ 19]   8.00-9.00   sec  27.3 MBytes   229 Mbits/sec
[SUM]   8.00-9.00   sec   112 MBytes   939 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   9.00-10.00  sec  4.32 MBytes  36.3 Mbits/sec
[  7]   9.00-10.00  sec  19.5 MBytes   164 Mbits/sec
[  9]   9.00-10.00  sec  14.6 MBytes   123 Mbits/sec
[ 11]   9.00-10.00  sec  22.8 MBytes   191 Mbits/sec
[ 13]   9.00-10.00  sec  13.8 MBytes   115 Mbits/sec
[ 15]   9.00-10.00  sec  10.8 MBytes  91.0 Mbits/sec
[ 17]   9.00-10.00  sec  8.53 MBytes  71.6 Mbits/sec
[ 19]   9.00-10.00  sec  17.5 MBytes   147 Mbits/sec
[SUM]   9.00-10.00  sec   112 MBytes   939 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.03  sec   187 MBytes   156 Mbits/sec  4382   sender
[  5]   0.00-10.00  sec   186 MBytes   156 Mbits/sec         receiver
[  7]   0.00-10.03  sec   133 MBytes   111 Mbits/sec  3798   sender
[  7]   0.00-10.00  sec   132 MBytes   111 Mbits/sec         receiver
[  9]   0.00-10.03  sec   107 MBytes  89.4 Mbits/sec  2837   sender
[  9]   0.00-10.00  sec   106 MBytes  89.3 Mbits/sec         receiver
[ 11]   0.00-10.03  sec   133 MBytes   111 Mbits/sec  3500   sender
[ 11]   0.00-10.00  sec   133 MBytes   111 Mbits/sec         receiver
[ 13]   0.00-10.03  sec  88.8 MBytes  74.3 Mbits/sec  2984   sender
[ 13]   0.00-10.00  sec  88.2 MBytes  74.0 Mbits/sec         receiver
[ 15]   0.00-10.03  sec   116 MBytes  97.2 Mbits/sec  3388   sender
[ 15]   0.00-10.00  sec   116 MBytes  96.9 Mbits/sec         receiver
[ 17]   0.00-10.03  sec   158 MBytes   132 Mbits/sec  4268   sender
[ 17]   0.00-10.00  sec   157 MBytes   132 Mbits/sec         receiver
[ 19]   0.00-10.03  sec   202 MBytes   169 Mbits/sec  4121   sender
[ 19]   0.00-10.00  sec   201 MBytes   169 Mbits/sec         receiver
[SUM]   0.00-10.03  sec  1.10 GBytes   941 Mbits/sec  29278  sender
[SUM]   0.00-10.00  sec  1.09 GBytes   939 Mbits/sec         receiver
 
Unfortunately, you are very economical with information, which makes it really difficult to help. I am missing the corresponding shell output and config excerpts that show the current status.

From what I was able to find through Google, your switch cannot hash on Layer 3+4, so iperf cannot get past this limit. This is exactly the restriction we meant. So if you want more, you have to buy a new switch or go for at least 2.5 GbE per link.
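
For completeness: the -R run measures the switch's hashing on the path towards your host, while the forward run measures the bond's transmit hash on the Proxmox/VM side, so it can be worth comparing both:

iperf3 -c 10.0.0.1 -P 8        # VM sends: split across slaves per bond-xmit-hash-policy
iperf3 -c 10.0.0.1 -P 8 -R     # UDM-SE sends: split decided by the switch's LAG hash

If the switch only hashes on MAC/IP, all eight reverse streams end up on the same 1 Gb member, which would match the ~940 Mbit/s [SUM] you are seeing.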
 
Unfortunately, you are very economical with information, which makes it really difficult to help. I am missing the corresponding shell output and config excerpts that show the current status.

From what I was able to find through Google, your switch cannot hash on Layer 3+4, so iperf cannot get past this limit. This is exactly the restriction we meant. So if you want more, you have to buy a new switch or go for at least 2.5 GbE per link.
I'm sorry I didn't provide the information you were expecting; however, you could have asked and I would have supplied it.

Thanks for advising it’s not possible.
 
