BOND LACP 6xNIC

alovillagon

New Member
Jul 26, 2022
Greetings friends, I'm here for some help with my Proxmox VE installation.

I have an HP DL360e G8 server that came with 4 onboard NICs (1 Gbit each), and I installed an additional PCI card with 2x 1 Gbit ports.

I'm trying to get more bandwidth with LACP.

So my configuration is here:

[screenshot: Proxmox bond configuration]

I already installed iperf to check the bandwidth, and here are the results.

[screenshot: iperf results]
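For reference, a single-stream test of that kind could look like this (iperf3 syntax assumed; the server address is a placeholder):

Code:
# one TCP stream against an iperf3 server on the other host
iperf3 -c 192.168.1.20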

On the switch side, the configuration is this:
[screenshot: switch LACP configuration]

I made a manual test, removing each cable one at a time, and the connection persisted, so the logical configuration is fine.

And the question is:

Why is the bandwidth just 1 Gbit?
How can I get more?

Thanks for your help
 
Bonding won't make your network connection faster; it just increases the throughput. You need to differentiate:
Speed/latency: how fast your packets travel between the NICs
Throughput: how much data you can send through the link

These are two different metrics. Think of it like a road. Let's say you want to drive from A to B over a one-lane road with a 100 km/h speed limit. Adding 5 more lanes to that road won't help you reach your destination faster. You are still limited to 100 km/h, so a single car needs the same time to reach the destination. But on the 6-lane road you can have a lot more cars without traffic jams, so more people can reach the destination.
If you want a single car to reach that destination 6 times faster, you need to increase the speed limit from 100 km/h to 600 km/h instead of adding more lanes.
So bonding NICs is like adding more lanes to an existing road. If you want more speed, you need to replace the road completely and build a faster one; that is, replace your Gbit NIC with, for example, a 10 Gbit NIC.

So with that bond your packets can still only travel at 1 Gbit, but now 6 different VMs could use 1 Gbit each. So you get 6 Gbit of total throughput, but just 1 Gbit per guest. But with LACP you can define what a "car" is. For example, you might want to switch from the layer2+3 to the layer3+4 hashing algorithm. With layer3+4, a single guest could use more than 1 Gbit as long as it is using different ports. Each connection is still limited to 1 Gbit, but if the application supports it, it can split the traffic across multiple ports and send it in parallel. In that case a guest could send 6x 1 Gbit in parallel and you get your 6 Gbit of throughput.
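For reference, a bond using the layer3+4 policy in /etc/network/interfaces could look roughly like the sketch below. The interface names and addresses are placeholders for illustration, not taken from your screenshots:

Code:
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2 eno3 eno4 enp3s0f0 enp3s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

Keep in mind that bond-xmit-hash-policy only controls the host's egress traffic; the switch distributes its own egress traffic according to its own load-balancing setting.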
 
So with that bond your packets can still only travel at 1 Gbit, but now 6 different VMs could use 1 Gbit each. So you get 6 Gbit of total throughput, but just 1 Gbit per guest.
Thanks for your answer, I understand you completely, but a new question comes up...

Which option is better in this case?

A: Detach the NICs from the bond and attach them directly to the VMs.

B: Keep the bond.

Thx
 
If you already have hardware that supports LACP, I would use it. Then you get failover, and if you use layer3+4 as the hashing algorithm, some guests might even be able to use more than 1 Gbit of throughput, in case the application supports splitting and sending the traffic across several ports (SMB, for example, can do this).
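To check the negotiated mode and hash policy on the host, and to see the multi-port effect in practice, something like this sketch should work (the server address is again a placeholder, and the remote end must not itself be limited to 1 Gbit):

Code:
# show bond mode, LACP partner state and transmit hash policy
cat /proc/net/bonding/bond0

# parallel iperf3 streams use different source ports, so with
# layer3+4 hashing they can spread across the bond members
iperf3 -c 192.168.1.20 -P 6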
 
