[SOLVED] Setting up LACP

Mrt12

May 19, 2019
Hello,

I have a NAS where I configured LACP to increase its network bandwidth. My switch supports LACP and it does work fine.
However, the NAS will also be accessed from a couple of Proxmox VMs. To increase the bandwidth for Proxmox as well, I would like to use LACP here, too. However, I am unsure whether this is a wise idea. I was researching how to configure LACP and found this thread

https://forum.proxmox.com/threads/management-network-on-lacp.53503/

where someone says that Corosync may not work well with LACP. So I wonder whether this is still the case and I should avoid using LACP with Proxmox, or whether it is safe to do so. Unfortunately I cannot manage the switch myself, so I am restricted to LACP.
I should also note that I have set up a cluster with 2 nodes.
 
If you run a cluster, corosync ideally has one physical network link for itself. In a 2 node cluster this can be just one cable directly between the nodes.
Corosync can have up to 8 links configured and will switch between them by itself if the currently used link becomes unusable. It does this much more quickly than any bond would. That is why running corosync on a network bond is not advised, and why corosync should not share a network with other services that might congest it.
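
For reference, those multiple links are configured per node in /etc/pve/corosync.conf. A minimal sketch of the nodelist section for a 2-node cluster, with hypothetical addresses (10.10.10.x on a direct cable between the nodes, 192.168.1.x on the shared network):

```
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1    # link 0: dedicated direct cable, preferred
    ring1_addr: 192.168.1.10  # link 1: shared network, fallback
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 192.168.1.11
  }
}
```

Corosync fails over between the configured links on its own; no bond is involved.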

If you use HA, then you really want a stable corosync network with low latency that will not be congested by other services. You will also want 3 votes in the cluster to be able to still operate if a node fails. Take a look at the QDevice mechanism to add a 3rd vote without setting up a full 3rd PVE node.
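
The QDevice setup boils down to a few commands. A sketch, assuming a spare Debian machine at 192.168.1.20 (hypothetical address) provides the third vote:

```shell
# On the external machine that will provide the 3rd vote (not a PVE node):
apt install corosync-qnetd

# On the PVE cluster nodes (the qdevice daemon must be present on each node):
apt install corosync-qdevice

# On one PVE node, register the external vote provider:
pvecm qdevice setup 192.168.1.20

# Verify that the cluster now has 3 expected votes:
pvecm status
```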
 
Hi Aaron,
thanks for the info. I do not use HA and don't plan to. I mainly use the cluster because it allows me to move VMs around, and administer both nodes in one place.
Also, at the moment I don't have a separate connection for corosync. Both of my servers have only 2 Ethernet NICs.
 
Then, while not ideal, the worst that can happen if the cluster communication breaks down is that you won't be able to make certain changes or perform actions like starting a VM. Nothing catastrophic like a node hard-resetting itself because it lost connection to the cluster; that only happens if HA is enabled.
 
Hi Aaron
thanks for this clarification. That is indeed not a dramatic restriction.

When configuring LACP, I assume I need to add a Linux bond and add my two Ethernet ports to it.
Then, I guess I need to remove the two ports from the vmbr0 bridge and instead attach vmbr0 to the bond, right?
 
Correct, the final hierarchy should be something like this:
2x nic -> bond -> vmbr
 
You can always make a copy of the /etc/network/interfaces file to have a working one on the side :)
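
A minimal /etc/network/interfaces sketch of that hierarchy (the NIC names eno1/eno2 and the addresses are hypothetical, adjust to your hardware and network):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2          # the two physical NICs, check names with `ip link`
    bond-miimon 100
    bond-mode 802.3ad              # LACP; requires the switch ports to be configured for it
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24        # the host IP stays on the bridge, not on the bond
    gateway 192.168.1.1
    bridge-ports bond0             # the bond replaces the NICs as the bridge port
    bridge-stp off
    bridge-fd 0
```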
 
sure, I did that!
for now, I have LACP not yet enabled, however, I still wanted to try out how I shall configure the bond. I did it like so:

[attached screenshot: network configuration in the PVE GUI]

As you can see, I have configured the server's IP address and gateway on vmbr0, not on the bond. Is that correct? I tried to remove the IP address from the bridge and configure it on the bond instead, but that did not work.
Also, I set the bond mode to "balance-rr" for now; as soon as I have installed both network cables and LACP is activated on the switch, I assume I can switch this over to 802.3ad.
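
To check which mode the bond is actually running, the kernel's bonding status file can be read (assuming the bond is named bond0):

```shell
# Reports the active mode, e.g. "IEEE 802.3ad Dynamic link aggregation" once
# LACP has been negotiated, plus the link state of each slave NIC:
cat /proc/net/bonding/bond0
```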
 
Hi,
so I have tried the setup as shown above yesterday.
The PVE server was connected to a "normal" Ethernet port on the switch. Then LACP was enabled on another port of the switch, I enabled LACP in the Proxmox control panel, and I connected the Ethernet cable between the 2nd port of my server and the switch. This allowed me to switch to LACP without interruption!

And indeed, the setup shown above works very well. Bandwidth is increased as desired. Thanks!

[attached screenshot]
 
Good to hear :) I went ahead and marked the thread as solved.