[SOLVED] Cluster lacp

Manutec
New Member
Jun 9, 2021
Hi:
Is it possible to have a cluster with LACP? I have ping between the three nodes on a VLAN on top of a bond with LACP, but I'm not able to add the nodes to the cluster. If it's not possible, can I do it with balance-rr? Thanks
 
Of course you can; LACP operates at a lower layer and therefore does not affect clustering as such. However, a dedicated network for corosync is encouraged.
What errors are you facing when joining the cluster?
Balance-rr is honestly the last bonding mode I would ever use.
 
I have a dedicated 1G NIC for management and 2x 10G SFP+ ports for LACP. I created a bond and a bond.xx VLAN for the cluster, then created the cluster with link0 on vlan.xx, and when I added node2 to the cluster with its vlan.xx IP the server hung after reaching quorum... OK. Finally I changed to balance-rr and created the cluster over the management NICs, and after that edited corosync.conf to use vlan.xx. It works, but I don't think it's the best way to do it. Thanks
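For reference, this is roughly what /etc/pve/corosync.conf ends up looking like after pointing ring0_addr at the VLAN addresses. The node names, cluster name and 10.10.10.x addresses are just placeholders for your vlan.xx IPs, and config_version has to be increased on every manual edit:

Code:
# placeholder addresses on the vlan.xx cluster network
nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
  }
  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
  }
  node {
    name: node3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.10.10.3
  }
}

totem {
  cluster_name: mycluster
  # increase config_version whenever you edit this file
  config_version: 4
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  secauth: on
  version: 2
}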
 
Please check out the PVE documentation. While it can be overlooked, it is made pretty clear in 3.3.7. Linux Bond:

Code:
If you intend to run your cluster network on the bonding interfaces, then
you have to use active-passive mode on the bonding interfaces, other modes 
are unsupported.

Based on this, I've made the following setup per node in our cluster:
  • 2x 1G (onboard) in active-backup for management and the primary Corosync link / ring
  • 2x 10G in LACP for Ceph and the secondary Corosync ring
  • 2x 10G in LACP for VMs, with one VLAN across all nodes configured as the live-migration network
This seems to be working pretty solidly, and I still get the recommended / compliant redundancy on the first Corosync ring while not wasting a pricier 10G link on cluster synchronization. The secondary link, while not compliant, would only be used if both 1G management links died, so I guess this is "good enough" for us. So far, so good.
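In case it helps, a rough per-node /etc/network/interfaces sketch of that layout. The NIC names (eno1, enp1s0f0, ...), addresses and hash policy are just examples and will differ on your hardware:

Code:
# 2x 1G onboard in active-backup: management + primary Corosync link
auto bond0
iface bond0 inet static
        address 192.168.1.11/24
        gateway 192.168.1.1
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode active-backup
        bond-primary eno1

# 2x 10G in LACP: Ceph + secondary Corosync link
auto bond1
iface bond1 inet static
        address 10.10.10.11/24
        bond-slaves enp1s0f0 enp1s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

# 2x 10G in LACP: VM traffic on a VLAN-aware bridge (incl. migration VLAN)
auto bond2
iface bond2 inet manual
        bond-slaves enp1s0f2 enp1s0f3
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094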
 
OK, so is this correct?
2x 1G active-backup
-vlanXX management
-vlanXY cluster creation

2x 10G LACP -vlanYY Ceph (the public network in the Ceph configuration?)
2x 10G LACP -bridge or whatever for VMs outside
 
2x 1G active-backup
-vlanXX management
-vlanXY cluster creation
In my case I've kept it simple: the 2x 1G is untagged, no VLANs, and management is also the cluster network.
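In that simple case the cluster can just be created over the bond0 management address; a sketch with placeholder cluster name and IPs:

Code:
# on the first node: create the cluster on the untagged management address
pvecm create mycluster --link0 192.168.1.11

# on each further node: join via an existing node, passing this node's own address as link0
pvecm add 192.168.1.11 --link0 192.168.1.12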
 
Solved! Works like a charm with this config. In the end I separated management and cluster into two VLANs for more isolation.
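In case it helps someone else, splitting management and the cluster network into two VLANs on top of the active-backup bond looks roughly like this (VLAN IDs and addresses are just examples); corosync's link0 then points at the cluster-VLAN address:

Code:
# management VLAN on top of the active-backup bond
auto bond0.10
iface bond0.10 inet static
        address 192.168.10.11/24
        gateway 192.168.10.1

# dedicated cluster (corosync) VLAN on the same bond
auto bond0.20
iface bond0.20 inet static
        address 192.168.20.11/24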
 
Hello
I wonder what's the reason behind this restriction?
If you intend to run your cluster network on the bonding interfaces, then you have to use active-passive mode on the bonding interfaces, other modes are unsupported.
 
Hello
I wonder what's the reason behind this restriction?
I'd also love to know. I've had a cluster running with 802.3ad LACP across 2x Cisco Nexus 3064s spanned with VPC in a lab environment for some time now and cannot see any negative effects so far.
 
