LACP Bonding Questions

bensode
Member · Jan 9, 2019 · Harrisburg, PA
Hello. I'm looking to set up my Proxmox 5.4 cluster with 2 bonded ports for VMs/management and 2 bonded ports for integrated CEPH storage. Looking over the Network Config docs, there is a reference to running the cluster network from the bonded link requiring active-passive mode. Which of the modes defined in the doc is active-passive? I read through them all and even word-searched the page, but the only hit on "passive" is the mention of the active-passive requirement. Did they mean active-backup?

Thanks!
 
Oct 11, 2018 · USA
I assume you are referring to section 3.3.6 of the Admin Guide: Network Configuration which contains:
If you intend to run your cluster network on the bonding interfaces, then you have to use active-passive mode on the bonding interfaces, other modes are unsupported.
This is referring to the PVE (corosync) cluster and indeed should be active-backup. There was a good explanation posted recently, but I can't find it at the moment. The CEPH cluster network can be a LACP bond if you desire.
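For reference, here is a minimal /etc/network/interfaces sketch of both bond types discussed above: active-backup for the corosync cluster network and LACP (802.3ad) for CEPH. The NIC names, addresses, and subnets are assumptions for illustration, not taken from the thread:

```
# Active-backup bond for the PVE (corosync) cluster network
# (NIC names eno1/eno2 and the subnet are assumed)
auto bond0
iface bond0 inet static
    address 10.10.10.11/24
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode active-backup

# LACP bond for the CEPH network (NIC names eno3/eno4 assumed;
# the switch ports must be configured for LACP as well)
auto bond1
iface bond1 inet static
    address 10.10.20.11/24
    bond-slaves eno3 eno4
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
```

Note that layer3+4 hashing spreads CEPH traffic across both links per-connection; a single TCP stream still only uses one link.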
 

bensode
Is this link also responsible for migrations, live and otherwise? I don't see any breakdown of where that traffic runs or whether it can be rearranged.
 
Oct 11, 2018 · USA
For the PVE cluster network, the Cluster Manager page provides more information. It can be separated from the VM bridge/admin/client network, and running separate networks is recommended. A dedicated migration network for VMs is discussed at the end of that linked document.
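As a sketch of that last point: the migration network can be pinned in /etc/pve/datacenter.cfg. The subnet below is an assumption for illustration:

```
# /etc/pve/datacenter.cfg -- route VM migration traffic over a dedicated subnet
migration: secure,network=10.10.30.0/24
```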

For the CEPH cluster network, it can be separated from the CEPH public network. The Ceph documentation shows and describes the traffic on each network. If you only have a limited number of 10G interfaces, it is better to run both the CEPH public and cluster networks over shared 10G links than to have either one on slower interfaces, due to latency.
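The public/cluster split is configured in ceph.conf. A minimal sketch, with assumed subnets:

```
# /etc/pve/ceph.conf (symlinked to /etc/ceph/ceph.conf on PVE) -- subnets assumed
[global]
    public network  = 10.10.20.0/24    # client and monitor traffic
    cluster network = 10.10.21.0/24    # OSD replication and heartbeat traffic
```

If no cluster network is set, Ceph carries all traffic over the public network.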
 

bensode
Awesome, thanks for the light reading over the weekend. We currently run on 10G switches with a single connection for vmbr0 and another single 10G connection for CEPH. I've added two new connections (four total), split between two 10G switches, and was looking to configure half for CEPH and the other half for client access/management. I guess we'll need another set for corosync.
 
Oct 11, 2018 · USA
You're welcome. If that solves your issue, please mark the thread as solved from your OP.

bensode said:
I guess we'll need another set for corosync.
That would be current best practice; however, it can be a 1G link -- Corosync multicast doesn't require anything more. What matters for corosync is low, stable latency, not bandwidth.
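To show where the dedicated corosync network ends up, here is an excerpt of a PVE 5.x /etc/pve/corosync.conf nodelist entry. The node name and address are assumptions; remember to increment config_version when editing this file:

```
# /etc/pve/corosync.conf (excerpt) -- node name and subnet are assumed
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.40.11    # address on the dedicated 1G corosync network
  }
}
```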
 
