[SOLVED] Which Corosync network interface

bond347
Member
Oct 21, 2022
Hi All,

My server has 2x 1Gb and 2x 10Gb network interfaces. I used the 2x 1Gb interfaces (bonded) for Proxmox VE management, and I plan to use the 2x 10Gb interfaces (bonded, with VLANs) shared between the VM and DMZ networks.

1. I read that the cluster uses the Corosync Cluster Engine. Is creating a dedicated Corosync network mandatory?
2. If I do create a Corosync network, where should I put it: on the 2x 1Gb or the 2x 10Gb interfaces?

Thanks
 
Corosync itself can handle multiple networks and will switch if one becomes unavailable.
Best practice is to have at least one physical network reserved for Corosync, to avoid issues when other services take up all the available bandwidth. This is especially important if you want to use the HA functionality, which relies on Corosync to determine whether a node is still part of the cluster.

So ideally, add another 1 Gbit NIC, configure Corosync to use the network on it as its primary link, and add the other networks as additional links for extra safety.
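For example, you can pass several links when creating the cluster (the addresses below are just placeholders for a dedicated Corosync network and the management network):

    # link0 = dedicated corosync network, link1 = mgmt network as fallback
    pvecm create my-cluster --link0 10.10.30.1 --link1 192.168.1.1

    # on a joining node, pass that node's own addresses
    pvecm add 192.168.1.1 --link0 10.10.30.2 --link1 192.168.1.2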
 

Hi Aaron,

Thanks for the reply.

Is it OK to utilize the 10Gb speed by creating a bonded (2x 10Gb) virtual interface with VLAN separation:

vlan 10 - 192.168.10.0 - VM production
vlan 20 - 192.168.20.0 - VM DMZ network
vlan 30 - 192.168.30.0 - corosync network

With this bonding, if one of the physical NICs goes down, the vlan 30 virtual interface will still function because the second interface in the bond stays up.
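Something like this in /etc/network/interfaces is what I have in mind (interface names and addresses are just examples):

    auto bond0
    iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1
        bond-mode active-backup
        bond-miimon 100

    # vlan 30 for corosync, directly on the bond
    auto bond0.30
    iface bond0.30 inet static
        address 192.168.30.11/24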

Thanks
 
Corosync will be sharing the same physical network with other services. That can go well, as long as the other services never use up all the available bandwidth.
Again, we recommend having at least one physical interface solely for Corosync for the best stability, with additional links (Corosync can use up to 8) so it can fall back and keep the connection alive.
At the very least, you should configure a second Corosync link on the other physical 1 Gbit network.
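On an existing cluster that means editing /etc/pve/corosync.conf and giving every node a second address (remember to bump config_version), roughly along these lines (addresses are placeholders):

    nodelist {
      node {
        name: pve1
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 10.10.30.1
        ring1_addr: 192.168.1.1   # second link as fallback
      }
      # the other nodes get a ring1_addr the same way
    }

    totem {
      # ...existing options stay as they are...
      interface {
        linknumber: 0
      }
      interface {
        linknumber: 1
      }
    }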

How will you plan your storage and backup targets? These are usually the kinds of services that can saturate a network.
 

Hi Aaron,

I appreciate your explanation and thoughts.

Here is my plan for 3 nodes, to fully utilise the bonding feature and get NIC redundancy:
1. Create the 1st bond on the 2x 1Gb interfaces (active/standby). Set the switch ports to trunk mode and allow the pmx mgmt VLAN (the network used to access the pmx GUI) and the corosync VLAN.
2. Create the 2nd bond on the 2x 10Gb interfaces (active/standby). Set trunk mode for the VM, DMZ, and backup VLANs.

With the bonds from items 1 & 2 I'll be happy to have NIC redundancy; a rough sketch of item 1 is below.
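Rough sketch of item 1 in /etc/network/interfaces (VLAN IDs, names, and addresses are examples only):

    auto bond1
    iface bond1 inet manual
        bond-slaves eno1 eno2
        bond-mode active-backup
        bond-miimon 100

    # pmx mgmt vlan on a bridge, used for the GUI
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.11/24
        gateway 192.168.1.254
        bridge-ports bond1.5
        bridge-stp off
        bridge-fd 0

    # corosync vlan directly on the bond
    auto bond1.30
    iface bond1.30 inet static
        address 192.168.30.11/24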

Does the above bonding plan sound OK?
 

Hi Aaron and members,

I found this statement in the Proxmox docs: "If you intend to run your cluster network on the bonding interfaces, then you have to use active-passive mode on the bonding interfaces, other modes are unsupported".

So bonding for the cluster network is supported. I can see the bond mode Active-Backup in the GUI, but where do I find the Active-Passive mode?
 
That seems like an old recommendation; I sent a patch for the docs. To answer your question: active-passive is just another name for the Linux active-backup bonding mode (mode 1), so that is the one to pick. Corosync on a bond should work, but best practice is still to give it its own dedicated network, with more links as fallback.
 
Hi everyone,

Yes, it works fine on a bonded interface. In our production cluster I'm using an LACP bond of two 10 GbE interfaces with a VLAN for Corosync, and it works perfectly. I posted about this a while back, citing the same part of the documentation:
https://forum.proxmox.com/threads/ha-networking-lacp-and-clustering.72299/

I'm using the 10 GbE links for "everything"; all of the traffic is VLANed, but there's still a remote possibility of overloading the network and causing Corosync issues. Being able to easily configure a separate failover interface for Corosync made me feel a lot better about the configuration.
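For reference, the relevant part of my config looks roughly like this (names and VLAN IDs are examples, not my exact setup):

    auto bond0
    iface bond0 inet manual
        bond-slaves enp65s0f0 enp65s0f1
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer2+3

    # dedicated vlan for corosync on the LACP bond
    auto bond0.50
    iface bond0.50 inet static
        address 10.0.50.1/24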
 
