Corosync network configuration

andreas1o

Hi, I was wondering if this network configuration is supported for Corosync?

vmbr0.4093 is the Corosync network and vmbr0.60 is the management network.


auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

iface eno4 inet manual

auto bond0
iface bond0 inet manual
    slaves eno3 eno4
    bond_miimon 100
    bond_mode 802.3ad
    bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet manual
    bridge_ports bond0
    bridge_fd 0
    bridge_stp off
    bridge_vlan_aware yes


auto vmbr0.60
iface vmbr0.60 inet static
    address 10.230.240.8
    netmask 255.255.255.0
    gateway 10.230.240.1

auto vmbr0.4093
iface vmbr0.4093 inet static
    address 10.230.150.2
    netmask 255.255.255.0
 
If possible, I would suggest putting the cluster network on an interface of its own (and adding a second ring for redundancy). Most problems with our cluster stack are due to high latencies in a loaded/shared network, and Corosync is very sensitive to that. Check out our reference documentation:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_cluster_network
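For illustration only, a dedicated Corosync interface could look roughly like the sketch below. Treat it as an example: eno1 just happens to be unused by the bond in your config, and the addresses are simply reused from your post.

auto eno1
iface eno1 inet static
    address 10.230.150.2
    netmask 255.255.255.0

The second ring is then declared per node in /etc/pve/corosync.conf. A rough sketch of a nodelist entry with two rings (the node name is a placeholder, the addresses are again taken from your post):

nodelist {
  node {
    # placeholder node name; addresses reused from the config above
    name: pve1
    nodeid: 1
    quorum_votes: 1
    # ring0: dedicated corosync network, ring1: management network as fallback
    ring0_addr: 10.230.150.2
    ring1_addr: 10.230.240.8
  }
}

On corosync 2 you would additionally set rrp_mode: passive in the totem section; on corosync 3 the second address simply becomes knet link 1.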
 
bond_xmit_hash_policy layer2+3
Any particular reason you need that set explicitly (I thought it was the default)?

Otherwise the config looks OK at first glance (interfaces -> bond -> bridge -> VLANs), but you need to test it in your environment, of course!
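As a rough sketch of how you could verify it afterwards (standard tools, nothing specific to this setup):

corosync-cfgtool -s            # ring/link status as seen by corosync
pvecm status                   # quorum and cluster membership
cat /proc/net/bonding/bond0    # bond state, including the active transmit hash policy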