3-node cluster - how to set up Corosync on a full mesh network?

MaLe

Jul 26, 2021
Hi,

I'm currently in the process of installing a 3-node cluster. This is my network setup for each node:

- 1x 10G NIC -> LAN (Switch)
- 2x 25G NIC -> CEPH (Full Mesh)
- 2x 1G NIC -> Corosync (Full Mesh)

Full mesh cabling of the nodes:

Node1 <-> Node2
Node1 <-> Node3
Node2 <-> Node3

For the 10G LAN I'm using a Linux bridge, and for the CEPH network I'm using an OVS bridge, as described in the Proxmox wiki. But I'm currently struggling with the Corosync setup. The setup should also provide redundancy: if one of the 1G network ports fails, packets should be forwarded to the other node via the third node. There are many suggestions not to use a bond or a bridge and to use single network adapters instead. But I'm also considering an OVS bridge like for CEPH, because RSTP converges faster after topology changes.
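For context on the redundancy question: Corosync 3 (kronosnet) can also handle link redundancy natively via multiple rings, independent of any bridging. A hypothetical corosync.conf fragment with two rings might look like the sketch below; all names and addresses here are placeholders, not values from this thread:

```
# Hypothetical /etc/pve/corosync.conf excerpt (placeholder names/addresses)
nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1    # first Corosync link
    ring1_addr: 10.10.11.1    # second, independent link
  }
}
totem {
  cluster_name: cluster1
  version: 2
  ip_version: ipv4
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
}
```

With a point-to-point mesh, however, each node reaches its two peers over different NICs, so a plain dual-ring setup doesn't map directly onto the cabling; that is why a bridge (or a routed setup) comes up as the way to present the mesh as one segment per ring.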

Do you have any suggestions for my setup?

Thanks,
Martin
 
Hi Martin,
Have you finished this setup?
I plan to do a similar setup: I have 3 servers with 8x 1G NICs each.
I also plan to have separate Corosync and storage full-mesh networks.
 
Yes, we now have this setup in production and it has been running very stably. I decided to use OVS bridges for both storage and Corosync. From my research, a mesh isn't ideal for Corosync, but an OVS bridge is still the best of the available options.


This is my network configuration for storage and Corosync:

Code:
auto eno1
iface eno1 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr2
        ovs_options other_config:rstp-enable=true other_config:rstp-path-cost=150 other_config:rstp-port-admin-edge=false other_config:rstp-port-auto-edge=false other_config:rstp-port-mcheck=true vlan_mode=native-untagged
#1G CoroSync 1

auto eno2
iface eno2 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr2
        ovs_options other_config:rstp-enable=true other_config:rstp-path-cost=150 other_config:rstp-port-admin-edge=false other_config:rstp-port-auto-edge=false other_config:rstp-port-mcheck=true vlan_mode=native-untagged
#1G CoroSync 2

auto ens23f0np0
iface ens23f0np0 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr1
        ovs_mtu 9000
        ovs_options other_config:rstp-enable=true other_config:rstp-path-cost=150 other_config:rstp-port-admin-edge=false other_config:rstp-port-auto-edge=false other_config:rstp-port-mcheck=true vlan_mode=native-untagged
#25G Nic CEPH 1

auto ens23f1np1
iface ens23f1np1 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr1
        ovs_mtu 9000
        ovs_options other_config:rstp-enable=true other_config:rstp-path-cost=150 other_config:rstp-port-admin-edge=false other_config:rstp-port-auto-edge=false other_config:rstp-port-mcheck=true vlan_mode=native-untagged
#25G Nic CEPH 2

auto vmbr1
iface vmbr1 inet static
        address 172.16.2.1/24
        ovs_type OVSBridge
        ovs_ports ens23f0np0 ens23f1np1
        ovs_mtu 9000
        up ovs-vsctl set Bridge ${IFACE} rstp_enable=true other_config:rstp-priority=32768 other_config:rstp-forward-delay=4 other_config:rstp-max-age=6
        post-up sleep 10
#CEPH Bridge

auto vmbr2
iface vmbr2 inet static
        address 172.16.3.1/24
        ovs_type OVSBridge
        ovs_ports eno1 eno2
        up ovs-vsctl set Bridge ${IFACE} rstp_enable=true other_config:rstp-priority=32768 other_config:rstp-forward-delay=4 other_config:rstp-max-age=6
        post-up sleep 10
#CoroSync Bridge
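Assuming OVS is installed and the bridges come up as configured above, the RSTP state can be inspected on each node. A sketch of the usual checks (exact output varies by OVS version):

```
# Show RSTP role/state of the Corosync bridge and its ports
ovs-appctl rstp/show vmbr2

# Confirm the bridge-level RSTP options were actually applied
ovs-vsctl get Bridge vmbr2 rstp_enable other_config
```

On a healthy mesh, one of the two ports per node ends up in a forwarding role and the other blocks; pulling a cable should move traffic to the remaining path within the configured forward-delay.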
 
I remember that I looked into the various options for implementing Corosync over mesh at the time, including SDN Fabric. After further research, I decided on the OVS option. I remember being told that mesh wasn't the optimal solution for Corosync, but that an OVS bridge was the best of all the possibilities. Sorry, but I can't remember the exact reason. As far as I can remember, there is also a thread in this forum with more information on this specific point. For me, the important thing is that it runs smoothly, and it has been for quite some time now.
 