I have 3 machines I am trying to cluster into an HA group. This is not a mission-critical setup, just a homelab, but it does run a lot of my personal stuff, so I'd like to keep downtime to a minimum (brief downtime for repairs is fine). Mostly it's a chance to learn and run HA in something close to a production environment.
I'm on the fence about a few different network configurations. I've read the docs' suggestion of bonded pairs, across switches, especially for Corosync. Then I found the updated 6.x docs explaining that Corosync can now use a second link for its own redundancy, which negates the need for bonding.
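For reference, as I understand it the 6.x feature is just extra links in corosync.conf, something along these lines (node names and addresses here are made up, only the first node shown):

    nodelist {
      node {
        name: pve1
        nodeid: 1
        ring0_addr: 10.10.1.11   # primary corosync network
        ring1_addr: 10.10.2.11   # second link, used if link 0 fails
      }
      # ...same pattern for the other two nodes
    }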
So, I have the following for each server:
2x 10 Gbps
2x 1 Gbps
My idea was to set up a bonded pair across the 10G links, spanning two stacked switches, and use it for Ceph sync and application access (neither Ceph nor the application data would ever saturate a single 10G link, much less 20G). The point is redundancy in case I reboot a switch or the 4-year-old pulls a plug (yes, that has happened). Ceph and app networking would sit on two different VLANs that I could throttle if need be (with the Ceph cluster network air-gapped).
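If it helps to picture it, here's roughly what I have in mind for the 10G side in /etc/network/interfaces (interface names, VLAN IDs and addresses are placeholders, and this assumes the stacked switches can do LACP across members):

    auto bond0
    iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1   # the two 10G ports
        bond-mode 802.3ad               # LACP across the stacked pair
        bond-xmit-hash-policy layer3+4

    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 10 20               # 10 = app traffic, 20 = Ceph

    auto vmbr0.20
    iface vmbr0.20 inet static
        address 10.10.20.11/24          # this node's Ceph address, no gateway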
Same concept on the 1G links for Proxmox management, Corosync, and CLRNET downloads, over different VLANs (some air-gapped).
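And the same idea on the 1G side, just with different VLANs (again, placeholder names and addresses):

    auto bond1
    iface bond1 inet manual
        bond-slaves eno1 eno2           # the two 1G ports
        bond-mode 802.3ad

    auto vmbr1
    iface vmbr1 inet static
        address 192.168.1.11/24         # Proxmox mgmt
        gateway 192.168.1.1
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 30 40               # 30 = corosync, 40 = downloads

    auto vmbr1.30
    iface vmbr1.30 inet static
        address 10.10.1.11/24           # corosync link 0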
So I wouldn't need Corosync's built-in link redundancy and would still get the benefits of redundancy from the bonds.
Am I on the right track here? I'm about to start testing VLAN-aware tagging on the bridges, or maybe experiment with Open vSwitch.
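For the VLAN-aware bridge test, my understanding is that once vmbr0 is VLAN-aware, putting a guest on the app VLAN is just a matter of tagging its NIC (VMID and VLAN ID below are only examples):

    qm set 101 --net0 virtio,bridge=vmbr0,tag=10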