Corosync network planning

Feb 6, 2025
New to Proxmox and working on cluster design. I've seen a few references that corosync should be on its own network/NICs, and have a backup link set.
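For anyone following along, my understanding is that the "backup link" part ends up looking roughly like this in /etc/pve/corosync.conf (cluster name, addresses and priority values here are invented, not from a real config):

```
totem {
  cluster_name: pvecluster
  version: 2
  link_mode: passive            # one link active at a time, failover on loss
  interface {
    linknumber: 0
    knet_link_priority: 20      # preferred link (dedicated corosync network)
  }
  interface {
    linknumber: 1
    knet_link_priority: 10      # fallback link (e.g. the 10 Gbit front network)
  }
}

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1      # this node's address on link 0
    ring1_addr: 192.0.2.11      # this node's address on link 1
  }
  # ... one node block per server
}
```

With link_mode: passive, corosync uses the available link with the highest knet_link_priority and fails over to the lower-priority one if it goes down.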

We have an existing Virtuozzo cluster. We were planning to re-use those servers, but they have only two 10 Gbit NICs (storage network and front/public network). The plan would be to replace them over time. In the meantime, though, we were hoping to install Proxmox remotely on these older servers; adding a NIC would cost a bit of money (before replacement) and, more importantly, require a trip to the data center, unless we put the NICs in all at once on the existing live servers, which I'm a bit hesitant to do.

So, "how important" is it to have corosync on its own physical network, as opposed to using the 10 Gbit "front side" network?

For reference, we currently have 5 servers and would be adding one to get started (then 3+3 during the VM migration), plus a PBS. Also, the Internet traffic is relatively small in this equation.

Thanks for your input.
 
Thanks for the pointer. I had read multiple threads but not found that one yet. Sounds like we should do it "the hard way" if the rare-but-possible outcome of skipping it is an unexpected server reboot.
 
Bond the two interfaces as bond0 and put everything on top as VLANs.
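Something like this in /etc/network/interfaces, as a sketch (NIC names, VLAN IDs and addresses are examples, adjust to your hardware):

```
auto enp1s0f0
iface enp1s0f0 inet manual

auto enp1s0f1
iface enp1s0f1 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# management / front network
auto vmbr0.10
iface vmbr0.10 inet static
    address 198.51.100.11/24
    gateway 198.51.100.1

# dedicated corosync VLAN
auto vmbr0.50
iface vmbr0.50 inet static
    address 10.10.10.11/24

# storage VLAN
auto vmbr0.60
iface vmbr0.60 inet static
    address 10.10.60.11/24
```

Note that 802.3ad needs LACP configured on the switch side; active-backup works without it if your switches can't do MLAG across the pair.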

A dedicated corosync VLAN/subnet with QoS could do it (I have done it in the past). You want to give it priority over any other traffic (workloads, management, backups, Ceph, NFS, etc.).
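On the marking side, something like this is one way to do it, shown here with the corosync VLAN sitting directly on bond0 (VLAN ID, priority values and ports are examples; your switches have to be configured to actually honour the markings):

```
# Tag all egress traffic on the corosync VLAN with 802.1p priority 6,
# so PCP-aware switches can prioritise it over other VLANs.
ip link add link bond0 name bond0.50 type vlan id 50 egress-qos-map 0:6

# Optionally also mark corosync's UDP packets with DSCP CS6 for L3 gear.
# 5404-5406 are the usual corosync/knet defaults; verify in your corosync.conf.
iptables -t mangle -A POSTROUTING -p udp --dport 5404:5406 -j DSCP --set-dscp-class CS6
```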