NIC guidance on 4-node cluster

wseda22

New Member
Jul 23, 2025
Hi all,
I am relatively new to Proxmox and am looking to set up a test four-node cluster. Coming from VMware, I am trying to figure out what is the best way to set up the NIC configuration. Any guidance would be greatly appreciated.

Each node has four 1 Gbps ports and two 10 Gbps ports. Per the attached diagram, my thought is to configure the two 10 Gb ports in a bond (bond0) for the two Ceph networks (public and cluster). The reason for the bond, as opposed to dedicated NICs, is redundancy in case I ever lose one of the ports temporarily. Next, I would bond two of the 1 Gb ports (bond1) for the management network and the VM networks. The reason I am not using all four 1 Gb ports is to avoid tying up too many switch ports for the servers.
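For reference, here is a rough sketch of what I imagine /etc/network/interfaces would look like on each node. Interface names and addresses are just placeholders, and I have used active-backup bonding for now since it needs no switch-side configuration:

auto lo
iface lo inet loopback

iface eno1 inet manual
iface eno2 inet manual
iface enp1s0f0 inet manual
iface enp1s0f1 inet manual

# bond0: 2x 10 Gbps, Ceph public + cluster traffic
auto bond0
iface bond0 inet static
        address 10.10.10.11/24
        bond-slaves enp1s0f0 enp1s0f1
        bond-mode active-backup
        bond-miimon 100

# bond1: 2x 1 Gbps, management + VM traffic
auto bond1
iface bond1 inet manual
        bond-slaves eno1 eno2
        bond-mode active-backup
        bond-miimon 100

# vmbr0: bridge on bond1 carrying the management IP and VM networks
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.11/24
        gateway 192.168.1.1
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0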

Is this a good approach or is there a better approach I should be taking? This would be a lab environment at work, primarily used for testing and occasionally for demonstrations/trainings.
(attached diagram: Proxmox_NIC_v1.png)
 
Hello,
It looks good in general. Here are some additional recommendations and notes:

The cluster communication (Corosync) relies on a stable, low-latency network, so it should have its own dedicated network [0]. It doesn't require much bandwidth; 1 Gbps is more than sufficient. What matters most is low latency.

Ideally, Corosync should run over its own network interfaces, and separate switches are recommended. Since Corosync handles redundancy itself (via multiple links), simple unmanaged switches are sufficient. The separate networks/interfaces are important because if, for example, replication or VM data traffic saturates a shared network, Corosync latency increases as well, and that can lead to nodes being treated as failed and fenced.
Please check [0] (cluster requirements) for more details and take a look at [1] for more information about fencing.
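As an example, assuming you set aside a dedicated port (or subnet) for Corosync and use the management network as a fallback link, the cluster could be created with redundant links roughly like this (the addresses are placeholders for your own subnets):

# on the first node
pvecm create lab-cluster --link0 10.20.20.11 --link1 192.168.1.11

# on each additional node, joining via the first node's link0 address
pvecm add 10.20.20.11 --link0 10.20.20.12 --link1 192.168.1.12

Corosync then fails over between link0 and link1 on its own, which is why simple unmanaged switches are fine for these links.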

Additionally, since you are considering bonds, the following recommendation from [2] applies:
If LACP bonds are used for corosync traffic, we strongly recommend setting bond-lacp-rate fast on the Proxmox VE node and the switch! With the default setting bond-lacp-rate slow, this mode is known to be problematic in certain failure scenarios...
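If you do go with LACP on the bond that ends up carrying Corosync traffic, the relevant settings in /etc/network/interfaces would look roughly like this (interface names are placeholders), and the equivalent "fast" LACP timer must also be configured on the switch side:

auto bond1
iface bond1 inet manual
        bond-slaves eno1 eno2
        bond-mode 802.3ad
        bond-lacp-rate fast
        bond-miimon 100
        bond-xmit-hash-policy layer3+4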

You might also consider setting up a QDevice for the cluster, since you have an even number of nodes. Please read [3] (Corosync External Vote Support) carefully to understand the drawbacks and implications of adding an external vote to a cluster with an even number of nodes.
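Setting it up is straightforward once you have a small external machine available (it must not be one of the cluster nodes); roughly:

# on the external host
apt install corosync-qnetd

# on all cluster nodes
apt install corosync-qdevice

# on one cluster node, pointing at the external host (IP is a placeholder)
pvecm qdevice setup 192.168.1.50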

[0] https://pve.proxmox.com/pve-docs/chapter-pvecm.html#pvecm_cluster_requirements
[1] https://pve.proxmox.com/pve-docs/chapter-ha-manager.html#ha_manager_fencing
[2] https://pve.proxmox.com/pve-docs/chapter-pvecm.html#pvecm_corosync_over_bonds
[3] https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_corosync_external_vote_support
 