Network Configuration Advice for 4 x NICs in Proxmox Cluster (New User)

TheGreenLan93

New Member
May 24, 2025
Hello experts!

I'm new to Proxmox and I'm setting up a 3-node Proxmox VE cluster with HA enabled.
Only 2 nodes will run production workloads; the 3rd node is just used for quorum and cluster stability.
Each production node has:
- 2 x dual-port 10/25G NICs (total 4 ports of 10/25G per node)
- 2 x 1G Base-T ports

I need to design a network layout for these traffic types: VM network, storage, backup...

I'm planning to use VLAN-aware Linux bridges and bonding (LACP where applicable), roughly along the lines of the sketch below, but as a beginner I'd appreciate your advice.
- Are there best practices for distributing these networks across the available NICs?
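Roughly what I have in mind for the VM / storage / backup side, using two of the 25G ports, is sketched below (interface names, VLAN IDs and addresses are just placeholders, not my real ones):

Code:
# Two 25G ports in an LACP bond (placeholder names)
auto bond1
iface bond1 inet manual
    bond-slaves enp65s0f0 enp65s0f1
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

# VLAN-aware bridge on top of the bond; VM, storage and backup
# traffic are separated by VLAN tags on the same bond
auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Host address in the storage VLAN (e.g. VLAN 30)
auto vmbr0.30
iface vmbr0.30 inet static
    address 10.0.30.10/24

# Host address in the backup VLAN (e.g. VLAN 40)
auto vmbr0.40
iface vmbr0.40 inet static
    address 10.0.40.10/24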
 
Thanks for your feedback,

Does that mean I can implement a configuration like this?

bond0 - 2 x 1GbE - Corosync (vmbr0.20 - VLAN20)
 
bond0 - 2 x 1GbE - Corosync (vmbr0.20 - VLAN20)
Don't put Corosync on a bond: it does its own failover, which is a lot faster than the failover of bonds, and putting it on a bonded interface can cause severe issues. Rather, use one dedicated, non-bonded link for Corosync and provide another network as failover (people oftentimes use the storage network).
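In corosync terms, that would look roughly like this in /etc/pve/corosync.conf (excerpt only; the priority values are just examples): the dedicated link gets the higher knet priority, and the storage network only takes over if it goes down.

Code:
totem {
  # ... other totem options unchanged ...
  interface {
    linknumber: 0
    knet_link_priority: 20   # dedicated, non-bonded corosync link (preferred)
  }
  interface {
    linknumber: 1
    knet_link_priority: 10   # e.g. the storage network, used as fallback only
  }
}

Each node then gets a ring0_addr on the dedicated link and a ring1_addr on the fallback network in the nodelist.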
 
Don't put Corosync on a bond: it does its own failover, which is a lot faster than the failover of bonds, and putting it on a bonded interface can cause severe issues. Rather, use one dedicated, non-bonded link for Corosync and provide another network as failover (people oftentimes use the storage network).
What if you only have 4 interfaces available? Two are already occupied by an LACP bond for VM data, which leaves only 2 interfaces for management and corosync traffic. In that situation, wouldn't it be better to combine corosync and management in a bond, so you can survive losing one interface? Or would you use one interface for management and the other for corosync, which leaves you without redundancy for corosync?
 
You can have corosync use all the NICs. See the doc page I posted above.
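For example (the addresses here are placeholders), both links can already be given when creating or joining the cluster, and corosync will fail over between them on its own:

Code:
# On the first node:
pvecm create my-cluster --link0 10.0.65.10 --link1 192.168.55.10

# When joining a further node (the first argument is an existing cluster member):
pvecm add 10.0.65.10 --link0 10.0.65.11 --link1 192.168.55.11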
I went over the doc. I will be bundling 2 interfaces in an LACP bond. These can't be used for the corosync network, since there is no IP configured on the bond, correct? See my example /etc/network/interfaces file below. The Proxmox management IP is on VLAN 55 and the extra corosync network is on VLAN 65, so I can add both 192.168.55.10 and 10.0.65.10 to the corosync setup.

Code:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet manual

auto eth2
iface eth2 inet manual

auto eth3
iface eth3 inet manual

# Management (eth0, VLAN 55)
auto vmbr1
iface vmbr1 inet static
    address 192.168.55.10/24
    gateway 192.168.55.1
    bridge-ports eth0
    bridge-stp off
    bridge-fd 0

# Corosync Ring 2 (eth1, VLAN 65)
auto vmbr2
iface vmbr2 inet static
    address 10.0.65.10/24
    bridge-ports eth1
    bridge-stp off
    bridge-fd 0

# VM Data LACP Bond
auto bond0
iface bond0 inet manual
    bond-slaves eth2 eth3
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

# VM Bridge
auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
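If I understand it correctly, corosync would then simply reference both addresses in the nodelist, roughly like this (node name and ID are just examples), and the bond itself needs no IP because corosync binds to the addresses on vmbr1 and vmbr2:

Code:
nodelist {
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.0.65.10      # corosync VLAN 65 (vmbr2)
    ring1_addr: 192.168.55.10   # management VLAN 55 (vmbr1)
  }
  # ... other nodes analogous ...
}

After that, corosync-cfgtool -s should show both links as connected.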