Cluster with dual 1/10Gb network, interface priority wrong

codyrocco

New Member
Oct 18, 2025
I have two nodes with similar configs, each with a 1Gb onboard NIC and a 10Gb PCIe card. On both nodes, the bridge (vmbr0) uses the 10Gb interface.

node 1:
default via 192.168.75.1 dev vmbr0 proto kernel onlink
192.168.75.0/24 dev enp2s0f0 proto kernel scope link src 192.168.75.100
192.168.75.0/24 dev vmbr0 proto kernel scope link src 192.168.75.112

node 2:
default via 192.168.75.1 dev vmbr0 proto kernel onlink
192.168.75.0/24 dev eno1 proto kernel scope link src 192.168.75.200
192.168.75.0/24 dev vmbr0 proto kernel scope link src 192.168.75.121
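If I read the routing right, both nodes have two routes to the same /24, and the kernel uses whichever matches first, which is the 1Gb NIC. A quick check from node 1, using node 2's 1Gb address from above:

ip route get 192.168.75.200
# should report "dev eno1" on node 2 / "dev enp2s0f0" on node 1 (the 1Gb route), not vmbr0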

Initially I set up the cluster with both interfaces active, giving the 10Gb interface higher priority, only to find that migration transfer speed was gigabit...
Deleted the configs and started over, using just one interface for clustering: again, gigabit speeds.
Disabled the gigabit interfaces, removed the physical link ("...and stay down!"), reconfigured network and cluster. Voila, 10-gigabit migration speeds!
Re-enabled the gigabit interfaces... back to misery!

Where am I going wrong? Maybe one of you can point out the obvious mistake I'm unable to see.

(the main switch is 10Gb; all LAN traffic passes through it)
Use a different subnet and bridge, e.g. 192.168.76.0/24, for the 10G NICs, and in the datacenter options change the migration network to this network.
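A minimal sketch of what that could look like, with interface names assumed (use the 10G port that isn't already in vmbr0):

# /etc/network/interfaces -- new bridge on the spare 10G port, no gateway here
auto vmbr1
iface vmbr1 inet static
        address 192.168.76.100/24
        bridge-ports enp2s0f1   # assumed name of the second 10G port
        bridge-stp off
        bridge-fd 0

# /etc/pve/datacenter.cfg -- pin migration traffic to the new subnet
migration: secure,network=192.168.76.0/24

With no gateway on vmbr1, the default route stays on vmbr0 and only migration traffic moves to 192.168.76.0/24.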
Isn't it possible to have them in the same subnet, with no gateway specified on the 1Gb interfaces?
After a lot of experimentation with passing through the second 10Gb port [both nodes have dual 10Gb ports], I'd rather keep the management interface separate (for example, today I found that PCIe splitting does not also give separate IOMMU groups, which required a quick on-site visit for corrective action).
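For reference, the grouping can be checked with the usual sysfs walk (plain shell, nothing Proxmox-specific):

for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/}; n=${n%%/*}
    printf 'IOMMU group %s: ' "$n"
    lspci -nns "${d##*/}"
done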