Best practice networking for a PVE cluster

tka222 · New Member · Dec 11, 2025
Hi guys,

I just installed 3 brand-new Dell R660 servers with 2 x 25 GbE SFP+, 2 x 1 GbE, and 2 x 32G FC adapters each. The storage array will be an NVMe Dorado 6000 V6 over 32G FC. The multipath config and LVM thick shared volumes are OK, but I have questions about the networking.

What is the best practice?

Create a bond from the 2 x 25 GbE for the management, VM, and migration (vMotion-like) network? I want to use a classic config, no LACP, just trunk ports like in VMware.
Create a bond from the 2 x 1 GbE for the cluster network?

Or is it better to create dedicated links:
1 x 25 GbE - management, VMs
1 x 25 GbE - migration
1 x 1 GbE - cluster

The next step will be migrating about 50 VMs from the VxRail.
Thanks a lot! :)
 
Hi @tka222,

Given that your storage is FC-based, migration traffic (presuming you mean live migration of VMs between PVE hosts) does not carry a lot of data.
Your best option is to create an LACP bond that a) provides redundancy for all traffic, most importantly the cluster traffic, and b) allows for some bandwidth aggregation.
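As a rough sketch, an LACP bond over the two 25 GbE ports in /etc/network/interfaces could look like the following (the NIC names ens1f0/ens1f1 are placeholders; adjust to your hardware):

```
# 2 x 25 GbE in an LACP (802.3ad) bond; requires LACP on the switch side
auto bond0
iface bond0 inet manual
    bond-slaves ens1f0 ens1f1
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100
    bond-lacp-rate 1
```

A bridge or VLAN interfaces for management, VMs, and migration would then sit on top of bond0.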



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 

Hi @tka222,

Sorry, this is a bit off-topic from your question, but can you share your multipath config file? My setup also uses Dorado storage.

Thank you..
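Not the OP, but for reference, a generic dm-multipath skeleton for an ALUA-capable array looks roughly like this. The vendor/product strings and tuning values below are illustrative assumptions, not validated against a Dorado 6000 V6; check Huawei's interoperability documentation for the recommended settings for your array and firmware:

```
defaults {
    user_friendly_names yes
    find_multipaths     yes
}

devices {
    device {
        # Placeholder identifiers; verify against the array's actual
        # SCSI vendor/product strings (lsscsi or multipath -ll).
        vendor               "HUAWEI"
        product              "XSG1"
        path_grouping_policy "group_by_prio"
        prio                 "alua"
        path_checker         "tur"
        failback             "immediate"
        no_path_retry        12
    }
}
```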
 
I want use a classic config, no LACP, just trunk ports like in the VMware.
I would advise against that; a LAG with either LACP or active/passive will work just fine. Depending on what switches you have, pick one and stick with it.

25G+25G in bond0
1G+1G in bond1
FC for FC.

bond1 is simply for cluster traffic, with corosync link0 on this bond. On this bond, but really on both bonds, set bond-lacp-rate 1; see the docs here: Corosync Over Bonds.
On bond0, create VLANs for management and migration, plus a secondary VLAN for corosync. Whether you use a dedicated secondary VLAN for corosync or just use the management VLAN as the secondary corosync link doesn't really matter.
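For example, a cluster with corosync link0 on the dedicated bond1 network and the management network as link1 could be created like this (the cluster name and all IP addresses are placeholders):

```shell
# First node: link0 on the dedicated cluster network (bond1),
# link1 on the management network as a redundant corosync link.
pvecm create mycluster --link0 10.10.10.1 --link1 192.0.2.11

# Each additional node joins via the first node's address and
# passes its own per-link addresses.
pvecm add 10.10.10.1 --link0 10.10.10.2 --link1 192.0.2.12
```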

Then also create vmbr0 on bond0, and on that bridge create the VLANs you require. I would use SDN for this, but it can also be done manually.
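Putting that together, one possible /etc/network/interfaces layout for this scheme (NIC names, VLAN IDs, and addresses are all placeholder assumptions):

```
# bond0: 2 x 25 GbE, LACP; carries management, migration, and VM VLANs
auto bond0
iface bond0 inet manual
    bond-slaves ens1f0 ens1f1
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate 1

# bond1: 2 x 1 GbE, LACP; dedicated to corosync (link0)
auto bond1
iface bond1 inet static
    address 10.10.10.1/24
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate 1

# management address on a VLAN of bond0
auto bond0.10
iface bond0.10 inet static
    address 192.0.2.11/24
    gateway 192.0.2.1

# migration network on another VLAN of bond0
auto bond0.20
iface bond0.20 inet static
    address 10.20.20.1/24

# VLAN-aware bridge for guest traffic
auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```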

Best regards from a fellow VXRail refugee =)