How to best use multiple NICs

speck

New Member
May 8, 2025
Greetings Proxmoxians!

I am on the cusp of building a new cluster with 3 Dell servers. Each server has two dual-port 25Gb/s NICs, for a total of 4 ports.
At a high level, I count roughly 4 functions that the network needs to perform:
  1. Management access (SSH and web UI) to the hosts themselves.
  2. Intra-cluster synchronization within Proxmox itself (Corosync).
  3. Access to storage that hosts the VMs (via NVMe/TCP, iSCSI, NFS, CIFS, etc.).
  4. Data to/from guest VMs.
All connections will go to a common top-of-rack switch, possibly a pair of switches vLAG'd together for redundancy.

Here are some of the options I've thought about:

Option 1:
Team all four ports, and add separate VLANs and bridges (multi-homing) for each necessary function.

Option 2:
Dedicate 1 port to Corosync and 1 port to storage access, and team the remaining 2 ports to be shared by management and guest data traffic.

Option 3:
Dedicate 1 port to each function: 1 for Corosync, 1 for management, 1 for guest VM traffic, 1 for storage access.

Option 4:
Create two teams, each using 1 port from each NIC, to avoid an outage in the case of a single NIC or cable failure.
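
For illustration, Option 4 might look roughly like this in /etc/network/interfaces (interface names are placeholders, and I'm assuming 802.3ad/LACP teaming here):

# Two bonds, each using one port from each physical NIC,
# so either card can fail without taking down a whole bond.
# Hypothetical names: enp65s0f* = first NIC, enp66s0f* = second NIC.
auto bond0
iface bond0 inet manual
    bond-slaves enp65s0f0 enp66s0f0
    bond-mode 802.3ad
    bond-miimon 100

auto bond1
iface bond1 inet manual
    bond-slaves enp65s0f1 enp66s0f1
    bond-mode 802.3ad
    bond-miimon 100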


If you were in my shoes, how would you go about using these ports for the best performance and reliability? Would you consider adding additional port(s) (maybe just a single 1Gb adapter) to dedicate to Corosync?

What else am I overlooking to make my decision?


-Cheers,

speck
 
Hello,

I would advise you to dedicate two ports to Corosync, because redundancy is critical for cluster communication. Should you ever need to change something in Corosync or in your cluster communication, that redundancy lets you keep the cluster up while modifying one link after the other.
Management access can also be placed on the Corosync networks, as long as you don't push large amounts of data through your SSH/SCP connections.
This would leave you with one port dedicated to uplink/production and another one dedicated to storage; your two 25 Gbps ports seem appropriate for those purposes.
Also, VM migrations, HA relocations and backups can be routed over the backup network, so that the bandwidth dedicated to production is kept for production purposes.
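
As a rough sketch, the nodelist in corosync.conf with two dedicated Corosync links could look something like this (node names and addresses are only placeholders):

nodelist {
  node {
    name: pve1
    nodeid: 1
    # first dedicated Corosync network
    ring0_addr: 10.10.1.11
    # second, independent Corosync network
    ring1_addr: 10.10.2.11
  }
  node {
    name: pve2
    nodeid: 2
    ring0_addr: 10.10.1.12
    ring1_addr: 10.10.2.12
  }
  node {
    name: pve3
    nodeid: 3
    ring0_addr: 10.10.1.13
    ring1_addr: 10.10.2.13
  }
}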

Maybe this is a setup you would wish to consider.

Kind regards,

GD
 
1x LAG with 4x 25G NICs
or
2x LAG with 2x 25G NICs each
split VLANs as needed

+ add a 1G adapter:
2x 1G - two dedicated Corosync networks (better)
or
1x 1G - primary Corosync, secondary link on the LAG
 
I would advise you to dedicate two ports to Corosync, because redundancy is critical for cluster communication.
I would definitely NOT dedicate 2x 25 Gbps ports to Corosync, though. If you really need the redundancy of dedicated Corosync links, install a dual- or quad-port gigabit card and use the lower-speed ports for that, with separate switches for each link. But 25 Gbps is too expensive (in both money and performance) to dedicate to Corosync, imo.
 
Corosync can (and arguably should) be set to use all interfaces, or at least all internal ones. Proxmox recommends that the primary link be a dedicated 1 Gbps connection to ensure low latency, but there can be multiple backup links. As alluded to, the bandwidth needs are small.
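
For example (values purely illustrative), the link priorities in corosync.conf could look like this, with the dedicated 1 Gbps link preferred and a faster shared link kept as backup:

totem {
  # other totem options omitted
  interface {
    linknumber: 0
    # dedicated 1 Gbps link, highest priority, used while healthy
    knet_link_priority: 10
  }
  interface {
    linknumber: 1
    # backup link riding on the 25G LAG
    knet_link_priority: 5
  }
}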
 
But 25 Gbps is too expensive (in both money and performance) to dedicate to Corosync, imo.

This is along the lines of my thinking as well; it would pain me to dedicate a pair of 25Gb links to Corosync, making them unavailable for production work.

I think I am going to proceed halfway, as @czechsys mentioned: create a 4-link LAG and put all the traffic on it, separated by VLANs, for now. We'll see how that works.
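
Roughly, I'm picturing something like this in /etc/network/interfaces on each node (interface names, VLAN IDs and addresses are just placeholders I made up):

# all four 25G ports in one LACP bond
auto bond0
iface bond0 inet manual
    bond-slaves enp65s0f0 enp65s0f1 enp66s0f0 enp66s0f1
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100

# VLAN-aware bridge on top of the bond; guests tag their own VLANs
auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# management VLAN
auto vmbr0.10
iface vmbr0.10 inet static
    address 10.0.10.11/24
    gateway 10.0.10.1

# storage VLAN
auto vmbr0.20
iface vmbr0.20 inet static
    address 10.0.20.11/24

# Corosync VLAN
auto vmbr0.30
iface vmbr0.30 inet static
    address 10.0.30.11/24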

If the goal is to remove points of failure and provide a dedicated network for Corosync, I wonder if putting a 4-port NIC in each server, with direct connections to each of its peers and no switch at all, would be a workable idea. With some messy routing tables, it would also be possible for each server to use its peers as fallback routes if one of the cables were unplugged...
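
A minimal sketch of what node1's side of that mesh could look like (interface names, subnets and the peer layout are all hypothetical):

# eno3 cabled directly to node2, eno4 cabled directly to node3;
# each pair of hosts gets its own tiny point-to-point subnet
auto eno3
iface eno3 inet static
    address 10.15.12.1/30

auto eno4
iface eno4 inet static
    address 10.15.13.1/30

# the "messy routing" fallback idea: if the eno4 cable is pulled,
# reach node3 via node2 (node2 would need IP forwarding enabled),
# e.g. something like:
#     post-up ip route add 10.15.23.0/30 via 10.15.12.2 metric 200 || true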
 
I would definitely NOT dedicate 2x 25 Gbps ports to Corosync, though. If you really need the redundancy of dedicated Corosync links, install a dual- or quad-port gigabit card and use the lower-speed ports for that, with separate switches for each link. But 25 Gbps is too expensive (in both money and performance) to dedicate to Corosync, imo.
Yes, I agree with this. Sorry if I expressed it otherwise. My intent was to advise using the 25 Gbps ports for production and storage, and leaving the rest for Corosync. The latter needs low latency, but high bandwidth would be wasted there.