Best Practices for multiple NICs

mikebru10
New Member
Aug 30, 2022
I have a server with four NICs: 2x 1Gb and 2x 10Gb. I am trying to figure out how I should configure these interfaces. The 10Gb NICs are connected to a fully managed Cisco Nexus 5548 10Gb switch, and the 1Gb NICs are connected to 1Gb interfaces on a Cisco 2960. I have three of these servers that I plan to configure for HA, plus a TrueNAS appliance with 4x 10Gb NICs and 2x 1Gb NICs. I run iSCSI, NFS, and SMB for storage on my network; I don't have the necessary licenses to run FCoE, which is why I'm running iSCSI. I'm coming from an ESXi/vCenter environment and got tired of the licensing costs, so I decided to move to Proxmox.

Any assistance would be greatly appreciated.
 
Here's what I did with the same number of NICs:
1x 1GbE for Corosync alone
2x 10GbE form an LACP bond
That bond plus 1x 1GbE form another active-backup bond, which serves as the network device for vmbr0.
All VLANs come in tagged over vmbr0 and have QoS flags set. That way you can give Ceph cluster traffic the highest priority and use one of the VLANs as the second Corosync ring. (A rough config sketch follows below.)
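
For illustration, here's roughly what that stack could look like in /etc/network/interfaces on a Proxmox node. The interface names (eno1, eno2, enp5s0f0, enp5s0f1), the VLAN ID, and all addresses are placeholders, not from the posts above; adjust them to your hardware, and note the Nexus side needs a matching LACP port-channel:

Code:
# dedicated 1GbE port for Corosync ring 0 (name/address are examples)
auto eno1
iface eno1 inet static
    address 10.0.0.11/24

# LACP bond over the two 10GbE ports (needs a port-channel on the Nexus)
auto bond0
iface bond0 inet manual
    bond-slaves enp5s0f0 enp5s0f1
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100

# active-backup bond: the 10GbE bond is primary, the second 1GbE port
# (on the 2960) is the failover path
auto bond1
iface bond1 inet manual
    bond-slaves bond0 eno2
    bond-mode active-backup
    bond-primary bond0
    bond-miimon 100

# VLAN-aware bridge carrying the VMs and the management address
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.11/24
    gateway 192.168.1.1
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# example tagged VLAN on the bridge, used as Corosync ring 1
auto vmbr0.50
iface vmbr0.50 inet static
    address 10.0.50.11/24

Apply with ifreload -a (ifupdown2) and check the bond states in /proc/net/bonding/ before putting VMs on it.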

This way you have the maximum throughput that your hardware allows and also a failover link to the other switch. Ceph over 1GbE is not nice, but it keeps the cluster alive, in case the 10GbE switch should die.
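
To make that VLAN the second ring, the nodelist entries in /etc/pve/corosync.conf carry both addresses, something like the sketch below (node names, IDs, and addresses are again just examples; remember to bump config_version when editing the file):

Code:
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.0.0.11    # dedicated 1GbE Corosync network
    ring1_addr: 10.0.50.11   # tagged VLAN over vmbr0 (fallback ring)
  }
  # ... one node { } block per cluster member
}

Corosync 3 with kronosnet will then fail over between the two links on its own if one of them goes down.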
 