Trunk or multiple NICs

akanarya

Member
Dec 18, 2020
Hi,

Sorry if this has already been answered.
We may have many physical servers for virtualization, which means they will host many different roles.
Consequently, each server may have multiple NICs.
I know this question really depends on what is needed, but I am seeking general advice and experiences.

Do you prefer to reduce the number of cables by eliminating the multiple NICs and carrying that traffic over a trunk port as VLANs, if bandwidth is no issue?
Some of my thoughts:
It probably isn't wise to share the Ceph private network with other traffic, I think.
Latency may be a problem for Corosync.
Per-VLAN bandwidth management could perhaps be done on the switch, but it can be cumbersome.

For example, do you prefer a single 10 Gb trunk port instead of, say, six separate 1 Gb links, except for the Ceph private network and Corosync?
I am not counting link redundancy here.
What are your general considerations?
Thanks
Ali
 
Well, for Corosync you should consider at least one dedicated physical link. You can add more dedicated ones if possible. You can also configure additional Corosync links on the other networks as fallbacks.
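
As an illustration (a minimal sketch, assuming a dedicated 10.10.1.0/24 network for the first link and a shared 192.168.1.0/24 network as fallback; the node name and addresses are made up), a node entry in corosync.conf with two links could look like this:

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.1.11    # link 0: dedicated Corosync NIC
    ring1_addr: 192.168.1.11  # link 1: fallback on a shared network
  }
}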

For Ceph and other large storage traffic, it is good to have dedicated networks, because they can use up all the bandwidth and congest the network for other services.
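
As a sketch of what that separation can look like (the subnets below are made up, adjust them to your setup), Ceph distinguishes the public network (client and monitor traffic) from the cluster network (OSD replication, i.e. the "Ceph private network") in ceph.conf:

[global]
    public_network = 10.10.20.0/24
    cluster_network = 10.10.10.0/24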

For VM traffic, a single trunk can work well. But that depends on how much bandwidth your guests will use.
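
One common way to set up such a trunk is a VLAN-aware bridge (the port name eno1 and the VLAN range below are only placeholders), for example in /etc/network/interfaces:

auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

Each guest NIC then gets its VLAN tag set on the VM's network device.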

If you want to go all the way, you could also use a dedicated migration network.
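
If you do set one up, the migration network can be selected cluster-wide in datacenter.cfg (the CIDR is only an example):

migration: secure,network=10.10.30.0/24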
 
Thanks Aaron,
Since I use Ceph to host VM disks, is it important to use a dedicated migration network?
If I am not wrong, a dedicated migration network is not necessary if the guest is offline.
But what if the guest is online? Is there enough data to migrate that a dedicated link would be preferable, considering Ceph?
 
Since I use Ceph to host VM disks, is it important to use a dedicated migration network?
You are right: if you migrate the guests offline and their disks are on shared storage, nothing is transferred. The new node just takes over ownership of the VM's config.

If you do live migrations though, the guest's memory needs to be synced to the new node until the remaining diff is so small that a switchover to the new node can be made with minimal downtime.

This means the migration network needs to be faster than the rate at which the RAM changes in the guest. If the network is slower, the migration will never catch up with the changed RAM that needs to be synced.
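
As a rough, made-up example: a busy guest dirtying about 500 MB of RAM per second produces roughly 4 Gbit/s of changes, so a 1 Gbit/s migration link would never converge, while a 10 Gbit/s link leaves plenty of headroom.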
 