Cluster meshed Network recommendation

Jan 13, 2021
Hi,

I'm planning on setting up a 3-node cluster. Per node I have two 40 GbE ports and eight 10 GbE ports available. The cluster is not expected to grow, so I'm planning a meshed network setup:

2x 40 GbE meshed for Ceph storage
2x 10 GbE meshed for Ceph client
2x 10 GbE meshed for the cluster network (corosync ring0)
1x 10 GbE for public host connectivity
1x 10 GbE for public VM connectivity (I prefer to separate the host from the VMs)

This leaves me with 2x 10 GbE. I could use them as another meshed network or put them on a single 10 GbE switch (a SPOF).
I still need to figure out which connection to use for (live) migration traffic and for private inter-VM communication.
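
For reference, my mental model for each mesh is the broadcast-bond variant from the Proxmox "Full Mesh Network for Ceph Server" wiki article, roughly like this (NIC names and addresses are just placeholders):

# /etc/network/interfaces on node 1 (hypothetical NIC names and IPs)
# bond of the two 40 GbE ports, cabled directly to the other two nodes
auto bond1
iface bond1 inet static
        address 10.15.15.1/24
        bond-slaves enp65s0f0 enp65s0f1
        bond-mode broadcast
        bond-miimon 100
# nodes 2 and 3 would use 10.15.15.2 and 10.15.15.3

The 10 GbE meshes would look the same, just on their own subnets.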

My questions are:
1. How do I best use the two remaining 10 GbE ports? A dedicated network for private inter-VM communication? Use the switch with separate VLANs for inter-VM and live-migration traffic? Put live migration and inter-VM traffic on the Ceph client network and not use the ports at all? Other ideas?
2. Which connections should I then use for corosync ring1 and ring2?

kind regards

converged
 
Do you have two switches? Then I would build two bonds for public connectivity with two separate switches. Live migration can be done over the Ceph cluster network as well, and inter-VM traffic I would put into VLANs on the public connectivity.
Furthermore, I would use both meshed networks as Corosync rings, starting with the cluster network. If that is down you have other problems anyway. :)
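
To illustrate what I mean with the rings: the nodelist in /etc/pve/corosync.conf simply carries one address per mesh, something like this (addresses are made up, and which mesh you use as the second link is up to you):

# excerpt from /etc/pve/corosync.conf (placeholder addresses)
# ring0 on the cluster network mesh, ring1 on a second mesh (e.g. the Ceph client mesh)
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.20.20.1
    ring1_addr: 10.16.16.1
  }
  # pve2 and pve3 are defined the same way with their own addresses
}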
 
Unfortunately I only have one switch (can't change that), therefore I was thinking about another meshed network for inter-VM communication, so that the cluster as a whole would be independent of the switch even if public connectivity went down.
 
Do you have two switches? Then I would build two bonds for public connectivity with two separate switches.

Hi Ph0x,

How do you do this? Do you have an alternative other than MCLAG?

Thank you,
Rares
 
Unfortunately I only have one switch (can't change that), therefore I was thinking about another meshed network for inter-VM communication, so that the cluster as a whole would be independent of the switch even if public connectivity went down.
I'm not sure a meshed network is suitable for inter-VM communication, since the VMs will expect a single network. But maybe you can solve this with static routing.
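
What I have in mind with static routing is roughly the routed variant of the full-mesh setup, i.e. per-interface /32 routes to the other two nodes (interface names and addresses below are placeholders); whether that translates cleanly to bridged VM traffic is exactly the open question:

# /etc/network/interfaces on node 1, routed mesh (hypothetical names and addresses)
# first port, cabled directly to node 2
auto enp2s0f0
iface enp2s0f0 inet static
        address 10.30.30.1/24
        up ip route add 10.30.30.2/32 dev enp2s0f0
        down ip route del 10.30.30.2/32

# second port, cabled directly to node 3
auto enp2s0f1
iface enp2s0f1 inet static
        address 10.30.30.1/24
        up ip route add 10.30.30.3/32 dev enp2s0f1
        down ip route del 10.30.30.3/32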

How do you do this? Do you have an alternative other than MCLAG?
With two switches you would go for an active-backup bond.
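
Roughly like this, with one leg of the bond going to each switch (NIC names and addresses are placeholders):

# /etc/network/interfaces (hypothetical names and addresses)
# active-backup bond, one port to each switch
auto bond0
iface bond0 inet manual
        bond-slaves enp3s0f0 enp3s0f1
        bond-mode active-backup
        bond-miimon 100
        bond-primary enp3s0f0

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.11/24
        gateway 192.0.2.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0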
 
Hi,

I would recommend a "dedicated" network for live migration, because during a live migration the VM's memory is copied from RAM to RAM, which can easily saturate a 10 Gbit/s connection and disturb other traffic on the same link.
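
If you do set one up, you can pin live migration to that network in /etc/pve/datacenter.cfg, along these lines (the CIDR is a placeholder for your dedicated migration subnet):

migration: secure,network=10.40.40.0/24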

Greets
Stephan
 
So you would prefer a dedicated 10 GbE (which can easily be saturated, as you say) over a shared 40 GbE connection? Don't know if this is very sensible.
 
So you would prefer a dedicated 10 GbE (which can easily be saturated, as you say) over a shared 40 GbE connection? Don't know if this is very sensible.
If you are sure that live migration will not saturate 40 Gbit/s, then go with it. When we started with Proxmox clustering, we ran into issues when our shared 10 Gbit/s connection was saturated by live migrations, so other critical traffic saw high latency.
 
My current understanding is:
40 GbE mesh for Ceph storage
10 GbE mesh for Ceph client
10 GbE mesh for the cluster network
10 GbE mesh dedicated to live migration (and corosync ring1)
10 GbE to the switch for public Proxmox host connectivity
10 GbE to the switch for public and private VM connectivity (in separate VLANs; see the sketch below)

Would that be a reasonable approach?
 
Hi,

regarding my last post, I think my wording was off. In my current understanding there are the following types of traffic in a hyper-converged cluster:
1. Cluster communication (these are the IPs that the node hostnames resolve to)
2. Corosync (ring0 must be dedicated)
3. Ceph public
4. Ceph private
5. Live-migration
6. Public host traffic
7. Public VM traffic
8. Inter-VM communication

As stated above, 6. will get a dedicated switch port and 7./8. will share one port, separated into VLANs.
The 40 GbE mesh will be used for Ceph private (4.).
One 10 GbE mesh will be used for Corosync (2.).

That leaves two 10 GbE meshes for traffic types 1, 3 and 5.
This gives me some options:
1. I could put Ceph private and public (4, 3) on the 40 GbE mesh and use the two 10 GbE meshes as dedicated links for cluster communication (1) and live migration (5).
2. I could put Ceph public (3) and cluster communication (1) on one 10 GbE mesh and dedicate the other to live migration (5).
3. I could run live migration over the 40 GbE mesh alongside Ceph private (4) and have dedicated 10 GbE meshes for Ceph public (3) and cluster communication (1).
4. ???

What would be the best approach here?
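
For reference, whichever option it ends up being, my understanding is that on the Ceph side it boils down to the two subnets in ceph.conf (placeholder subnets):

# /etc/pve/ceph.conf excerpt (placeholder subnets)
[global]
        # Ceph public / client traffic (3.)
        public_network = 10.16.16.0/24
        # Ceph private / OSD replication (4.)
        cluster_network = 10.15.15.0/24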


regards

converged
 
