Proxmox HA/Ceph cluster - VLANs for VM traffic, Ceph and Corosync - what traffic needs to go between VLANs?

victorhooi

I have a 4-node Proxmox 6.3 cluster running Ceph for VM block storage. The underlying disks are NVMe.

Each node has a single-port Mellanox NIC with a 100GbE connection back to a 100GbE switch.

This switch is then connected to an upstream router via a 10Gbps port. (However, I'm not sure the router itself is really capable of 10Gbps routing).

I have three separate VLANs set up in Proxmox:
  1. VM traffic (vmbr0, going over enp1s0)
  2. Ceph traffic - enp1s0.15
  3. Corosync heartbeat network - enp1s0.19.
All three networks pass over the same physical 100GbE link.

[Screenshot: node network configuration in the Proxmox GUI]
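For reference, in /etc/network/interfaces terms the setup above is roughly the sketch below (the interface names are the real ones; the addresses and subnets are placeholders):

  auto lo
  iface lo inet loopback

  iface enp1s0 inet manual

  # 1. VM traffic
  auto vmbr0
  iface vmbr0 inet static
      address 192.0.2.11/24
      gateway 192.0.2.1
      bridge-ports enp1s0
      bridge-stp off
      bridge-fd 0

  # 2. Ceph traffic (VLAN 15)
  auto enp1s0.15
  iface enp1s0.15 inet static
      address 10.10.15.11/24

  # 3. Corosync heartbeat network (VLAN 19)
  auto enp1s0.19
  iface enp1s0.19 inet static
      address 10.10.19.11/24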

My question is around inter-VLAN routing, and network contention.

I assume traffic within each VLAN only goes as far as the 100GbE switch, and all is well there: traffic within, say, the Ceph network runs at line rate, as does VM traffic.

However, any traffic that needs to go between the VLANs would bottleneck at our router.

What traffic in this case needs to pass between the three VLANs? And would that bottleneck?

We could move to inter-VLAN routing on the switches; I just wasn't sure whether it was worth it in this case.

What are people's experiences with Proxmox/Ceph networks, and is Layer 3 switching needed?
 
What traffic in this case needs to pass between the three VLANs? And would that bottleneck?
Taking your 1..3: none. That said, I still suggest using a separate NIC port for corosync (at least one link), though that's a different topic.
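To expand on that a little: each of those services is bound to its own subnet, so none of them needs to cross a VLAN. Ceph, for example, is pinned to its VLAN via the network settings in /etc/pve/ceph.conf; the subnet below is just a placeholder standing in for the Ceph VLAN (enp1s0.15):

  [global]
      # MONs and OSDs only bind to addresses inside these subnets,
      # so Ceph traffic stays on the Ceph VLAN
      public_network = 10.10.15.0/24
      cluster_network = 10.10.15.0/24

Corosync likewise only talks to the ring addresses from corosync.conf, and VM traffic stays on vmbr0, so the router is only involved for whatever the VMs themselves reach outside their own VLAN.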
 
I still suggest using a separate NIC port for corosync (at least one link), though that's a different topic.

Hello victorhooi,
you should separate the traffic as @Alwin said. Especially for corosync it is necessary to have low latency on the connection. When all traffic runs over the same link, the latency may become too high and you will get problems with the cluster.
The separate corosync connection (better two, in case one switch fails) does not need high speeds; 1 Gbit/s or even 100 Mbit/s is enough if only corosync traffic is on this interface. This network only connects your 4 cluster nodes and does not need to be routed.
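As a sketch of what a second corosync link looks like in /etc/pve/corosync.conf on Proxmox 6 (node name, addresses and the second subnet are made-up placeholders):

  nodelist {
    node {
      name: pve1
      nodeid: 1
      quorum_votes: 1
      # link 0: existing corosync VLAN
      ring0_addr: 10.10.19.11
      # link 1: new dedicated link (1 Gbit is fine)
      ring1_addr: 10.10.20.11
    }
    # ... the other three nodes follow the same pattern
  }

  totem {
    cluster_name: mycluster
    # config_version must be increased on every edit
    config_version: 5
    interface {
      linknumber: 0
    }
    interface {
      linknumber: 1
    }
    ip_version: ipv4-6
    secauth: on
    version: 2
  }

Kronosnet will then keep the cluster communication up over link 1 if link 0 goes down.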

You will get headaches with your setup if corosync runs over a link that also carries other traffic (especially storage traffic).
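If you want to check whether this is already an issue, corosync 3 (kronosnet) exposes per-link statistics; something along these lines should show the link state and latency counters (the exact stats key names can differ between versions):

  # show the local node's link state
  corosync-cfgtool -s

  # knet statistics, e.g. stats.knet.node2.link0.latency_ave
  corosync-cmapctl -m stats | grep latency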

Performance-wise, separating VM traffic (public) and Ceph traffic could also be a good idea.
Maybe it is possible to use another 100 Gbit port on the switch with a breakout cable to get 25 Gbit per node for the VM traffic.
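If a second port is added that way (say a 25 Gbit breakout leg showing up as a hypothetical enp2s0), the change in /etc/network/interfaces is essentially just re-pointing the bridge, roughly:

  # enp1s0 keeps the Ceph (enp1s0.15) and corosync (enp1s0.19) VLANs
  auto vmbr0
  iface vmbr0 inet static
      address 192.0.2.11/24
      gateway 192.0.2.1
      # hypothetical second port from the breakout cable
      bridge-ports enp2s0
      bridge-stp off
      bridge-fd 0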
 