Proxmox cluster and Ceph network redundancy with 3 nodes

benoitc

Member
Dec 21, 2019
This is a follow-up to another post, but I have simplified the problem by removing iSCSI from the equation for now. What I am trying to understand is whether this setup could work for Ceph, and how to achieve network redundancy...

I have 3 nodes with 2 NICs each (2x 10 GbE). Each port is connected to a distinct 10G switch (all ports are 10G), following the schema below:

[Attachment: Scan 21 Dec 2020 at 11.10.png (network diagram)]

For now, each switch is connected to the router and the switches are not interconnected. Each node has the following properties.
So, just to check again: would the configuration above be enough for Ceph, even if it's not the fastest? Or would it be better to try to achieve HA using a spare NAS, or just replicate the VMs when needed?

Right now the Proxmox cluster traffic is handled by switch 1 and the Ceph cluster traffic by switch 2. But then I have no redundancy: if one switch crashes, I lose either the Ceph cluster or the Proxmox cluster. This is what I am trying to fix.

At first I wanted to set up a bond in active-backup mode. Actually two bonds: one on the VLAN handling Ceph, with priority on switch 2, and another for the Proxmox cluster, with priority on switch 1. But this isn't supported by the UI. Maybe manually? Is this supported by the kernel?

A second option would be to set up a mesh network, which would require replacing the NVMe PCIe extension card with a 2x 25G card. Then I guess I would need to add another SATA drive for the system, maybe one that can reuse these 2 NVMe disks?

What could be the other options? I am quite a newbie in networking; would an MSTP setup work? Is there any configuration I could look at for it?

Any hints/feedback are welcome :)
 
So, just to check again: would the configuration above be enough for Ceph, even if it's not the fastest?
Depends on the workload, but in principle it works. The only downside is that you won't be able to achieve reliable HA, since that requires extra NIC ports.
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_ha_manager

At first I wanted to set up a bond in active-backup mode. Actually two bonds: one on the VLAN handling Ceph, with priority on switch 2, and another for the Proxmox cluster, with priority on switch 1. But this isn't supported by the UI. Maybe manually? Is this supported by the kernel?
Well, a manual setup can be done. But it still has the same HA issue as above.
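Roughly, such a manual setup in /etc/network/interfaces could look like the sketch below (untested; the interface names eno1/eno2, the VLAN IDs 10/20 and the addresses are just examples). The idea is active-backup bonding over VLAN sub-interfaces, so each bond can prefer a different switch via bond-primary:

Code:
# Sketch only - adjust interface names, VLAN IDs and addresses to your setup.
auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

# VLAN sub-interfaces used as bond members
auto eno1.10
iface eno1.10 inet manual
auto eno2.10
iface eno2.10 inet manual
auto eno1.20
iface eno1.20 inet manual
auto eno2.20
iface eno2.20 inet manual

# VLAN 10 = Proxmox cluster network, primary on the port facing switch 1
auto bond0
iface bond0 inet static
    address 10.10.10.11/24
    bond-slaves eno1.10 eno2.10
    bond-mode active-backup
    bond-primary eno1.10
    bond-miimon 100

# VLAN 20 = Ceph network, primary on the port facing switch 2
auto bond1
iface bond1 inet static
    address 10.10.20.11/24
    bond-slaves eno1.20 eno2.20
    bond-mode active-backup
    bond-primary eno2.20
    bond-miimon 100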

A second option would be to set up a mesh network, which would require replacing the NVMe PCIe extension card with a 2x 25G card. Then I guess I would need to add another SATA drive for the system, maybe one that can reuse these 2 NVMe disks?
That certainly increases performance. But same HA issue. ;)
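For reference, the Proxmox wiki has a page on full-mesh setups for Ceph (https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server). The "routed (simple)" variant boils down to roughly the following per node; this sketch is for node 1 only, with placeholder interfaces ens19/ens20 and a placeholder 10.15.15.0/24 mesh network:

Code:
# Node 1 (10.15.15.50); node 2 is .51, node 3 is .52 - sketch only
auto ens19
iface ens19 inet static
    address 10.15.15.50/24
    up ip route add 10.15.15.51/32 dev ens19
    down ip route del 10.15.15.51/32

auto ens20
iface ens20 inet static
    address 10.15.15.50/24
    up ip route add 10.15.15.52/32 dev ens20
    down ip route del 10.15.15.52/32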

What could be the other options? I am quite a newbie in networking; would an MSTP setup work? Is there any configuration I could look at for it?
More configuration to take care of, and it still doesn't get you around the HA issue. One approach might be to add a quad-port NIC and run corosync on a separate NIC port. Then you can use the other ports for VM/backup/migration traffic and keep the 10 GbE for Ceph.
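For illustration, with a dedicated corosync network the relevant part of /etc/pve/corosync.conf ends up looking roughly like this (node names and the 10.0.30.0/24 subnet are placeholders; the file is managed via the cluster filesystem and config_version must be increased on every edit):

Code:
# /etc/pve/corosync.conf (excerpt, sketch)
nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.0.30.1   # dedicated corosync port on the extra NIC
  }
  node {
    name: node2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.0.30.2
  }
  node {
    name: node3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 10.0.30.3
  }
}

totem {
  cluster_name: mycluster
  config_version: 4          # must be increased on every change
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  secauth: on
  version: 2
}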
 
I see... I should have focused more on the HA part when I bought the hardware... would 4x 1G be enough?

I wonder if using the 2 NVMe disks inside a SATA enclosure would work well, but that would at least allow me to reuse them...
 
I see... I should have focused more on the HA part when I bought the hardware... would 4x 1G be enough?
Bandwidth isn't so important for corosync; 1 GbE is enough. The deciding factor is latency: corosync needs low and stable latency to provide accurate information about the state of the cluster.
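If you want to sanity-check a candidate corosync link, something as simple as this gives a first impression (the peer address is a placeholder):

Code:
# Round-trip time and jitter to a peer node over the candidate corosync network
ping -c 100 -q 10.0.30.2

# Once the cluster is up, show the state of the corosync links
corosync-cfgtool -s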

I wonder if using the 2 NVMe disks inside a SATA enclosure would work well, but that would at least allow me to reuse them...
Well, with adapters it's always a bit of a gamble. :cool: But it will quite definitely reduce performance. Why not use the OCuLink instead?
 
Bandwidth isn't so important for corosync; 1 GbE is enough. The deciding factor is latency: corosync needs low and stable latency to provide accurate information about the state of the cluster.
Rather than buying another card, I am thinking I could reuse the IPMI port from Supermicro (sideband). Has anyone tried such a thing?
 
This won't work, since the IPMI port is connected to the management subsystem and cannot be used by the network stack of the OS.
The port does not show up among your Ethernet ports, does it?
 
This won't work, since the IPMI port is connected to the management subsystem and cannot be used by the network stack of the OS.
The port does not show up among your Ethernet ports, does it?
True... I misread the doc. So I guess my only choices are to use a dedicated card or play with VLANs + QoS.
 
So I will go for an active-backup bond + VLAN strategy, as it sounds like the easier way to go. The only thing I am asking myself is whether to add an Ethernet port using a USB adapter to carry the main corosync ring. I wonder if that would hold up over time compared to "just" using a VLAN on the active interface with QoS for it. Has anyone had experience with this?
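If you do go the USB NIC route, one way to hedge against the adapter failing over time is to give corosync a second knet link and make the dedicated NIC the preferred one; with the default passive link mode, the highest-priority link that is up carries the traffic. A rough corosync.conf sketch, with placeholder addresses and only node 1 shown:

Code:
nodelist {
  node {
    name: node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.0.40.1   # USB 1 GbE NIC, dedicated corosync network
    ring1_addr: 10.0.50.1   # corosync VLAN on the active-backup bond
  }
  # node2 / node3 entries analogous
}

totem {
  interface {
    linknumber: 0
    knet_link_priority: 20  # prefer the dedicated NIC while it is up
  }
  interface {
    linknumber: 1
    knet_link_priority: 10  # fall back to the VLAN on the bond
  }
  # remaining totem options unchanged; bump config_version on every edit
}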
 
