Homelab: Ceph requirements

startoff

Hi all
I searched the forum and Googled around but couldn't find an answer to my question...

I'm planning to set up a 3-node Proxmox cluster with Ceph (HCI) to play around with in my homelab.
The box I'd like to install on has only a single 1Gb NIC. Is this supported?

Thx.
 
Hi,

The box I'd like to install on has only a single 1Gb NIC. Is this supported?


It's a grey area. Normally we do not really support putting the corosync cluster traffic on the same link as I/O traffic like Ceph or backups, as that starves the corosync traffic, which, while not using much bandwidth, has real-time requirements on packet latency.
It may work, but if the cluster turns out to be brittle, this is almost certainly the reason, just FYI.
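
For reference, a dedicated corosync link is really just a separate address per node in /etc/pve/corosync.conf; a rough sketch (node names and subnets are made up, and on current versions ring0_addr/ring1_addr correspond to knet link0/link1):

nodelist {
  node {
    # ring0_addr: dedicated corosync NIC, ring1_addr: fallback over the shared LAN
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 192.168.1.11
  }
  # same pattern for the other two nodes
}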

Also, with just 1G NICs you may not get really good performance out of Ceph either, although that depends on the targeted OSD count and their read/write rates.

But three-node clusters have a nice and relatively cheap workaround to resolve this bottleneck:
an additional full-mesh network, which needs no extra switch and can carry the Ceph private and VM migration traffic. That frees up the LAN/WAN-facing NICs for VM and corosync traffic, and gives Ceph the full bandwidth.

Depending on your budget and setup capabilities I'd buy either:
* three dual 10G NICs; those will get Ceph to a good level, but are naturally a bit more expensive
* three dual 1G NICs; those will at least save you from cluster instability and give Ceph a slight boost. This would be the "on a tight budget" option; I'd really recommend the 10G ones if at all possible.

See also: https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
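
To give an idea of the routed variant described there: each node gets two directly cabled links to its peers and a static /32 route per peer. A rough sketch for one node (interface names and addresses are only examples, see the wiki for the current recommended setup):

# /etc/network/interfaces on node 1
# ens19 is cabled directly to node 2, ens20 directly to node 3
auto ens19
iface ens19 inet static
    address 10.15.15.50/24
    up   ip route add 10.15.15.51/32 dev ens19
    down ip route del 10.15.15.51/32 dev ens19

auto ens20
iface ens20 inet static
    address 10.15.15.50/24
    up   ip route add 10.15.15.52/32 dev ens20
    down ip route del 10.15.15.52/32 dev ens20

Node 2 and node 3 mirror this with 10.15.15.51 and 10.15.15.52; the Ceph cluster network then simply points at 10.15.15.0/24.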
 
Thank you for the hints! I was afraid of that...

Because the nodes will be placed in the normal living area, I want to buy fanless ones (probably cirrus7).
Those are a bit limited regarding connection options, so I think I'll have to go with two additional USB 1Gb adapters and separate the VM/Ceph traffic.
In that case I'll have 3x 1Gb adapters per node. Not the fastest setup, I know. But in the end I'll have a maximum of 6 to 7 VMs running across all three nodes, without a high data change rate.
I hope that will be sufficient.

Regards
Roland
 
But three-node clusters have a nice and relatively cheap workaround to resolve this bottleneck:
an additional full-mesh network, which needs no extra switch and can carry the Ceph private and VM migration traffic. That frees up the LAN/WAN-facing NICs for VM and corosync traffic, and gives Ceph the full bandwidth.

I would like to equip my servers with dual 10G NICs:
1 NIC for Ceph replication
and 1 NIC for client communication and cluster sync
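
On the Ceph side that split would map to the public and cluster networks in ceph.conf, roughly like this (the subnets are just placeholders for my planned networks):

[global]
    public_network  = 192.168.1.0/24
    cluster_network = 10.10.10.0/24

public_network carries the client/MON traffic, cluster_network the OSD replication; corosync would be configured separately on top of the client-facing network.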


I understand having a separate network for Ceph replication and redundancy, but 3 separate networks just to keep latency low is not really modern "converged". My switches support 802.1p. Can I use 802.1p priority for the corosync traffic instead? How would I do that? Thanks in advance!
 
I understand having a separate network for Ceph replication and redundancy, but 3 separate networks just to keep latency low is not really modern "converged". My switches support 802.1p. Can I use 802.1p priority for the corosync traffic instead? How would I do that? Thanks in advance!
That's arguable. But you can configure your cluster any way you like; it's just that, in our experience, the best solution is to have a separate corosync network on its own physical NIC port(s).
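
Proxmox VE itself has no built-in option for 802.1p priorities, but if the corosync traffic leaves the host over a tagged VLAN interface, one way is to give the corosync packets a high skb priority and map that to a PCP value when the VLAN tag is added. An untested sketch, with example interface names, VLAN ID and priority, and assuming the default corosync port of UDP 5405 (check your corosync.conf):

# map skb priority 6 to 802.1p PCP 6 on the VLAN device
ip link add link eno1 name eno1.40 type vlan id 40 egress-qos-map 6:6

# give outgoing corosync packets skb priority 6
iptables -t mangle -A POSTROUTING -o eno1.40 -p udp --dport 5405 -j CLASSIFY --set-class 0:6

Keep in mind this only helps once the frames reach a switch that honors PCP; on the host itself corosync still shares the port with the Ceph traffic, which is why a dedicated NIC port remains the safer option.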