Proxmox & Ceph on multi-node chassis

elmo

Active Member
Apr 25, 2020
Hi all,

I'm currently running a 3-node Proxmox Ceph cluster. Everything is hyper-converged, meaning all nodes act as both storage and compute nodes.
This works really well and I've come to really like and appreciate Ceph (as well as Proxmox!). While 3 nodes are the bare minimum, I'd like to expand, as I feel it's a bit on the low side in terms of redundancy. My issue, however, is that I don't have that much rack space to play around with.

Does anyone here run Proxmox+Ceph in a multi-node chassis setup? E.g. 4 nodes (max) per chassis, sharing the NVMe/SAS/SATA backplane
and power, with each node acting as an independent server that fully controls its own 6 disks/SSDs on that backplane.
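One thing I would plan for: since the nodes inside one chassis share the backplane and power, I'd want CRUSH to treat the chassis, not the host, as the failure domain, so replicas never all land in the same box. A minimal sketch of what I have in mind, using the default CRUSH bucket types and made-up names (chassis1, pve1, pve2, mypool):

# create a chassis bucket and move the hosts in that chassis under it (names are examples)
ceph osd crush add-bucket chassis1 chassis
ceph osd crush move chassis1 root=default
ceph osd crush move pve1 chassis=chassis1
ceph osd crush move pve2 chassis=chassis1

# replicated rule that places each replica in a different chassis, applied to a pool
ceph osd crush rule create-replicated rep-by-chassis default chassis
ceph osd pool set mypool crush_rule rep-by-chassis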

I have a few thoughts on this myself, but would like to get some input regarding this type of setup. I have 6U to play with, meaning the maximum
setup could include 3 servers, each 2U high with 4 nodes, i.e. 12 nodes as an absolute maximum. The switching backend is 10GbE, and all networks
will of course be separated according to best practice (Ceph public, Ceph cluster/private, VM, corosync, etc.). Not all nodes need to run Ceph; I would
consider having pure compute nodes as well.
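Just to be explicit about what I mean by separated networks, something along these lines (the subnets are made-up examples, not my actual addressing):

# /etc/pve/ceph.conf (Proxmox's shared ceph.conf)
[global]
    public_network  = 10.10.10.0/24   # Ceph client/monitor traffic
    cluster_network = 10.10.20.0/24   # OSD replication and heartbeat traffic

VM traffic would live on its own bridge/VLAN, and corosync would get its own dedicated link(s) so cluster quorum doesn't have to compete with storage traffic.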

Pros? Cons? Any input and feedback is appreciated.

Regards,
Elmo
 
Does anyone here run Proxmox+Ceph in a multi-node chassis setup?
Yes, but only as compute nodes; not because there's anything inherently wrong with the configuration, but because there are too few disks per node for them to be useful as storage nodes.
I have a few thoughts on this myself, but would like to get some input regarding this type of setup. I have 6U to play with, meaning the maximum
setup could include 3 servers, each 2U high with 4 nodes, i.e. 12 nodes as an absolute maximum. The switching backend is 10GbE, and all networks
will of course be separated according to best practice (Ceph public, Ceph cluster/private, VM, corosync, etc.). Not all nodes need to run Ceph; I would
consider having pure compute nodes as well.
That's a lot of possible answers without ever defining a question.

What for?
 
Yes, but only as compute nodes; not because there's anything inherently wrong with the configuration, but because there are too few disks per node for them to be useful as storage nodes.

That's a lot of possible answers without ever defining a question.

What for?

Well, the question at the bottom:
Pros? Cons?

However, let's be a bit more specific. As you pointed out, there would be a limit of 6 disks per node. With a 10GbE network, what would the
recommended number of drives per node be, if the drives we use are a mix of SATA SSDs and SAS HDDs?
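For context, my own back-of-the-envelope math (throughput figures are rough assumptions, not measurements):

10GbE link:            ~1.25 GB/s raw, realistically ~1.1 GB/s usable
SATA SSD, sequential:  ~0.5 GB/s each  -> roughly 2-3 SSDs can saturate the link
SAS HDD, sequential:   ~0.2 GB/s each  -> roughly 5-6 HDDs can saturate the link

So my gut feeling is that the 10GbE backend, not the 6-slot limit, becomes the bottleneck first, but I'd like to hear whether that matches real-world experience.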

Also, what is the minimum recommended number of Ceph storage nodes for a production cluster?