Proxmox & Ceph on multi-node chassis

elmo

Member
Apr 25, 2020
Hi all,

I'm currently running a 3-node Proxmox Ceph cluster. Everything is hyper-converged, meaning all nodes act as both storage and compute nodes.
This works really well, and I've come to really like and appreciate Ceph (as well as Proxmox!). While three nodes are the bare minimum, I'd like to
expand, as I feel it's a bit on the low side in terms of redundancy. However, my issue is that I don't have much rack space to play around with.

Does anyone here run Proxmox+Ceph in a multi-node chassis setup? E.g. four nodes (max) in each chassis, sharing the NVMe/SAS/SATA backplane
and power supplies, while each node acts as an independent computer with exclusive control of six disks/SSDs on that backplane.

I have a few thoughts on this myself, but would like to receive some input regarding this type of setup. I have 6U to play with, meaning the maximum
setup would/could include three servers, each 2U in height with four nodes, i.e. 12 nodes as an absolute maximum. The switching backend is 10 GbE, and all networks
will be separated according to best practice, of course (Ceph public, private, VM, corosync, etc.). Not all nodes need to run Ceph; I would consider having
compute-only nodes as well.
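For reference, the Ceph public/private split I mean is just the standard ceph.conf setting; something like the following, where the subnets are placeholders for whatever VLANs/interfaces end up being used:

```ini
[global]
    # Client and monitor traffic ("public" network)
    public_network = 10.10.10.0/24
    # OSD replication and heartbeat traffic ("cluster"/private network)
    cluster_network = 10.10.20.0/24
```
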

Pros? Cons? Any input and feedback is appreciated.

Regards,
Elmo
 
Does anyone here run Proxmox+Ceph in a multi-node chassis setup?
Yes, but only as compute nodes; not because there's anything inherently wrong with the configuration, but because there are too few disks per node to use them as storage nodes.
I have a few thoughts on this myself, but would like to receive some input regarding this type of setup. I have 6U to play with, meaning the maximum
setup would/could include three servers, each 2U in height with four nodes, i.e. 12 nodes as an absolute maximum. The switching backend is 10 GbE, and all networks
will be separated according to best practice, of course (Ceph public, private, VM, corosync, etc.). Not all nodes need to run Ceph; I would consider having
compute-only nodes as well.
That's a lot of possible answers without ever defining a question.

What for?
 
Yes, but only as compute nodes; not because there's anything inherently wrong with the configuration, but because there are too few disks per node to use them as storage nodes.

That's a lot of possible answers without ever defining a question.

What for?

Well, the question at the bottom:
Pros? Cons?

However, let's be a bit more specific:
As you pointed out, there would be a limit of six disks per node. With a 10 GbE network, what would the recommended
number of drives per node be, if the drives we use are a mix of SATA SSDs and SAS HDDs?
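A quick back-of-envelope on the 10 GbE side (the per-drive throughput figures are assumptions for typical sequential rates, not measurements, and Ceph replication multiplies write traffic, so the real limit comes sooner):

```python
# Rough sketch: how many drives can saturate one 10 GbE link?
# Per-drive throughput figures are assumed typical sequential rates;
# adjust for your actual hardware.

NIC_MB_S = 1250  # 10 GbE ~= 1250 MB/s raw; less in practice


def drives_to_saturate(per_drive_mb_s: int, nic_mb_s: int = NIC_MB_S) -> int:
    """Smallest number of drives whose combined sequential throughput
    meets or exceeds the NIC's bandwidth."""
    n = 1
    while n * per_drive_mb_s < nic_mb_s:
        n += 1
    return n


assumed = {"SATA SSD": 550, "SAS HDD": 250}  # MB/s, assumed
for name, mb_s in assumed.items():
    print(f"{name}: ~{drives_to_saturate(mb_s)} drives fill 10 GbE")
```

So with these assumed numbers, a handful of SATA SSDs already exceeds a single 10 GbE link, which is one reason to keep drive counts per node modest (or bond/upgrade links).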

Also, what is the minimum recommended number of Ceph storage nodes for a production cluster?
 
