Recommended number of OSDs per node?

yurtesen

Active Member
Nov 14, 2020
Hello,

The wiki says:
https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster
We recommend a Ceph cluster size, starting with 12 OSDs, distributed evenly among your, at least three nodes (4 OSDs on each node).

Can somebody explain why 4 on each node? How is this calculated, and is it the same for HDD, SSD and NVMe?

Does it also mean there should be 4 drives dedicated to the OSDs, or can a single drive be divided into 4 partitions?
Because the Ceph documentation says https://docs.ceph.com/en/latest/start/hardware-recommendations/
Tip
Running multiple OSDs on a single SAS / SATA drive is NOT a good idea. NVMe drives, however, can achieve improved performance by being split into two or more OSDs.

Thanks!
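For what it's worth, splitting an NVMe into multiple OSDs is done with LVM rather than manual partitions. A sketch of the relevant commands (the device path is just an example, adjust for your hardware; on recent Proxmox VE versions pveceph exposes the same option):

```shell
# Let ceph-volume carve one NVMe into two OSDs via LVM
# (no manual partitioning needed):
ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1

# On recent Proxmox VE versions, pveceph wraps this:
pveceph osd create /dev/nvme0n1 --osds-per-device 2
```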
 
Hi,
I don't know about a formula, but it's a good rule of thumb. With more OSDs, the work can be split more evenly, which is nice especially when a drive fails. If you have multiple OSDs on a single NVMe, the performance might be better, but if the NVMe fails, it will have a bigger impact.
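To put a rough number on the "bigger impact" point: with 3 nodes and the host as the failure domain, a failed OSD's data has to be re-replicated onto the remaining OSDs of the same node. A back-of-the-envelope sketch (not a Ceph simulation, just the arithmetic):

```python
def extra_load_per_survivor(osds_per_node: int) -> float:
    """Fraction of the failed OSD's data each remaining OSD on
    that node absorbs, assuming data was spread evenly and the
    host is the failure domain (so recovery stays on the node)."""
    survivors = osds_per_node - 1
    return 1 / survivors

for n in (2, 4, 8):
    print(f"{n} OSDs/node -> each survivor absorbs "
          f"{extra_load_per_survivor(n):.0%} of the failed OSD's data")
```

With only 2 OSDs per node, the single survivor takes 100% of the failed drive's data (and needs the free capacity to hold it); with 4 it is about 33% each, which is one reason more, smaller OSDs recover more gracefully.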