Ceph disks of different sizes across nodes

logui

Member
Feb 22, 2024
Each node has a different disk size (2 TB, 1 TB, and 500 GB) that I will dedicate to Ceph. What will happen when I build the Ceph filesystem? Thanks
 
As usual: it depends. You did not tell us how many of them you have on how many nodes. Larger systems (5 nodes with multiple OSDs each) have no problem with different sizes.

If you have only three nodes with a single OSD each and the usual size=3/min_size=2, you can fill each OSD up to a maximum of ~400 GB (~80% of the smallest OSD). The reason is the size/min_size definition: every object needs to get stored on each of the three OSDs, so the smallest one defines the limit.
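
To make that arithmetic concrete, here is a minimal Python sketch for the 2 TB / 1 TB / 500 GB case (the 80% factor is the rule-of-thumb headroom from above, not a Ceph default):

```python
# With size=3 on 3 single-OSD nodes, every object is replicated to all
# three OSDs, so usable capacity is bounded by the smallest one.
osd_sizes_gb = [2000, 1000, 500]
usable_gb = min(osd_sizes_gb) * 0.8   # keep ~20% headroom below full
print(f"~{usable_gb:.0f} GB usable")  # ~400 GB
```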

And no, it is NOT an acceptable idea to go below size=3/min_size=2.

Disclaimer: I am not a Ceph specialist...
 
Sorry for the lack of details: 3 nodes and 1 OSD per node, with the sizes already mentioned.
 
You can use ~500 GiB in total if each node has only one OSD.

Ceph will store 3 replicas (one per node) by default, so if the nodes have different amounts of usable disk space for Ceph, the smallest one will be the limit.

This gets more nuanced once you have more than 3 nodes, or more generally, more nodes than what is configured as "size".
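
As a rough model of that nuance (a sketch under strong assumptions: one OSD per node, perfectly balanced data placement, and the same ~80% fill headroom as above; real CRUSH placement will be less even):

```python
def max_usable(node_caps_gb, size=3, fill=0.8):
    """Rough upper bound on logical capacity of a replicated pool.

    Each object has `size` replicas on distinct nodes, so a node holds
    at most one replica of any object. A logical capacity D is feasible
    when sum(min(cap, D)) >= size * D; binary-search the largest D.
    """
    caps = [c * fill for c in node_caps_gb]
    lo, hi = 0.0, sum(caps) / size
    for _ in range(60):
        mid = (lo + hi) / 2
        if sum(min(c, mid) for c in caps) >= size * mid:
            lo = mid
        else:
            hi = mid
    return lo

print(round(max_usable([2000, 1000, 500])))       # 400: smallest OSD dominates
print(round(max_usable([2000, 1000, 500, 500])))  # 800: a 4th node helps a lot
```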
 
Thank you for the information. Ceph was going to be an inefficient solution for me due to the different disk sizes, and therefore a lot of wasted disk space.

I came up with a different solution: I configured each disk with ZFS and enabled replication across all nodes for the critical VMs. For my home setup it's good enough, and I have PBS running daily on top. I think I have covered the basics.
 
