Proxmox Ceph different OSD counts per host

markmarkmia

Dumb newb question I'm sure, but when creating an OSD with pveceph, will it automatically sort out the CRUSH map stuff when using different sized OSDs or a different number of OSDs per host?

Example:

I've got 4 hosts with 8 2.5" bays each; two bays are used in RAID1 for the Proxmox boot volume, and the other 6 I use (or would like to use) for OSDs.
I've got another 4 hosts with 6 2.5" bays, so 4 available for OSDs after the Proxmox boot volume RAID1.

Do I aim to match the same amount of storage per host? So on the first 4 hosts I'd have, say, 4x 480GB SSDs and 2x 1TB SSDs, and on the other 4 hosts 4x 1TB SSDs each?

Can I just fill them all with 480GB drives and Proxmox/Ceph will figure it out and balance it accordingly?

Or do I just limit it to 4 SSDs per host and waste the extra 2 slots on the hosts that have 6 available bays?

I read that Ceph prefers similar/identical hardware in a pool, so my assumption is the latter, but I thought I'd ask in case I can somehow take advantage of the additional capacity.

Thanks!
 
For an even distribution of PGs, identical hardware is the key. Not only the distribution, but also speed/latency improves with it, since differently sized disks receive more or fewer requests according to the amount of PGs located on those OSDs. Ceph allows you to fine-tune the weight of each OSD to compensate and get a more even distribution. PGs are, by the default rule, replicated on the host level, so the number of disks on each host can make a difference as well.

In other words, even hardware distribution, less tuning work, less headache. ;)

You can see the amount of PGs and the size and weight of the disks with the command 'ceph osd df tree'. And last but not least, don't use a RAID controller for OSDs; these tend to use their own caching and read/write algorithms that are counterproductive for a Ceph cluster (e.g. OSD starvation, blocked requests).
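
For illustration, a minimal sketch of checking and, if needed, adjusting the distribution; the OSD ID osd.5 and the weight value are placeholders, and reweighting is only worth it if the PG counts are noticeably uneven:

# show PG count, utilization, size and CRUSH weight per OSD, grouped by host
ceph osd df tree

# lower the CRUSH weight of an OSD that attracts too many PGs
# (weights normally reflect the disk size in TiB; 0.85 is just an example value)
ceph osd crush reweight osd.5 0.85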
 
OK, understood. I think the solution in my case will be to put Proxmox on M.2 SSDs on a PCIe adapter and boot from that, freeing up six 2.5" bays on all servers in the cluster. Thank you!
 
So, if I want to migrate over to larger drives, what would the best-practice approach be? Still try to keep the same amount of storage per host to stay balanced (even if it affects performance a little)? Or is keeping the same number of OSDs per host more important? (I would think that since the failure domain is the host, the same amount of storage per host would matter most, but I'd like to hear from the experts! :)
 
Replace each OSD, one by one. Whether you see decreased performance or not also depends greatly on how much load and data the cluster has.

The more OSDs and hosts you have, the better your performance will get (all other bottlenecks aside).
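
As a rough sketch of that one-by-one replacement (the OSD ID 7 and device /dev/sdX are placeholders; the pveceph syntax shown is the current one, older releases used pveceph createosd/destroyosd):

# take the old OSD out so its PGs get rebalanced onto the remaining OSDs
ceph osd out osd.7

# wait until all PGs are active+clean again
ceph -s

# on the host that owns the OSD: stop the daemon and remove the OSD
systemctl stop ceph-osd@7
pveceph osd destroy 7

# swap in the larger disk, then create the new OSD on it
pveceph osd create /dev/sdX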
 
