Ceph Cluster different OSD sizes - performance/issues expected?

Hi,

Can I expect degraded performance or other issues if my Ceph cluster contains OSDs of different sizes, e.g. 50x 3840 GB Kingston DC1500M and 50x 7680 GB Kingston DC1500M SSDs, distributed over 5 nodes?
 
Hi,
No, if the CRUSH map and the OSD weights are configured correctly, there is not much of an impact to expect.
The Ceph docs are your friend...
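If you want to see how your OSDs are weighted right now, the standard CLI gives a quick overview (just an illustrative sketch; the exact columns vary a bit between Ceph releases):

  # per-OSD CRUSH weight, reweight and utilization
  ceph osd df tree

  # only the CRUSH hierarchy with its weights
  ceph osd tree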
 
Aren't these values more or less set by Proxmox?
In my case at least, the weight is set to the size of each disk and the reweight is set to 1 for all OSDs (see the attached screenshot).
 
Hi,
you can change this via the Ceph CLI and optimize your placement so that it better fits different disk sizes.
The Ceph documentation will help you here. Proxmox itself does not do much with Ceph beyond creating a default config with default values.
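As a rough sketch of what that can look like (the OSD id and weight below are made-up examples, not values from your cluster): by default the CRUSH weight is simply the raw capacity in TiB, so a 3.84 TB SSD ends up around 3.49 and a 7.68 TB SSD around 6.98, and you only override that if the defaults don't match your layout.

  # manually set the CRUSH weight of one OSD (id and value are examples)
  ceph osd crush reweight osd.12 3.49

  # with mixed capacities, the built-in balancer usually evens out PG placement
  ceph balancer mode upmap
  ceph balancer on
  ceph balancer status

That way the larger OSDs receive proportionally more PGs and the balancer keeps the fill levels of the different disk sizes close together.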
 
