Using mixed-size SSDs with Ceph Luminous

NKarnel

New Member
Sep 4, 2019
Hello,

We have a small 3-node cluster with 4x Samsung SM863a 480GB SSDs each, but lately we're looking to expand by adding 960GB drives.
My question is as follows:

How easy and safe is it to start off by adding a 5th (960GB) disk to each machine, and will the CRUSH map automatically update to reflect the bigger (2x) size of the 5th OSD disk on each node?

Thank you :)
 
We have an 8-node cluster, 5 of them for Ceph, in which we have both SSD- and HDD-class disks. We have recently been upgrading our capacity by replacing 4T disks with 14T disks. The process has been uneventful: we set the target OSD to out and stop it, wait for the cluster to rebalance, then destroy and replace the disk, followed by re-creating the OSD. We will ultimately replace all the 4T disks, but Ceph has handled it so well that we are in no hurry.

Bear in mind that in our case roughly three times as much data flows to each 14T disk as to a 4T disk during the upgrade, but this has not been a problem for us. Pool rules also ensure that each pool is serviced by the appropriate class of device. In the attached screenshot you can see that within each device class the use percentages are similar regardless of disk size. CRUSH will create the OSD with the correct weight.
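A minimal sketch of that cycle, assuming the old OSD has ID 12 and the replacement disk shows up as /dev/sde (both hypothetical, adjust for your setup):

Code:
# Take the OSD out so data migrates off it, then stop the daemon.
ceph osd out 12
systemctl stop ceph-osd@12

# Check status and wait until the cluster is healthy again
# before destroying anything.
ceph -s

# Destroy the OSD entry (this keeps the ID free for reuse), swap the
# physical disk, then create the replacement OSD on the new device.
ceph osd destroy 12 --yes-i-really-mean-it
ceph-volume lvm create --osd-id 12 --data /dev/sde

# Verify the new OSD came up with a CRUSH weight matching its size.
ceph osd df tree

On Proxmox you can also create the OSD from the GUI or with pveceph instead of calling ceph-volume directly.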
 

Attachments

  • host1osds.png (27.9 KB)
Wow, that's impressive.

Ultimately that's our plan as well: to end up replacing all the 480GB drives with 960GB ones.

So let me get this straight: you do not set the noout flag, you just set the OSD to "out", stop it, destroy it, pull the disk out, replace the disk, create the OSD, rebalance, done.

If I may ask one more question: how's performance while rebalancing?
 
So let me get this straight: you do not set the noout flag, you just set the OSD to "out", stop it, destroy it, pull the disk out, replace the disk, create the OSD, rebalance, done.

That's correct. For safety, be sure to let the cluster go healthy between the stop and the destroy.
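If you script it, a simple way to enforce that wait (a sketch; ceph health prints HEALTH_OK when the cluster is clean):

Code:
# Block between "stop" and "destroy" until the cluster is healthy again.
until ceph health | grep -q HEALTH_OK; do
    sleep 30
done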
If I may ask one more question: how's performance while rebalancing?

Our users don't feel a thing...until I get impatient and increase osd-max-backfills too high. ;)
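For reference, on Luminous you can change that at runtime with injectargs (the default for osd_max_backfills is 1; the 2 below is just an example value, and the change does not survive a daemon restart):

Code:
# Allow more concurrent backfills per OSD to speed up rebalancing...
ceph tell osd.* injectargs '--osd-max-backfills 2'

# ...and drop it back down if client latency starts to suffer.
ceph tell osd.* injectargs '--osd-max-backfills 1'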
 
