Ceph OSD disk replacement

Hello,

I am looking for updated documentation on correct procedure for Ceph OSD disk replacement.

The current Proxmox docs only cover OSD creation but lack management procedures using pveceph commands.

I found this Ceph document, but some commands do not give the same output (e.g. service ceph status) and I am not sure if it is 100% compatible with the Proxmox version.

In general, can we mix procedures using traditional ceph commands with pveceph?

Thanks to everyone who can shed some light.

P.
 
In general, can we mix procedures using traditional ceph commands with pveceph?
Yes. pveceph isn't a separate implementation, it's a wrapper around the ceph commands.

Replacing a disk is super simple and can be performed even from the GUI (a rough CLI equivalent follows the list below):
1. Down (stop) and out the OSD (it will probably already be in this state for a failed drive)
2. Remove it from the OSD tree and CRUSH map ("Destroy" in the GUI)
3. Replace the disk
4. Create a new OSD
5. Profit :)
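A minimal CLI sketch of the same steps, assuming OSD id 12 on /dev/sdf (both are just example values) and a current pveceph; older Proxmox VE releases use the pveceph destroyosd / createosd spellings instead:

Code:
ceph osd out osd.12                   # step 1: mark the OSD out
systemctl stop ceph-osd@12.service    # step 1: stop the daemon (likely already dead for a failed drive)
pveceph osd destroy 12                # step 2: remove it from the CRUSH map, auth keys and OSD list
# step 3: physically swap the disk, then:
pveceph osd create /dev/sdf           # step 4: create the new OSD on the replacement disk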
 
I found this Ceph document, but some commands do not give the same output (e.g. service ceph status) and I am not sure if it is 100% compatible with the Proxmox version.
We use upstream packages (+ some extra patches) and just provide our management stack around it. In general the ceph tools should work as expected.
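If it helps as a reference point: the SysV-style service ceph status from older Ceph docs has been replaced by systemd and the ceph CLI. On a current Proxmox node the following should give the equivalent information (the OSD id is just an example):

Code:
ceph -s                          # overall cluster health and recovery status
ceph osd tree                    # which OSDs are up/down and in/out
systemctl status ceph-osd@12     # status of a single OSD daemon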
 
So glad I found this post. I was working on all the command lines for this. This was so much easier.

A note from my experience: the disk had failed but its mount point still existed on my server, and running 'Destroy' from the GUI did not unmount it. After swapping the drive, the new disk was going to come up as sdm. I did a umount of the stale sdf mount, popped the new drive out and back in, and then adding the disk in the GUI used sdf again.
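For anyone hitting the same thing, a minimal sketch of the cleanup, assuming the stale OSD was id 12 mounted from sdf (both hypothetical values; BlueStore OSDs only keep a small tmpfs mount there, so this mainly applies to FileStore):

Code:
lsblk                               # confirm which device the stale OSD mount still points at
umount /var/lib/ceph/osd/ceph-12    # unmount the leftover OSD mount point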
 
So glad I found this post. I was working on all the command lines for this. This was so much easier.

A note from my experience: the disk had failed but its mount point still existed on my server. After swapping the drive, the new disk was going to come up as sdm. I did a umount of the stale sdf mount, popped the new drive out and back in, and then adding the disk in the GUI used sdf again.

This seems like a bug in Proxmox's GUI wrapper...? Shouldn't "Destroy" also unmount the OSD's mount point?
 
Hi,

Is this the same procedure if I want to replace the SSD with a larger SSD than the original?

David
 
Hi David!

Just noticed nobody answered your question.
Yes, you can use this procedure to swap to larger drives.
One trick: the backfill/rebalance takes quite a long time, and you can greatly speed it up with:

Code:
$ ceph tell 'osd.*' injectargs '--osd-max-backfills 16'

All the best

Ras
 
The setting is only persistent if you write it into the config. Otherwise you have to set the value again after the specific service restarts.
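A hedged example of making it stick, assuming a reasonably recent Ceph (the centralized config database needs Mimic or later) and the default Proxmox config path:

Code:
# persist it in the cluster configuration database:
ceph config set osd osd_max_backfills 16
# or add it to the [osd] section of /etc/pve/ceph.conf:
#   [osd]
#   osd_max_backfills = 16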
 
The setting is only persistent if you write it into the config. Otherwise you have to set the value again after the specific service restarts.
Perfect. Thank you.

Also, I did a little digging. I had more luck with
Code:
# ceph tell 'osd.*' injectargs '--osd-max-backfills 3 --osd-recovery-max-active 9'

as described here.
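To check what a daemon is actually running with, and to drop back down once recovery has finished, something like the following should work (osd.0 is just an example; 1 and 3 are the usual upstream defaults, but verify against your release):

Code:
ceph daemon osd.0 config get osd_max_backfills    # run on the node hosting osd.0
ceph tell 'osd.*' injectargs '--osd-max-backfills 1 --osd-recovery-max-active 3'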
 