Replacing a disk on Ceph


New Member
Aug 18, 2022
Good day,

I am currently replacing disks in my Ceph cluster, and I have noticed that when I mark an OSD out, Ceph starts rebalancing/recovering. I understand this part, but once the placement groups have been moved off and the OSD is safe to destroy, destroying it makes the system start rebalancing/recovering again. See the procedure below.

ceph osd out osd.<id>
ceph osd safe-to-destroy osd.<id>
systemctl stop ceph-osd@<id>.service
pveceph osd destroy <id>
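
The four steps above can be sketched as one script. This is a dry run only: `OSD_ID=12` is a hypothetical id, and the `run()` wrapper echoes and records each command instead of executing it (drop the `echo` to run for real).

```shell
#!/bin/sh
# Minimal sketch of the OSD-retirement procedure above (dry run).
OSD_ID=12          # hypothetical id - substitute the OSD you are retiring
plan=""
run() {
    # Record the command in $plan and print it instead of executing it.
    plan="$plan$*
"
    echo "$@"
}

run ceph osd out "osd.${OSD_ID}"
# 'safe-to-destroy' exits 0 only once all PGs have left the OSD; on a live
# cluster you would wait for it, e.g.:
#   until ceph osd safe-to-destroy "osd.${OSD_ID}"; do sleep 30; done
run ceph osd safe-to-destroy "osd.${OSD_ID}"
run systemctl stop "ceph-osd@${OSD_ID}.service"
run pveceph osd destroy "${OSD_ID}"
```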
Yes, the layout changes and, as a result, so does the placement of the PGs and objects.
If you don't want it to change between destroying the old OSD and adding the new one, you can set some global flags (GUI -> Node -> Ceph -> OSDs -> Manage Global Flags): noout, norecover, nobackfill.
Remove those flags after you have finished adding the new OSD.
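
The same global flags can also be set from the CLI instead of the GUI. A sketch, again as a dry run: the `run()` wrapper only echoes and records each command (drop the `echo` to execute).

```shell
#!/bin/sh
# Set the flags, do the disk swap, then clear the flags again (dry run).
plan=""
run() {
    plan="$plan$*
"
    echo "$@"
}

for flag in noout norecover nobackfill; do
    run ceph osd set "$flag"
done
# ...destroy the old OSD, swap the disk, create the new OSD here...
for flag in noout norecover nobackfill; do
    run ceph osd unset "$flag"
done
```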

Always make sure the cluster is healthy before destroying an OSD!
Always destroy only a single OSD at a time and let the cluster recover before destroying the next one. Otherwise you may experience data loss.
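
A hedged sketch of that health gate: `ceph health` prints a status beginning with HEALTH_OK, HEALTH_WARN, or HEALTH_ERR, and you only proceed on HEALTH_OK. A sample value stands in for the command output so the sketch runs without a live cluster.

```shell
#!/bin/sh
# Pre-destroy health gate (sketch). On a real node you would capture the
# status with: health=$(ceph health | awk '{print $1}')
health="HEALTH_OK"   # sample value standing in for live 'ceph health' output

if [ "$health" = "HEALTH_OK" ]; then
    proceed=yes
else
    proceed=no
fi
echo "safe to destroy next OSD: $proceed"
```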

