I wanted to thank you for this, the process worked great, except for the part where I messed up and didn't have a manager installed on the destination server yet, so step 4 errored out (spitting out "ConfigurationError: Unable to load expected Ceph config at: /etc/ceph/ceph.conf") until I realized that was necessary.
Stealing some of the pieces from another thread: I just went through this myself and figured I'd share what worked 100% for me. About 20 drives completed so far, zero issues.
IMPORTANT: this assumes DB/WAL are on a single physical drive. If they aren't, you'll have to consolidate them onto one drive first, then move that:
ceph-volume lvm migrate --osd-id <ID> --osd-fsid <FSID> --from db wal --target <FULL VG PATH>
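For illustration, here's what that consolidation might look like on a made-up OSD 12 (every name below is a placeholder, and I believe --target wants vgname/lvname form pointing at the OSD's own block LV; check ceph-volume lvm list for your real values before trusting my syntax):
systemctl stop ceph-osd@12.service
ceph-volume lvm list    # note the osd fsid plus the VG/LV behind the block device
ceph-volume lvm migrate --osd-id 12 --osd-fsid 01234567-89ab-cdef-0123-456789abcdef --from db wal --target ceph-block-0/osd-block-12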
ORIGIN SERVER:
1. Find the values you need to proceed (full copy/paste sketch after this list)
** FSID: cat /var/lib/ceph/osd/ceph-<ID>/fsid
** VG-ID: ls -l /var/lib/ceph/osd/ceph-<ID>/block | cut -f2 -d">" | cut -f3 -d"/"
2. ceph osd out <ID>
3. systemctl stop ceph-osd@<ID>.service
4. ceph-volume lvm deactivate <ID> <FSID>
5. vgchange -a n <VG-ID>
6. vgexport <VG-ID>
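If it's easier to copy/paste, here's the whole origin side as one shell sketch (untested as a one-shot script, so run it line by line and eyeball each output; OSD id 7 is a placeholder, and I renamed VG-ID to VGID since shell variable names can't contain hyphens):
ID=7                                      # placeholder - your OSD id
FSID=$(cat /var/lib/ceph/osd/ceph-${ID}/fsid)
VGID=$(ls -l /var/lib/ceph/osd/ceph-${ID}/block | cut -f2 -d">" | cut -f3 -d"/")
ceph osd out ${ID}                        # step 2
systemctl stop ceph-osd@${ID}.service     # step 3
ceph-volume lvm deactivate ${ID} ${FSID}  # step 4 - unmounts the OSD dir
vgchange -a n ${VGID}                     # step 5 - deactivate the VG
vgexport ${VGID}                          # step 6 - mark it safe to pull
echo "FSID=${FSID} VGID=${VGID}"          # jot these down for the destination server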
remove disk from origin server -->
--> insert disk into destination server
DESTINATION SERVER:
1. pvscan (see the sketch after this list)
2. vgimport <VG-ID>
3. vgchange -a y <VG-ID>
4. ceph-volume lvm activate <ID> <FSID>
5. ceph osd in <ID>
6. ceph osd crush set <ID> <WEIGHT/SIZE> host=<NEWHOST>
7. systemctl status ceph-osd@<ID>.service
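And the destination side, same caveats (placeholders throughout; for step 6, the weight should match the drive's capacity in TiB - 3.63869 below is just an example for a 4 TB disk, and pve-new is a made-up hostname):
ID=7                                      # same OSD id as on the origin server
FSID="paste-the-fsid-you-noted-here"
VGID="paste-the-vg-name-you-noted-here"
pvscan                                    # step 1 - rescan so LVM sees the moved drive
vgimport ${VGID}                          # step 2
vgchange -a y ${VGID}                     # step 3 - activate the VG
ceph-volume lvm activate ${ID} ${FSID}    # step 4 - mounts the OSD and starts the service
ceph osd in ${ID}                         # step 5
ceph osd crush set ${ID} 3.63869 host=pve-new   # step 6
systemctl status ceph-osd@${ID}.service   # step 7 - verify it actually came back up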
credit: https://forum.proxmox.com/threads/osd-move-issue.56932/post-263918
I stumbled on this as a possibly simpler method after trying all of the above; does anyone know if it stopped working with newer versions of Proxmox/Ceph or something? If no one can confirm either way, I'm gonna try it next time I move an OSD and see.