I don't understand, which "before"? It would be nice if this just worked as before.
OK, I followed the steps and was not able to move the drive.
The first error came after this:
Code:
# vgimport ceph-da1b7ac2-64fc-47e0-8c21-3ba9507da14c
  Volume group "ceph-da1b7ac2-64fc-47e0-8c21-3ba9507da14c" successfully imported
Then the error here:
Code:
# ceph-volume lvm activate --all
--> OSD ID 3 FSID fdcc37da-c93e-4161-a4c3-45e82f695292 process is active. Skipping activation
--> Activating OSD ID 16 FSID 10cb7d13-893d-45d4-a711-0bd0a76194e6
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-16
--> Absolute path not found for executable: restorecon
--> Ensure $PATH environment variable contains common executable locations
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-16
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-da1b7ac2-64fc-47e0-8c21-3ba9507da14c/osd-block-10cb7d13-893d-45d4-a711-0bd0a76194e6 --path /var/lib/ceph/osd/ceph-16 --no-mon-config
 stderr: 2019-08-30 14:22:14.934 7f446e26e140 -1 bluestore(/dev/ceph-da1b7ac2-64fc-47e0-8c21-3ba9507da14c/osd-block-10cb7d13-893d-45d4-a711-0bd0a76194e6) _read_bdev_label failed to open /dev/ceph-da1b7ac2-64fc-47e0-8c21-3ba9507da14c/osd-block-10cb7d13-893d-45d4-a711-0bd0a76194e6: (2) No such file or directory
 failed to read label for /dev/ceph-da1b7ac2-64fc-47e0-8c21-3ba9507da14c/osd-block-10cb7d13-893d-45d4-a711-0bd0a76194e6: (2) No such file or directory
--> RuntimeError: command returned non-zero exit status: 1
At PVE > Ceph > OSD, the OSD still shows on the original node as down and out.
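That "No such file or directory" usually means the imported VG's logical volumes were never activated, so the /dev/<vg>/<lv> links do not exist yet. A minimal check, using the VG name from the error output above:

Bash:
# list the LVs in the imported VG; an inactive LV shows no 'a' in its Attr column
lvs ceph-da1b7ac2-64fc-47e0-8c21-3ba9507da14c
# activate all LVs in the VG, which creates the /dev/<vg>/<lv> device links
vgchange -a y ceph-da1b7ac2-64fc-47e0-8c21-3ba9507da14c

After that, ceph-volume lvm activate --all should be able to read the bluestore label.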
Code:
root@epyc3251:~# ls -al /dev/ceph-37ac4e46-a222-48bb-88e8-755b2e61f0ca/osd-block-e723bb40-3a13-4139-8906-e89a7ec90dcd
lrwxrwxrwx 1 root root 7 Dec 13 14:19 /dev/ceph-37ac4e46-a222-48bb-88e8-755b2e61f0ca/osd-block-e723bb40-3a13-4139-8906-e89a7ec90dcd -> ../dm-1
root@epyc3251:~# ls -al /dev/dm-1
brw-rw---- 1 root disk 253, 1 Dec 14 16:43 /dev/dm-1
root@epyc3251:~# chown ceph:ceph /dev/dm-1
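With ownership fixed, the OSD daemon should be able to open the device again. A quick way to restart it and watch the log, assuming the OSD on this device is ID 16 (a placeholder; use your own ID):

Bash:
# restart the OSD daemon for this device and show its latest log lines
systemctl restart ceph-osd@16
journalctl -u ceph-osd@16 -n 20 --no-pager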
Could you please tell me how to determine the VG-ID and osd fsid? (See the note after the steps below.)

I forgot to add the export/import steps:
- set OSD out
- stop OSD service
- deactivate LVM (if OSD made with ceph-volume) / unmount OSD partition
- export the VG (vgexport <VG-ID>)
- remove disk from server
- input disk into other server
- run pvscan to see if the disk is seen by LVM
- import the VG (vgimport <VG-ID>)
- then activate the single osd: ceph-volume lvm activate <ID> <osd fsid>
- last but not least, ceph osd in <ID>
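On the question of finding the VG-ID and osd fsid: both can be read straight from LVM and ceph-volume. A minimal sketch, run on the node that currently holds the disk (output layout varies a bit between versions):

Bash:
# list volume groups; Ceph VGs are named ceph-<uuid>, which is the <VG-ID> above
vgs
# list all ceph-volume managed OSDs, including each one's ID and 'osd fsid'
ceph-volume lvm list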
Code:
pvscan
ceph-volume lvm activate --all
The missing step in between these is a vgchange -a y. With it inserted, the steps become (a consolidated sketch follows the list):
- set OSD out
- stop OSD service
- deactivate LVM (if OSD made with ceph-volume) / unmount OSD partition
- export the VG (vgexport <VG-ID>)
- remove disk from server
- input disk into other server
- run pvscan to see if the disk is seen by LVM
- import the VG (vgimport <VG-ID>)
- vgchange -a y
- then activate the single osd: ceph-volume lvm activate <ID> <osd fsid>
- last but not least, ceph osd in <ID>
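Putting the whole sequence together as one sketch (OSD ID 16, <VG-name>, and <osd-fsid> are placeholders; take the real values from vgs and ceph-volume lvm list):

Bash:
## on the old node
ceph osd out 16                # stop mapping data to the OSD
systemctl stop ceph-osd@16     # stop the OSD daemon
vgchange -a n <VG-name>        # deactivate the LVs so the VG can be exported
vgexport <VG-name>             # export the VG
## physically move the disk, then on the new node
pvscan                         # check that LVM sees the disk
vgimport <VG-name>             # import the VG
vgchange -a y <VG-name>        # activate the LVs (the missing step)
ceph-volume lvm activate 16 <osd-fsid>
ceph osd in 16                 # let data map to the OSD again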
I tried this today with my homelab. I moved 8 OSDs from one node to a new one. Very nice!

I got the solution:
1. Stop / Out the OSD you want to move
2. Physically move OSD to new node
3. In Proxmox, on the new node, open a shell and type:
Bash:
pvscan
ceph-volume lvm activate --all
4. Have some patience; it can take at least 30 seconds. After some time, check Ceph > OSD (reload) and you should see the OSDs start to appear on the new node!
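To confirm the result from the shell instead of the GUI, something like this works (OSD ID 16 is a placeholder):

Bash:
# the moved OSD should now appear under the new host in the CRUSH tree
ceph osd tree
# mark it back in if it is still flagged out
ceph osd in 16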