# ceph status
  cluster:
    id:     220b9a53-4556-48e3-a73c-28deff665e45
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum pve3,pve10,pve14 (age 18h)
    mgr: pve10(active, since 9d), standbys: pve3, pve14, sys8
    osd: 71 osds: 71 up (since 17h), 71 in (since 17h)

  data:
    pools:   4 pools, 2560 pgs
    objects: 1.58M objects, 5.8 TiB
    usage:   12 TiB used, 25 TiB / 38 TiB avail
    pgs:     2560 active+clean

  io:
    client:  694 KiB/s rd, 5.0 MiB/s wr, 57 op/s rd, 622 op/s wr
on the node that has the osds, at the pve web page: for each osd i pressed stop and out.
shut down the node that had the osds.
removed the osds.
restarted the node that had the osds.
put one of the osd disks into another node.
restarted the other node.
that did not work.
at pve > ceph > osd the osds still show up as down and out at the original node.
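for reference, the same state can be checked from the cli; a rough sketch, nothing here is specific to this cluster, just the stock status commands:
ceph osd tree      # shows every osd, which host it sits under, and whether it is up/down and in/out
ceph osd df tree   # same tree with usage, handy to confirm the osds are still listed under the original node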
== source node ==
systemctl stop ceph-osd@XX                   # stop the osd daemon
ceph osd out XX                              # mark the osd out of the cluster
umount /var/lib/ceph/osd/ceph-XX             # unmount its data directory
ceph osd purge XX --yes-i-really-mean-it     # remove it from the crush map, osd map and auth
== target node ==
# physically move the disk to the new node
# use dmesg to find the device letter
ceph-volume lvm zap /dev/sdX                 # wipe the disk
ceph-volume lvm create --data /dev/sdX       # create a fresh bluestore osd on it
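if you go the zap/create route above, a quick sanity check afterwards could look like this (just a sketch; XX stands for the new osd id):
ceph-volume lvm list           # lists the lvm-based osds ceph-volume knows about on this node
systemctl status ceph-osd@XX   # the new osd service should be active
ceph osd tree                  # the osd should now hang under the target node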
was this an osd created with luminous (an 'old' one)?
this is what i did that did not work:
Note that ceph-volume does not have the same hot-plug capability that ceph-disk had, where a newly attached disk is automatically detected via udev events.
You will need to scan the main data partition for each ceph-disk OSD explicitly, if
- the OSD isn’t currently running when the above scan command is run,
- a ceph-disk-based OSD is moved to a new host,
- the host OSD is reinstalled,
- or the /etc/ceph/osd directory is lost.
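in practice that boils down to something like this on the node the disk ends up in (a sketch; sdX1 is a placeholder for the small ceph-disk data partition):
ceph-volume simple scan /dev/sdX1   # persists the osd metadata to /etc/ceph/osd/<id>-<fsid>.json
ceph-volume simple activate --all   # mounts and starts every osd described in /etc/ceph/osd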
was this an osd created with luminous (an 'old' one)?
if yes, did you read the upgrade guide? https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus
it states there:
ceph-volume simple scan
ceph-volume simple activate --all
ceph-volume simple scan /dev/sdk1
like what is the command to run from cli?
please read the upgrade guide again, it says it directly below the text i quoted:
For example:
ceph-volume simple scan /dev/sdb1
# ceph-volume simple scan /dev/sdo1
Running command: /sbin/cryptsetup status /dev/sdo1
Running command: /bin/mount -v /dev/sdo1 /tmp/tmpia0XHr
stdout: mount: /dev/sdo1 mounted on /tmp/tmpia0XHr.
Running command: /bin/umount -v /tmp/tmpia0XHr
stderr: umount: /tmp/tmpia0XHr unmounted
--> OSD 30 got scanned and metadata persisted to file: /etc/ceph/osd/30-11930b97-2c45-490c-a722-4e034a0c5433.json
--> To take over management of this scanned OSD, and disable ceph-disk and udev, run:
--> ceph-volume simple activate 30 11930b97-2c45-490c-a722-4e034a0c5433
pve14 ~ # mkdir /var/lib/ceph/osd/ceph-30
pve14 ~ #
pve14 ~ # ceph-volume simple activate 30 11930b97-2c45-490c-a722-4e034a0c5433
Running command: /bin/mount -v /dev/sdo1 /var/lib/ceph/osd/ceph-30
stdout: mount: /dev/sdo1 mounted on /var/lib/ceph/osd/ceph-30.
Running command: /bin/ln -snf /dev/sdb2 /var/lib/ceph/osd/ceph-30/block
Running command: /bin/chown -R ceph:ceph /dev/sdb2
Running command: /bin/systemctl enable ceph-volume@simple-30-11930b97-2c45-490c-a722-4e034a0c5433
stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@simple-30-11930b97-2c45-490c-a722-4e034a0c5433.service → /lib/systemd/system/ceph-volume@.service.
Running command: /bin/ln -sf /dev/null /etc/systemd/system/ceph-disk@.service
--> All ceph-disk systemd units have been disabled to prevent OSDs getting triggered by UDEV events
Running command: /bin/systemctl enable --runtime ceph-osd@30
stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@30.service → /lib/systemd/system/ceph-osd@.service.
Running command: /bin/systemctl start ceph-osd@30
--> Successfully activated OSD 30 with FSID 11930b97-2c45-490c-a722-4e034a0c5433
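at that point the single osd can be checked directly, before looking at the cluster as a whole (again just a sketch):
systemctl status ceph-osd@30             # the daemon should be active on the new node
ceph osd metadata 30 | grep hostname     # confirms which host osd.30 now reports from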
# ceph -s
  cluster:
    id:     220b9a53-4556-48e3-a73c-28deff665e45
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum pve3,pve10,pve14 (age 22h)
    mgr: pve10(active, since 9d), standbys: pve3, pve14, sys8
    osd: 71 osds: 70 up (since 8m), 70 in (since 8m)

  data:
    pools:   4 pools, 2560 pgs
    objects: 1.58M objects, 5.8 TiB
    usage:   13 TiB used, 25 TiB / 37 TiB avail
    pgs:     2560 active+clean

  io:
    client:  18 KiB/s rd, 2.0 MiB/s wr, 0 op/s rd, 150 op/s wr