Problem replacing OSD

KeyzerSuze

New Member
Aug 16, 2024
Hi

Had a drive die; it was osd.0, so I physically replaced the disk.
Then I set the flags: norecover, nobackfill, norebalance.
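(For reference, roughly what I ran to set them, using the standard Ceph flag commands:)

ceph osd set norecover
ceph osd set nobackfill
ceph osd set norebalance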
I couldn't mark the OSD out, but I was able to destroy it.
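(Something like this, 0 being the OSD id here:)

ceph osd out 0
ceph osd destroy 0 --yes-i-really-mean-it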
Now when I try to create the OSD with
pveceph osd create /dev/sdd
it fails.

Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid e0100ab9-9c05-4a33-8d5c-1661829f3193 --setuser ceph --setgroup ceph


This is the command that seems to fail. The output below is not all of it, just the top:

stderr: 2025-07-07T07:57:28.112+1000 7a54a9a04940 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
stderr: 2025-07-07T07:57:28.112+1000 7a54a9a04940 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
stderr: 2025-07-07T07:57:28.112+1000 7a54a9a04940 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
stderr: 2025-07-07T07:57:28.112+1000 7a54a9a04940 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
stderr: 2025-07-07T07:57:30.384+1000 7a54a9a04940 -1 bdev(0x631fcb3a7000 /var/lib/ceph/osd/ceph-0//block) read 0x2000~ error: (5) Input/output error
stderr: ./src/os/bluestore/BlueFS.cc: In function 'int64_t BlueFS::_read(FileReader*, uint64_t, size_t, ceph::bufferlist*, char*)' thread 7a54a9a04940 time 2025-07-07T07:57:30.386059+1000
stderr: ./src/os/bluestore/BlueFS.cc: 2279: FAILED ceph_assert(r == 0)
stderr: ceph version 18.2.7 (4cac8341a72477c60a6f153f3ed344b49870c932) reef (stable)
stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x11e) [0x631f88aa60e1]
stderr: 2: /bin/ceph-osd(+0x63627e) [0x631f88aa627e]
 
No, I had not. I tried those steps: stopped the OSD and then destroyed it with --cleanup.
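(Roughly like this, from memory, assuming osd.0:)

systemctl stop ceph-osd@0.service
pveceph osd destroy 0 --cleanup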
Now it shows up under the Disks tab but not the OSD tab, and it doesn't show up under
ceph osd df tree

I did try to create it again with
pveceph osd create /dev/sdd

but it said the disk was in use.
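I think what was holding it "in use" was the leftover LVM metadata on the disk; you can see it with something like:

lsblk /dev/sdd
pvs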
 
Okay, I had to stop it, destroy it with --cleanup,
remove the LVM that was left on there, pvremove it, and then re-add it.

It seems there was enough info left on the disk to keep the Ceph process going: the partition/PV was still there, and then the VG and the LV on top of it.
I also did a reboot as well.
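In case it helps anyone, the full sequence that worked for me was roughly this. The VG name below is a placeholder (check vgs for the real ceph-<uuid> name on your disk), and ceph-volume zap is an alternative that wipes everything in one step:

systemctl stop ceph-osd@0.service
pveceph osd destroy 0 --cleanup

# remove the leftover LVM stack (the VG is usually named ceph-<uuid>)
vgremove <ceph-vg-name>
pvremove /dev/sdd

# or, alternatively, zap the whole device in one step:
# ceph-volume lvm zap /dev/sdd --destroy

# recreate the OSD and clear the flags again
pveceph osd create /dev/sdd
ceph osd unset norecover
ceph osd unset nobackfill
ceph osd unset norebalance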