ceph osd crashes when nobackfill OSD flag is removed

kifeo

Hi, I'm running into the following problem.

I've had a disk failure in my Ceph cluster running on Proxmox.
I've replaced the disk, but now, during the rebalancing/backfilling, one OSD (osd.1) keeps crashing.
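
For reference, the replacement itself followed the usual Proxmox procedure, roughly the steps below (<ID> and /dev/sdX are placeholders, not the actual values):

Code:
ceph osd out <ID>                # mark the failed OSD out of the cluster
systemctl stop ceph-osd@<ID>     # stop the dead OSD daemon
pveceph osd destroy <ID>         # remove the OSD from the cluster (Proxmox helper)
pveceph osd create /dev/sdX      # create a new OSD on the replacement disk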

When I set the 'nobackfill' flag, the OSD stays up, and it crashes again right after the flag is removed (I toggle the flag as shown below).
The crash in the log is very similar to https://tracker.ceph.com/issues/56772
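
These are the standard commands I use to toggle the flag:

Code:
ceph osd set nobackfill      # pause backfill cluster-wide; osd.1 stays up
ceph osd unset nobackfill    # resume backfill; osd.1 asserts shortly after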

I've attached the complete log; here is the last part of the crash:

Code:
ceph version 17.2.4 (b26dd582fcc41389ea06191f19e88eed6eccea5b) quincy (stable)
 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x13140) [0x7ff3bc315140]
 2: gsignal()
 3: abort()
 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x17e) [0x565256aaffca]
 5: /usr/bin/ceph-osd(+0xc2310e) [0x565256ab010e]
 6: (SnapSet::get_clone_bytes(snapid_t) const+0xe3) [0x565256df05f3]
 7: (PrimaryLogPG::add_object_context_to_pg_stat(std::shared_ptr<ObjectContext>, pg_stat_t*)+0x23e) [0x565256c9a94e]
 8: (PrimaryLogPG::recover_backfill(unsigned long, ThreadPool::TPHandle&, bool*)+0x19f3) [0x565256d05543]
 9: (PrimaryLogPG::start_recovery_ops(unsigned long, ThreadPool::TPHandle&, unsigned long*)+0xf2a) [0x565256d0b42a]
 10: (OSD::do_recovery(PG*, unsigned int, unsigned long, ThreadPool::TPHandle&)+0x295) [0x565256b7a175]
 11: (ceph::osd::scheduler::PGRecovery::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x19) [0x565256e34879]
 12: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0xad0) [0x565256b9abc0]
 13: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x41a) [0x56525727dc1a]
 14: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x5652572801f0]
 15: /lib/x86_64-linux-gnu/libpthread.so.0(+0x7ea7) [0x7ff3bc309ea7]
 16: clone()

Any help would be very much appreciated, as I have 33 down PGs and the cluster is in HEALTH_ERR state.
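
For completeness, this is how I'm checking the cluster state:

Code:
ceph -s               # overall cluster status, currently HEALTH_ERR
ceph health detail    # lists the down PGs and the reason
ceph pg dump_stuck    # shows stuck/unclean placement groups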
 
