Hello,

I'm annoyed by Ceph's reporting. Two of my nodes crashed, and Ceph still reports this:
Code:
root@pve03:~# ceph osd tree
ID   CLASS  WEIGHT   TYPE NAME      STATUS  REWEIGHT  PRI-AFF
-1          2.01717  root default
-3                0      host pve01
-5          0.39059      host pve02
 4    ssd   0.39059          osd.4      up   1.00000  1.00000
-7          0.39059      host pve03
 1    ssd   0.39059          osd.1      up   1.00000  1.00000
-9          0.39059      host pve05
 3    ssd   0.39059          osd.3      up   1.00000  1.00000
-11               0      host pve06
-13         0.84538      host pve07
 0    ssd   0.39059          osd.0      up   1.00000  1.00000
 2    ssd   0.45479          osd.2      up   1.00000  1.00000
The crashed hosts, pve01 and pve06, are still listed even though Ceph is otherwise OK.
Is there something I can do for a final cleanup and removal of the crashed nodes?
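
For reference, the cleanup I have in mind (based on the Ceph docs, and assuming those host buckets really hold no OSDs, as the weight-0 entries in the tree above suggest) would be:

```shell
# Remove the now-empty host buckets from the CRUSH map.
# This should only be safe because `ceph osd tree` shows them with
# weight 0 and no OSDs underneath.
ceph osd crush rm pve01
ceph osd crush rm pve06

# Verify the buckets are gone
ceph osd tree
```

But I'm not sure whether this is the recommended way, or whether something else still references those hosts.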
