My Ceph cluster lost one node and the rest of the cluster does not bring the OSDs up.
They start, allocate 100% of the node's RAM and get killed by the OS (OOM).
We use Proxmox 7.2 and Ceph Octopus
ceph version 15.2.16 (a6b69e817d6c9e6f02d0a7ac3043ba9cdbda1bdf) octopus (stable)
We have 80G on the OSD...
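For reference (not from the thread): with BlueStore, the memory each OSD aims for is set by osd_memory_target, and recovery after a lost node adds per-PG overhead on top of that, so it is not a hard cap. A minimal sketch of checking and lowering the target and throttling recovery; the 2 GiB value and the throttle settings are only illustrations, not values recommended anywhere in this thread:

# current target (the Octopus default is 4 GiB per OSD)
ceph config get osd osd_memory_target

# lower the target cluster-wide, e.g. to 2 GiB, so the OSDs fit in the node's RAM
ceph config set osd osd_memory_target 2147483648

# slow recovery/backfill down so fewer PGs are worked on at once
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1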
Hello,
My Proxmox version is 6.4-9, with Ceph 15.2.13.
I had a problem with a disk, and when I tried to kick it out of the pool I got some errors:
destroy OSD osd.61
Remove osd.61 from the CRUSH map
Remove the osd.61 authentication key.
Remove OSD osd.61
--> Zapping...
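For reference, in case the task log stops at the zapping step, the same removal can be finished by hand. A minimal sketch for osd.61, assuming the old data device is /dev/sdX (a placeholder, not a path taken from this post):

# stop the daemon on the node if it is still running
systemctl stop ceph-osd@61

# remove the OSD from the CRUSH map, drop its auth key and delete it from the cluster
ceph osd crush remove osd.61
ceph auth del osd.61
ceph osd rm osd.61

# wipe the leftover LVM/Ceph metadata (this destroys everything on /dev/sdX)
ceph-volume lvm zap /dev/sdX --destroy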
Hi,
I've got a cluster with 3 nodes. On node 2 I was upgrading 2 OSDs out of 4; the upgrade of those 2 OSDs went fine, but one OSD that had not been updated was down/in during the upgrade. I waited until the rebalance was done and then, from the GUI, I set the OSD out and destroyed it. I thought that creating it again would be...
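A minimal sketch of recreating a destroyed OSD from the CLI, assuming the same disk is reused and shows up as /dev/sdX (a placeholder, not a device named in this post):

# clear any leftover LVM/Ceph signatures from the previous OSD (destroys data on /dev/sdX)
ceph-volume lvm zap /dev/sdX --destroy

# create a fresh BlueStore OSD on the device via Proxmox
pveceph osd create /dev/sdX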