My Ceph cluster lost one node, and the remaining nodes cannot bring their OSDs up.
The OSD daemons start, allocate 100% of the node's RAM, and get killed by the OS (OOM killer).
We use Proxmox 7.2 and Ceph Octopus:
ceph version 15.2.16 (a6b69e817d6c9e6f02d0a7ac3043ba9cdbda1bdf) octopus (stable)
Each OSD node has 80 GB of RAM and 8x 8 TB HDDs (one OSD per disk).
Has anyone faced the same problem?
I've tried almost everything I can think of, but so far no luck.
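For context, one thing that seems relevant here is `osd_memory_target`, which caps the BlueStore cache autotuner (Octopus defaults to 4 GiB per OSD, so 8 OSDs can already approach 32 GiB before any recovery overhead). The commands below are a sketch of what I understand the usual mitigation to be, not something I've confirmed fixes this; the exact values (3 GiB, 500 log entries) are guesses for an 80 GB node:

```shell
# Sketch only: assumes a reachable monitor and an admin keyring on this node.
# Note: osd_memory_target limits the cache autotuner, not total RSS, so an
# OSD replaying a large PG log during recovery can still exceed it.

# Cap every OSD's memory target at 3 GiB cluster-wide:
ceph config set osd osd_memory_target 3221225472

# During heavy recovery, trimming in-memory PG logs may also reduce RAM use:
ceph config set osd osd_max_pg_log_entries 500
ceph config set osd osd_min_pg_log_entries 500

# Then restart one OSD at a time and watch its memory:
systemctl restart ceph-osd@0
```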
I really appreciate any idea, thanks for your attention!