ceph osd out

  1. [SOLVED] On node crash, OSD is down but stays "IN" and all VMs on all nodes stay in an error state, unusable.

    Hello, I work for multiple clients, and one of them wanted us to build a Proxmox cluster to give them fault tolerance and a cost-efficient hypervisor. It's the first time we have put a Proxmox cluster into a production environment for a client; we've only used single-node Proxmox before. Client...
  2. ceph-osd consuming 100% of RAM

    My Ceph cluster lost one node, and the rest of the cluster does not bring the OSDs up: they start, allocate 100% of the node's RAM, and get killed by the OS. We use Proxmox 7.2 and Ceph Octopus, ceph version 15.2.16 (a6b69e817d6c9e6f02d0a7ac3043ba9cdbda1bdf) octopus (stable). We have 80G on the osd...
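    BlueStore OSD memory use is governed by Ceph's `osd_memory_target` option, which is one common knob for OSDs that balloon during recovery. A minimal sketch, assuming a monitor node with admin access; the 4 GiB value is purely illustrative, and `osd.0` is a placeholder ID:

    ```shell
    # Cap BlueStore's per-daemon memory target cluster-wide
    # (example value only; size it to the node's actual RAM per OSD).
    ceph config set osd osd_memory_target 4294967296

    # Verify the value as seen by a running daemon (osd.0 is a placeholder):
    ceph config show osd.0 osd_memory_target
    ```

    Note this is a target, not a hard limit; OSDs replaying large recovery workloads can still overshoot it temporarily.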
  3. remove crashed disk in ceph

    Hello, my Proxmox version is 6.4-9, Ceph 15.2.13. I had a problem with a disk, and when I tried to kick it out of the pool I got some errors: destroy OSD osd.61 Remove osd.61 from the CRUSH map Remove the osd.61 authentication key. Remove OSD osd.61 --> Zapping...
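    The destroy log in the thread above mirrors the standard manual OSD removal sequence. A sketch of those steps as plain Ceph commands, using `osd.61` from the thread; run from a node with admin keyring access, and only once the cluster no longer needs the OSD's data:

    ```shell
    # 1. Mark the OSD out so no new data maps to it
    ceph osd out osd.61

    # 2. Stop the daemon on the host that carried it (if it is still running)
    systemctl stop ceph-osd@61

    # 3. Remove it from the CRUSH map
    ceph osd crush remove osd.61

    # 4. Delete its authentication key
    ceph auth del osd.61

    # 5. Remove the OSD entry itself from the cluster map
    ceph osd rm osd.61
    ```

    The "Zapping" step in the log is the subsequent disk wipe; on a crashed disk that step can fail even though the cluster-side removal above succeeded.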
  4. C

    Add OSD Ceph always out

    Hi, I've got a cluster with 3 nodes. In node 2 I was upgrading 2 OSD of 4; the upgrade of these 2 osd was ok but one osd not updated was down/in during this upgrade. I waited until the rebalance was done and then from GUI I put the OSD out and destroy it. I thought that create it again was...