ceph 19.2.1 osd slow

  1. Ceph on HPE DL380 Gen10+ not working

    I have a Proxmox 8.4 cluster with two nodes and one qdevice, with Ceph Squid 19.2.1 recently installed and an additional device to maintain quorum for Ceph. Each node has one SATA SSD, so I have two OSDs (osd.18 and osd.19) and a pool called poolssd spanning both. Since Ceph has been...
  2. Ceph PG stuck unknown / inactive after upgrade?

    I just did the PVE upgrade and upgraded Ceph to 19.2.1, one node after the other, and at first everything seemed fine. But when the last of the three nodes was scheduled for reboot, there was an accidental power outage and I had to hard-restart the cluster. Everything came back... (A diagnostic sketch for stuck PGs follows this list.)
  3. Ceph 19.2.1: 2 OSD(s) experiencing slow operations in BlueStore

    Hello, on our 8.4.0 clusters, since the upgrade from Ceph 19.2.0 to 19.2.1 (pve2/pve3), I have been getting warning messages. I applied the "solution" at https://github.com/rook/rook/discussions/15403, but it did not resolve the "problem" (see the config sketch after this list). Best regards, Francis
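
For threads like #2, the usual first step after a hard restart is to ask Ceph which PGs are stuck and why. A minimal diagnostic sketch, assuming a standard Ceph CLI on any node; the PG id 2.1a is a hypothetical placeholder:

    # Show the specific warnings behind HEALTH_WARN / HEALTH_ERR
    ceph health detail

    # List PGs stuck in inactive or unclean states
    ceph pg dump_stuck inactive unclean

    # Query one stuck PG for its acting set and recovery state
    # (2.1a is a hypothetical PG id; substitute one from the dump above)
    ceph pg 2.1a query

    # Confirm every OSD rejoined the cluster after the outage
    ceph osd tree

If a stuck PG maps to an OSD that never came back, it generally stays inactive until that OSD rejoins or is marked out.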
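For thread #3, the linked rook discussion concerns the BLUESTORE_SLOW_OP_ALERT health warning introduced in 19.2.1, whose thresholds can be relaxed if the slow-op reports are spurious. A sketch of that kind of adjustment, assuming the option names from that discussion apply to this cluster; the values are illustrative, not recommendations:

    # See which OSDs trigger the BlueStore slow-ops warning
    ceph health detail

    # Require more slow ops before the warning fires (value is illustrative)
    ceph config set osd bluestore_slow_ops_warn_threshold 10

    # Shorten how long, in seconds, a slow op keeps counting toward the warning
    ceph config set osd bluestore_slow_ops_warn_lifetime 300

Note that this only tunes the warning itself; persistently slow BlueStore operations usually point at the underlying disks and are worth investigating separately.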