ceph 19.2.1 osd slow

  1. Experiencing slow OSDs after upgrading Ceph version 18 to 19 in Proxmox v8.4

    Since we upgraded our production Proxmox from 8.2.x to 8.4.14 and Ceph from version 18.2.1 to 19.2.3, we have been observing slow OSDs since day 2 of the upgrade. We run daily backups of our production VMs from PVE to PBS, starting at 21:00 and finishing around 04:00. When it's time for the database VM backup (time...
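    A quick way to confirm whether particular OSDs spike during the backup window is to sample `ceph osd perf` over time; a minimal Python sketch follows (the JSON layout of the output varies slightly between releases, so both common shapes are handled, and the 100 ms threshold is only an example, not a recommendation):

    ```python
    #!/usr/bin/env python3
    """Sample `ceph osd perf` and flag OSDs with high latency."""
    import json
    import subprocess
    import time

    THRESHOLD_MS = 100  # example threshold; tune for your hardware

    def osd_latencies():
        out = subprocess.check_output(["ceph", "osd", "perf", "-f", "json"])
        data = json.loads(out)
        # Some releases nest the stats under "osdstats", others do not.
        infos = data.get("osdstats", data).get("osd_perf_infos", [])
        for info in infos:
            stats = info["perf_stats"]
            yield info["id"], stats["commit_latency_ms"], stats["apply_latency_ms"]

    while True:
        for osd_id, commit_ms, apply_ms in osd_latencies():
            if commit_ms > THRESHOLD_MS or apply_ms > THRESHOLD_MS:
                print(f"osd.{osd_id}: commit={commit_ms}ms apply={apply_ms}ms")
        time.sleep(10)  # sample across the 21:00-04:00 backup window
    ```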
  2. Ceph on HPE DL380 Gen10+ not working

    I have a Proxmox 8.4 cluster with two nodes and one qdevice, with Ceph Squid 19.2.1 recently installed and an additional device to maintain quorum for Ceph. Each node has one SATA SSD, so two OSDs (osd.18 and osd.19) are created, and I have a pool called poolssd spanning both. Since Ceph has been...
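    A first diagnostic step for a layout like this is to confirm that the monitors are in quorum and both OSDs are up and in; a minimal Python sketch wrapping the standard `ceph quorum_status` and `ceph osd stat` commands (field names are as emitted by recent releases and may differ slightly elsewhere):

    ```python
    #!/usr/bin/env python3
    """Print monitor quorum membership and OSD up/in counts."""
    import json
    import subprocess

    def ceph_json(*args):
        # Run a ceph CLI subcommand and parse its JSON output.
        return json.loads(subprocess.check_output(["ceph", *args, "-f", "json"]))

    quorum = ceph_json("quorum_status")
    print("mons in quorum:", ", ".join(quorum["quorum_names"]))

    osd = ceph_json("osd", "stat")
    print(f'OSDs: {osd["num_up_osds"]}/{osd["num_osds"]} up, '
          f'{osd["num_in_osds"]}/{osd["num_osds"]} in')
    ```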
  3. Ceph PG stuck unknown / inactive after upgrade?

    I just did the PVE upgrade and upgraded Ceph to 19.2.1. I did one node after the other, and at first everything seemed fine. But then, when the last of the 3 nodes was scheduled for reboot, there was an accidental power outage and I had to do a hard restart of the cluster. Everything came back...
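    After an ungraceful restart, a usual first step is to enumerate the stuck PGs and see which OSDs they map to; a minimal sketch around `ceph pg dump_stuck` (the JSON wrapping differs between releases, hence the defensive parse; individual PGs can then be inspected with `ceph pg <pgid> query`):

    ```python
    #!/usr/bin/env python3
    """List PGs stuck inactive after a hard cluster restart."""
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "pg", "dump_stuck", "inactive", "-f", "json"]
    )
    data = json.loads(out)
    # Some releases return a bare list, others wrap it in an object.
    pgs = data if isinstance(data, list) else data.get("stuck_pg_stats", [])

    for pg in pgs:
        print(pg["pgid"], pg["state"], "acting:", pg.get("acting", []))
    ```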
  4. Ceph 19.2.1: 2 OSD(s) experiencing slow operations in BlueStore

    Hello, on our 8.4.0 clusters, since the upgrade of Ceph from 19.2.0 to 19.2.1 (pve2/pve3) I have been getting warning messages. I applied the "solution" at https://github.com/rook/rook/discussions/15403 but this did not resolve the "problem". Best regards, Francis
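    The workaround circulating in that discussion is to relax the BlueStore slow-ops health warning via `ceph config set`; a sketch, assuming the `bluestore_slow_ops_warn_*` option names quoted there (note this only changes when the warning fires, it does not address the underlying latency, which matches the report that the "problem" persisted):

    ```python
    #!/usr/bin/env python3
    """Apply the warn-threshold workaround from the linked discussion."""
    import subprocess

    def config_set(section, option, value):
        # Persist the option cluster-wide in the monitor config database.
        subprocess.check_call(["ceph", "config", "set", section, option, str(value)])

    # Option names as quoted in rook/rook discussion 15403; example values only.
    config_set("osd", "bluestore_slow_ops_warn_threshold", 10)
    config_set("osd", "bluestore_slow_ops_warn_lifetime", 600)
    ```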