ceph 19.2.1

  1.

    Ceph RBD storage shows 'rbd error: listing images failed: (2) No such file or directory (500)', despite a working Ceph cluster and a correct keyring

    Hi everyone, I have a problem with Ceph RBD storage under Proxmox VE that I have not been able to solve so far. Although the cluster works and rbd commands run correctly in the shell, Proxmox cannot display any images in the web GUI under Storage → VM Disks. It shows the...
  2.

    CEPH: small cluster with multiple OSDs per NVMe drive

    Hello community! We have deployed our first small Proxmox cluster along with Ceph, and so far we've had a great experience with it. We're running a traditional VM workload (most VMs are idling, and most of the Ceph workload comes from bursts of small files, with the exception of a few SQL servers that...
  3.

    Ceph on HPE DL380 Gen10+ not working

    I have a Proxmox 8.4 cluster with two nodes and one qdevice, with Ceph Squid 19.2.1 recently installed and an additional device to maintain quorum for Ceph. Each node has one SATA SSD, so I have two OSDs (osd.18 and osd.19) created, and I have a pool called poolssd with both. Since Ceph has been...
  4.

    Ceph PG stuck unknown/inactive after upgrade?

    I just did the PVE upgrade and upgraded Ceph to 19.2.1. I did one node after the other, and in the beginning everything seemed fine. But then, when the last of the 3 nodes was scheduled for reboot, I had an accidental power outage and had to do a hard restart of the cluster. Everything came back...
  5.

    [SOLVED] Issue with Ceph 5/3 configuration and corruption

    For context, I have a nine-node Proxmox cluster. We've just added five new nodes, which all have 4 x 3TB NVMe drives. I've set them up with a separate CRUSH rule and added a device class of nvme so they can be used separately from the original four nodes, which are SSD-based and are now on a new replicated...
  6.

    Ceph client still using luminous while on Squid

    Hello, I am trying to put the balancer mode into upmap-read, which requires running: ceph osd set-require-min-compat-client reef. I can't do so because I have a single client running on the luminous compat version: # ceph features { "mon": [ { "features": "0x3f03cffffffdffff"...