pgs

  1.

    Ceph: behavior when a node fails

    Good day, I have a question to make sure I understand Ceph's behavior when a node fails. Scenario: a Ceph cluster with 3+ nodes in a 3/2 configuration; the Ceph storage, including CephFS, is 75+% full. When a node suddenly fails, Ceph begins to redistribute the PGs, or rather...
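    For context, the "3/2" in this scenario refers to the pool's `size`/`min_size` replication settings. A minimal sketch of how those values can be inspected and set, assuming a placeholder pool name `mypool`:

    ```
    # Inspect the replication settings of a pool ("mypool" is a placeholder name)
    ceph osd pool get mypool size        # number of replicas kept per object
    ceph osd pool get mypool min_size    # replicas required for I/O to continue

    # A 3/2 configuration: 3 copies, I/O still allowed while 2 copies remain
    ceph osd pool set mypool size 3
    ceph osd pool set mypool min_size 2

    # After a node failure, watch recovery/backfill progress and free capacity
    ceph -s
    ceph df
    ```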
  2.

    [SOLVED] PGs not being deep-scrubbed in time after replacing disks

    This week we have been balancing storage across our 5-node cluster. Everything is going relatively smoothly, but I am getting a warning in Ceph: "pgs not being deep-scrubbed in time". This only began happening AFTER we made changes to the disks on one of our nodes. Ceph is still healing properly...
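    For reference, a minimal sketch of how this warning is commonly investigated and the lagging PGs rescheduled; the PG id and interval value below are illustrative assumptions, not taken from the thread:

    ```
    # List the PGs that are behind on deep scrubbing
    ceph health detail | grep 'not deep-scrubbed since'

    # Manually trigger a deep scrub for one PG (replace 2.1a with a real PG id)
    ceph pg deep-scrub 2.1a

    # Optionally widen the deep-scrub interval (seconds; 14 days shown as an example)
    ceph config set osd osd_deep_scrub_interval 1209600
    ```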
  3.

    Upgrade to PVE6 and Ceph Nautilus failed

    Hi all, I'm hoping I can get some assistance here. I have been reading forums and guides to try to resolve this issue, to no avail. Last night I upgraded my Proxmox VE to v6 and my Ceph to Nautilus (I followed the upgrade guide on Proxmox's website). I assume at some point I did something wrong...
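    As a starting point for this kind of upgrade problem, a few read-only checks that show how far the upgrade actually got (purely illustrative, not commands from the thread itself):

    ```
    # Confirm the installed Proxmox VE and Ceph package versions
    pveversion -v
    ceph --version

    # Check which release the running daemons report and the overall cluster health
    ceph versions
    ceph -s
    ```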