pgs

  1. [SOLVED] How to set Ceph - # of PGs? - Keeps falling back to 32.

    Hi, I'm not sure how, but my Ceph pool was set to 32 PGs. I found this while investigating slow disk speed on VMs. I changed the PG count in the PVE GUI to the 128 PGs the docs recommend; Ceph starts to rebalance, and then the PG count drops again. It's back down to 32 now. How do I get...
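
    Behavior like this (pg_num repeatedly dropping back to 32) typically comes from the PG autoscaler, which on Nautilus and later shrinks pg_num back toward its own target regardless of what is set in the GUI. A minimal sketch of how to inspect and pin the count, assuming a pool named ceph-vm (the pool name here is illustrative):

      # Show the autoscaler's target PG count per pool
      ceph osd pool autoscale-status

      # Either stop the autoscaler resizing this pool...
      ceph osd pool set ceph-vm pg_autoscale_mode off
      # ...then set the count yourself
      ceph osd pool set ceph-vm pg_num 128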
  2. Ceph: behavior when a node fails

    Good day, I have a question about how Ceph behaves when a node fails. Scenario: a 3+ node Ceph cluster in a 3/2 configuration; the Ceph storage, including CephFS, is 75+% full. When a node fails suddenly, Ceph begins to redistribute the PGs, or rather...
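
    Whether that redistribution actually starts depends on timing: a down OSD is only marked out and backfilled after mon_osd_down_out_interval expires (600 seconds by default). A sketch of how to check that, and how to suppress rebalancing during a planned outage (assuming admin access on a cluster node):

      # Interval after which a down OSD is marked "out" and backfill begins
      ceph config get mon mon_osd_down_out_interval

      # For planned maintenance, keep down OSDs "in" so nothing rebalances
      ceph osd set noout
      # ...do the maintenance, bring the node back...
      ceph osd unset noout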
  3. [SOLVED] PGs not being deep-scrubbed in time after replacing disks

    This week we have been balancing storage across our 5-node cluster. Everything is going relatively smoothly, but I am getting a warning in Ceph: "pgs not being deep-scrubbed in time". This only began happening AFTER we made changes to the disks on one of our nodes; Ceph is still healing properly...
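
    Deep scrubs are deprioritized while recovery and backfill traffic is running, so this warning is common after disk changes. A sketch of how to find and clear the overdue PGs (the PG id below is a hypothetical example taken from the health output):

      # List which PGs are behind on deep scrubbing
      ceph health detail

      # Manually deep-scrub one overdue PG
      ceph pg deep-scrub 2.1f

      # Or widen the deadline while the cluster heals (seconds; two weeks here)
      ceph config set osd osd_deep_scrub_interval 1209600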
  4. Upgrade to PVE6 and Ceph Nautilus Failed

    Hi All, I'm hoping I can get some assistance here. I have been reading forums and guides to try to resolve this issue, to no avail. Last night I upgraded my Proxmox VE to v6 and my Ceph to Nautilus (I followed the upgrade guide on Proxmox's website). I assume at some point I did something wrong...