    [SOLVED] CEPH OSDs Full, Unbalanced PGs, and Rebalancing Issues in Proxmox VE 8

    Scenario

    I have a Proxmox VE 8 cluster with 6 nodes, using Ceph as distributed storage. The cluster consists of 48 OSDs, distributed across 4 servers with SSDs and 2 with HDDs. Monday night, three OSDs reached 100% capacity and crashed:

    - osd.16 (pve118)
    - osd.23 (pve118)
    - osd.24 (pve119)

    Logs...
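When OSDs fill up and crash like this, the usual first step is to check per-OSD utilization and the cluster's full-ratio thresholds. A minimal, read-only diagnostic sketch using standard Ceph CLI commands (run on any node with a cluster keyring, e.g. a Proxmox node shell):

```shell
# Show warnings in detail, including which OSDs are full/nearfull
# and which PGs are affected
ceph health detail

# Per-OSD usage, weight, and PG count, laid out along the CRUSH tree;
# this makes unbalanced PG distribution across SSD/HDD hosts visible
ceph osd df tree

# Current cluster-wide thresholds (full_ratio, backfillfull_ratio,
# nearfull_ratio) that gate writes and backfill
ceph osd dump | grep -E 'full_ratio|backfillfull_ratio|nearfull_ratio'
```

These commands only read cluster state, so they are safe to run while diagnosing; any remediation (reweighting, raising ratios temporarily) should come only after the utilization picture is clear.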