1. G

    [SOLVED] Ceph pool shrinking rapidly after expansion with OSDs (cluster outage probably tomorrow)

    Hello everyone, after I added an SSD pool to my existing HDD pool, the HDD pool has been shrinking extremely fast, so a production outage is probably imminent tomorrow. Original environment: 3-node hyper-converged cluster (PVE version 6.3-6) with distributed Ceph (vers...
  2. C

    [SOLVED] Ceph health warning: backfillfull

    Hello, in my cluster of 4 OSD nodes there is an HDD failure, currently affecting 31 disks. Each node has 48 HDDs of 2 TB each connected. This results in this crushmap: root hdd_strgbox { id -17 # do not change unnecessarily id -19 class hdd # do not change...
  3. ssaman

    [SOLVED] Ceph Health - backfillfull / OSDs marked as out and down

    Hello Proxmox community, today we noticed a health error with the message: HEALTH_ERR 1 backfillfull osd(s); 1 nearfull osd(s); 1 pool(s) backfillfull; Degraded data redundancy: 99961/8029671 objects degraded (1.245%), 19 pgs degraded, 19 pgs undersized; Degraded data redundancy (low space)...
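
All three threads revolve around the same symptom: one or more OSDs crossing Ceph's backfillfull/nearfull thresholds. A common first-response sketch, assuming shell access to a Ceph node (the OSD id `osd.12` below is a placeholder, not taken from the threads):

```shell
# Inspect per-OSD utilization to locate the overfull OSD(s)
ceph osd df tree

# Show the currently configured fill ratios
ceph osd dump | grep -E 'full_ratio|backfillfull_ratio|nearfull_ratio'

# Temporarily raise the backfill threshold so recovery traffic can
# proceed (default backfillfull_ratio is 0.90; raise only slightly,
# and only as a stopgap until data is rebalanced or capacity added)
ceph osd set-backfillfull-ratio 0.92

# Shift data off an overloaded OSD by lowering its reweight
# (osd.12 is a hypothetical id; pick the fullest OSD from `ceph osd df`)
ceph osd reweight osd.12 0.9
```

Raising the ratio does not free space; it only buys time for backfill to rebalance PGs, so the lasting fix in each of these threads was adding capacity or correcting the crush/pool layout.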
