[SOLVED] Ceph pool storage space keeps shrinking / active+remapped+backfill_wait never finishes

johanng

Member
Jan 9, 2021
Hello dear community,

About four weeks ago I switched from iSCSI (on a NAS) to Ceph (RBD). The migration went through without any problems.

My cluster:
3 nodes
2x 1 TB SATA SSDs per node
10 Gbit/s Ceph network

Today I migrated a 300 GB VM disk from iSCSI (on a NAS) to the Ceph pool.
About 10 minutes later the following appeared (screenshot 1).

It also shows that data is being synchronized, but the progress is stuck at 0% (screenshot 2).

The size of the pool keeps shrinking (screenshot 3).

What can I do here?
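
For reference, the overall state can be checked from the CLI with the standard Ceph tooling (a minimal sketch; the output will of course differ per cluster):

Code:
# Overall health, PG states and recovery/backfill progress
ceph -s
# Per-pool usage; MAX AVAIL is derived from the fullest OSD,
# which is why it can shrink while backfill shuffles data around
ceph df
# Per-OSD fill level and balance
ceph osd df tree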

Best regards
 

Attachments

  • Bildschirmfoto 2021-08-08 um 17.27.56.png
  • Bildschirmfoto 2021-08-08 um 17.29.21.png
  • Bildschirmfoto 2021-08-08 um 17.30.42.png
Update:
The log shows the following:
Code:
2021-08-08T18:14:12.846863+0200 mgr.srv1 (mgr.203107) 835823 : cluster [DBG] pgmap v836451: 129 pgs: 1 active+remapped+backfilling, 13 active+remapped+backfill_wait, 115 active+clean; 1015 GiB data, 2.8 TiB used, 2.6 TiB / 5.5 TiB avail; 57 MiB/s rd, 360 KiB/s wr, 58 op/s; 44038/783183 objects misplaced (5.623%); 39 MiB/s, 10 objects/s recovering
2021-08-08T18:14:14.847581+0200 mgr.srv1 (mgr.203107) 835824 : cluster [DBG] pgmap v836452: 129 pgs: 1 active+remapped+backfilling, 13 active+remapped+backfill_wait, 115 active+clean; 1015 GiB data, 2.8 TiB used, 2.6 TiB / 5.5 TiB avail; 67 MiB/s rd, 359 KiB/s wr, 68 op/s; 44038/783183 objects misplaced (5.623%); 26 MiB/s, 6 objects/s recovering
2021-08-08T18:14:16.848290+0200 mgr.srv1 (mgr.203107) 835825 : cluster [DBG] pgmap v836453: 129 pgs: 1 active+remapped+backfilling, 13 active+remapped+backfill_wait, 115 active+clean; 1015 GiB data, 2.8 TiB used, 2.6 TiB / 5.5 TiB avail; 54 MiB/s rd, 392 KiB/s wr, 66 op/s; 43998/783183 objects misplaced (5.618%); 38 MiB/s, 10 objects/s recovering
2021-08-08T18:14:18.849052+0200 mgr.srv1 (mgr.203107) 835826 : cluster [DBG] pgmap v836454: 129 pgs: 1 active+remapped+backfilling, 13 active+remapped+backfill_wait, 115 active+clean; 1015 GiB data, 2.8 TiB used, 2.6 TiB / 5.5 TiB avail; 63 MiB/s rd, 445 KiB/s wr, 65 op/s; 43998/783183 objects misplaced (5.618%); 26 MiB/s, 6 objects/s recovering
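
The misplaced count is at least ticking down (44038 → 43998 within a few seconds), so the backfill is progressing, just throttled. A minimal sketch of what can be checked, and carefully tuned, while it runs (raising osd_max_backfills assumes the SSDs and network have headroom; that is an assumption, not a confirmed fix from this thread):

Code:
# List PGs that are not yet active+clean
ceph pg dump_stuck unclean
# Show the current backfill throttle (historically 1 concurrent backfill per OSD)
ceph config get osd osd_max_backfills
# Temporarily allow more parallel backfills (assumption: the cluster has headroom)
ceph config set osd osd_max_backfills 2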
 
