ceph active+remapped+backfill_toofull recovery problem

hbustos

New Member
Sep 13, 2024
Hello guys, I have a recently installed Proxmox cluster with 3 servers, 10G NIC networking through an Aruba switch, and 11 SSD OSDs (8 x 1.7T, 3 x 3.4T), for a total RAW capacity of 24T, with 12T used by 6 VMs in one pool.

The problem is these 9 PGs, which are stuck in active+remapped+backfill_toofull:

Low space hindering backfill (add storage if this doesn't resolve itself): 9 pgs backfill_toofull
pg 2.12 is active+remapped+backfill_toofull, acting [9,1,6]
pg 2.2c is active+remapped+backfill_toofull, acting [9,7,2]
pg 2.36 is active+remapped+backfill_toofull, acting [10,1,8]
pg 2.39 is active+remapped+backfill_toofull, acting [9,6,2]
pg 2.3f is active+remapped+backfill_toofull, acting [11,1,6]
pg 2.5a is active+remapped+backfill_toofull, acting [11,6,2]
pg 2.6e is active+remapped+backfill_toofull, acting [11,1,8]
pg 2.71 is active+remapped+backfill_toofull, acting [11,8,2]
pg 2.74 is active+remapped+backfill_toofull, acting [9,2,7]


This is the output of ceph df:
root@PVE12:~# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
ssd 24 TiB 12 TiB 12 TiB 12 TiB 50.03
TOTAL 24 TiB 12 TiB 12 TiB 12 TiB 50.03

--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
.mgr 1 1 177 MiB 28 532 MiB 0.03 516 GiB
SSD1 2 128 4.3 TiB 1.19M 12 TiB 88.95 516 GiB
.rgw.root 3 32 1.3 KiB 4 48 KiB 0 516 GiB
default.rgw.log 4 32 242 B 2 32 KiB 0 516 GiB
default.rgw.control 5 32 0 B 8 0 B 0 516 GiB
default.rgw.meta 6 32 0 B 0 0 B 0 516 GiB

root@PVE12:~# ceph osd df tree

[screenshot: ceph osd df tree output]
The recovery process stalls and never reaches more clean PGs. The problem started with unexpected reboots of the physical servers caused by a faulty UPS.

I would really appreciate any help.

thanks

Heiber
 
With replication size 3 you have a max usable capacity of 5.2 TB, as this is the capacity of your smallest host.
Your OSDs are very unbalanced across the hosts, which is not good for such a small cluster.
This is why the OSDs in pve10 reach the nearfull ratio and refuse to take any more data.

You could increase these ratios, but you really need to balance the capacity across the nodes more evenly. Basically you are wasting 20TB in PVE12 because of this.

See https://docs.ceph.com/en/reef/rados/troubleshooting/troubleshooting-osd/#no-free-drive-space
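
If you just need to get the stuck backfill moving again while you rebalance the hardware, temporarily raising the thresholds could look roughly like this (example values only, and set them back to the defaults once the cluster is healthy again):

ceph osd dump | grep ratio             # show the current full / backfillfull / nearfull ratios
ceph osd set-nearfull-ratio 0.90       # default is 0.85
ceph osd set-backfillfull-ratio 0.92   # default is 0.90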
 
Thanks, I appreciate it a lot. Do you think I can change from 3 copies to 2 and rebalance the pool?
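
I guess it would be something roughly like this, using the pool name SSD1 from the ceph df output above (just my assumption, please correct me):

ceph osd pool set SSD1 size 2       # reduce replicas from 3 to 2
ceph osd pool get SSD1 min_size     # check that min_size still makes sense with size 2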

Heiber
 
