Ceph full after adding disks

abzsol

Hi, we have a problem after adding 1 OSD to every node (Intel SSD, 3.8 TB). After adding each disk it takes 7-8 hours to rebalance, but afterwards Ceph starts complaining about full disks, and the pool doesn't grow in size.

There are 3 nodes, each with 2x 300 GB disks for the OS and 4x 3.8 TB SSDs for Ceph. Doing the maths, I should have 3.45 TB × 4 = 13.8 TB net usable (with 3 replicas across 3 nodes, usable space works out to one node's worth of raw capacity).
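As a sanity check, the replica count behind that calculation can be read straight from the pool (assuming the usual size=3/min_size=2 settings rather than anything exotic):

Code:
# Show the pool's replica count; with size=3 and one copy per node,
# net usable space is roughly raw capacity / 3
ceph osd pool get cephpool size
ceph osd pool get cephpool min_size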

[screenshot: 1592035119367.png]

[screenshot: 1592035153061.png]

[screenshot: 1592035347150.png]
The arrow indicates when I added the three new disks, on 12/06/2020 at 09:30.

The total size of the VMs should be 4-5 TB max.

Code:
root@pve01:~# ceph df
RAW STORAGE:
    CLASS     SIZE       AVAIL      USED       RAW USED     %RAW USED
    ssd       42 TiB     15 TiB     27 TiB       27 TiB         64.93
    TOTAL     42 TiB     15 TiB     27 TiB       27 TiB         64.93
 
POOLS:
    POOL         ID     STORED     OBJECTS     USED       %USED     MAX AVAIL
    cephpool      1     14 TiB       3.89M     27 TiB     88.42       1.2 TiB
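
Note the gap between STORED (14 TiB) and MAX AVAIL (1.2 TiB): MAX AVAIL is derived from the fullest OSDs, so an uneven data distribution shrinks it. Per-OSD utilisation can be checked with:

Code:
# Per-OSD usage and variance; the pool's MAX AVAIL is limited
# by the most-full OSD, not by the average
ceph osd df tree
# Current nearfull/backfillfull/full thresholds
ceph osd dump | grep ratio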


Code:
root@pve01:~# rados df
POOL_NAME   USED OBJECTS  CLONES   COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED   RD_OPS      RD  WR_OPS      WR USED COMPR UNDER COMPR
cephpool  27 TiB 3887215 2475512 11661645                  0       0        0 19148011 111 GiB 8627132 182 GiB        0 B         0 B

total_objects    3887215
total_used       27 TiB
total_avail      15 TiB
total_space      42 TiB
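
One thing that stands out above is the CLONES count (~2.5M): snapshot clones also take up space, which may explain part of the gap between STORED and what I expected. Assuming the pool holds RBD images, per-image usage including snapshots can be listed with:

Code:
# Actual vs provisioned usage per RBD image and its snapshots;
# shows whether snapshots account for the extra space
rbd du -p cephpool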

Thanks
 
I've tried to enable the balancer feature:

Code:
ceph mgr module enable balancer
ceph balancer mode upmap
ceph balancer on

and the available space increased a bit, but the distribution is still not perfect. I have noticed that the pool's available space is slowly increasing over time. Maybe the rebalancing only happens when data is modified or rewritten?
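
Balancer progress can be followed with the commands below; note that upmap mode also requires all clients to speak at least Luminous:

Code:
# Is the balancer active, and is a plan in progress?
ceph balancer status
# Score the current PG distribution (lower is better)
ceph balancer eval
# upmap mode requires Luminous-or-newer clients
ceph osd set-require-min-compat-client luminous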
[screenshot: 1592203865183.png]

[screenshot: 1592204266091.png]

Any hints?
 