Ceph consuming too much space

DrillSgtErnst

Well-Known Member
Jun 29, 2020
So I created a Ceph cluster with 5 nodes and 30 disks.

I copied my VM disks over from NFS. All servers together amounted to about 2.5 TB of space on the NFS.
On Ceph they now claim around 23 TB. Replica rule is 3/2.

I would expect a factor of 3 plus some Ceph tax for logs and such, but ten times the amount seems a bit much. All hard disks are virtio SCSI with discard enabled.
I am a bit puzzled. I am at 50% full right now and was expecting way less.
Any ideas?
 
Code:
--- RAW STORAGE ---
CLASS    SIZE   AVAIL    USED  RAW USED  %RAW USED
nvme   51 TiB  29 TiB  21 TiB    21 TiB      42.13
TOTAL  51 TiB  29 TiB  21 TiB    21 TiB      42.13
 
--- POOLS ---
POOL  ID  PGS   STORED   (DATA)  (OMAP)  OBJECTS     USED   (DATA)   (OMAP)  %USED  MAX AVAIL  QUOTA OBJECTS  QUOTA BYTES  DIRTY  USED COMPR  UNDER COMPR
.mgr   1    1  2.8 MiB  2.8 MiB     0 B        2  8.5 MiB  8.5 MiB      0 B      0    8.6 TiB            N/A          N/A    N/A         0 B          0 B
ceph   2  512  7.3 TiB  7.3 TiB  60 KiB    1.86M   22 TiB   22 TiB  180 KiB  46.10    8.6 TiB            N/A          N/A    N/A         0 B          0 B
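A quick sanity check of the numbers above (a sketch; the ~2.5 TB and replica-3 figures are taken from the post, the 7.3 TiB and 22 TiB from the ceph df output):

```python
# Figures from the ceph df output and the post above.
stored_tib = 7.3   # STORED column of the "ceph" pool (logical data)
used_tib = 22.0    # USED column (raw, after replication)
replicas = 3       # replica rule 3/2
copied_tb = 2.5    # amount copied over from NFS

# Replication alone explains USED: STORED x 3 is roughly 22 TiB,
# so the replica rule is behaving as configured.
print(stored_tib * replicas)   # ~21.9, close to the 22 TiB shown

# The real anomaly is STORED itself: ~7.3 TiB logical vs ~2.5 TB copied,
# i.e. roughly 3x more logical data than expected. That points at thin
# provisioning / discard not working, not at replication overhead.
print(stored_tib / copied_tb)  # roughly a factor of 3
```

In other words, the 10x factor decomposes into 3x replication (expected) times ~3x unreclaimed logical space (the actual problem).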

I had around 3TB with all disks on ZFS.
I feel like discard is not working at all.
 
Have you tried running fstrim in your VM OS? # fstrim -a
fstrim is not available for virtio block

I will run further diagnostics later. It seems thin provisioning is at fault with virtio-blk instead of virtio-scsi with SSD emulation.
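A possible fix along those lines, sketched for Proxmox (the VM ID 100 and the storage/volume name below are assumptions, check your own with `qm config`): switch the disk from virtio-blk to virtio-scsi with discard and SSD emulation, then trim inside the guest.

```shell
# Shut down the VM first. VM ID 100 and the volume name are hypothetical.

# Use the virtio-scsi controller for the VM
qm set 100 --scsihw virtio-scsi-pci

# Re-attach the disk as scsi0 with discard and SSD emulation enabled
# (the "ceph:vm-100-disk-0" volume name is an assumption)
qm set 100 --scsi0 ceph:vm-100-disk-0,discard=on,ssd=1

# Inside the guest after booting, verify discard is actually supported
# and reclaim the dead space:
lsblk --discard   # non-zero DISC-GRAN / DISC-MAX means discard works
fstrim -av        # trim all mounted filesystems that support it
```

After the fstrim completes, the STORED figure in `ceph df` should drop back toward the amount of data actually present in the guests.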