Hi everyone.
I find myself in this situation and I don't know how to fix it. I did not run any benchmarks. I think it's a copy or move issue, but I'm not sure. This is the second time this has happened to me. The first time, I moved all the VMs off and formatted the whole pool. On this second cluster I have the same problem. I updated Ceph to 16.2.7 a few days ago and it did some cleanup, but not enough. Do I need to format this second cluster as well?
Thank you.
Code:
ceph --version
ceph version 16.2.7 (f9aa029788115b5df5eeee328f584156565ee5b7) pacific (stable)
ceph df detail
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
ssd 105 TiB 54 TiB 51 TiB 51 TiB 48.63
TOTAL 105 TiB 54 TiB 51 TiB 51 TiB 48.63
--- POOLS ---
POOL ID PGS STORED (DATA) (OMAP) OBJECTS USED (DATA) (OMAP) %USED MAX AVAIL QUOTA OBJECTS QUOTA BYTES DIRTY USED COMPR UNDER COMPR
device_health_metrics 1 2 0 B 0 B 0 B 0 0 B 0 B 0 B 0 14 TiB N/A N/A N/A 0 B 0 B
test-storage 2 512 52 TiB 52 TiB 18 MiB 14.15M 50 TiB 50 TiB 53 MiB 54.36 14 TiB N/A N/A N/A 0 B 0 B
rados df
POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR USED COMPR UNDER COMPR
test-storage 50 TiB 14149461 11291125 42448383 0 0 0 4887302639 126 TiB 9651281185 229 TiB 0 B 0 B
device_health_metrics 0 B 0 0 0 0 0 0 0 0 B 0 0 B 0 B 0 B
total_objects 14149461
total_used 51 TiB
total_avail 54 TiB
total_space 105 TiB
rados ls -p test-storage | grep rbd_data | sort | awk -F. '{ print $2 }' |uniq -c |sort -n |wc -l
105
rbd ls test-storage | wc -l
86