I played around with a nested Proxmox instance and set up a Ceph cluster there with 3 nodes and 3 OSDs.
ceph df shows 50% raw usage even though all the pools are empty.
Can I clean that up somehow?
Code:
# ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    600 GiB  300 GiB  300 GiB   300 GiB      50.05
TOTAL  600 GiB  300 GiB  300 GiB   300 GiB      50.05

--- POOLS ---
POOL                  ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
.mgr                   1    1  577 KiB        2  1.7 MiB      0     90 GiB
cephfs_data_data       2   32      0 B        0      0 B      0     90 GiB
cephfs_data_metadata   3   32   36 KiB       22  194 KiB      0     90 GiB
volumes                5   32      0 B        0      0 B      0     90 GiB
Code:
# rados df
POOL_NAME             USED     OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS  RD       WR_OPS  WR       USED COMPR  UNDER COMPR
.mgr                  1.7 MiB        2       0       6                   0        0         0     539  1.0 MiB     419  6.4 MiB         0 B          0 B
cephfs_data_data          0 B        0       0       0                   0        0         0       0      0 B       0      0 B         0 B          0 B
cephfs_data_metadata  194 KiB       22       0      66                   0        0         0      73  117 KiB     110   81 KiB         0 B          0 B
volumes                   0 B        0       0       0                   0        0         0       0      0 B       0      0 B         0 B          0 B

total_objects    24
total_used       300 GiB
total_avail      300 GiB
total_space      600 GiB
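
In case it helps with diagnosis, I assume the next step would be to check where the raw usage actually sits, e.g. whether the 300 GiB is spread evenly across the three OSDs or concentrated on one of them. These are the standard Ceph commands I'd run for that (no special flags, just the stock CLI); I can post their output if needed:

Code:
# Per-OSD raw usage, broken down by host; shows whether the
# 300 GiB is evenly distributed across the 3 OSDs
ceph osd df tree

# Cluster health plus any detailed warnings about the OSDs
ceph health detail

# Per-pool settings (replication size, pg_num, etc.)
ceph osd pool ls detail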