I have recently migrated all VMs on a PVE cluster from an HDD pool to an SSD pool. Now that the HDD pool is empty (no VMs on it), Ceph still reports 31% usage on the pool.
This Ceph cluster has been in use for a while now and was upgraded from Ceph 12 to 14 to 15.
This is the second cluster where I have noticed large amounts of orphan data after a migration, so what are the steps to detect and purge orphan data from a pool?
Bash:
# ceph df
--- RAW STORAGE ---
CLASS  SIZE    AVAIL    USED     RAW USED  %RAW USED
hdd    11 TiB  7.6 TiB  3.2 TiB  3.3 TiB       29.93
ssd    17 TiB  5.4 TiB  12 TiB   12 TiB        69.20
TOTAL  28 TiB  13 TiB   15 TiB   15 TiB        54.10

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
ssd-pool                1  256  4.0 TiB    1.08M  12 TiB   82.26    888 GiB
hdd-pool                2  256  1.1 TiB  315.01k  3.2 TiB  31.41    2.4 TiB
device_health_metrics   3    1   24 MiB       25   71 MiB      0    1.4 TiB
# rados -p hdd-pool ls | grep -v "rbd_data"
rbd_object_map.8e3370a3e51bbc.000000000000004f
rbd_directory
rbd_object_map.8ed54fd26ecf05
rbd_object_map.8ed54fd26ecf05.000000000000004e
rbd_children
rbd_info
rbd_header.8e3370a3e51bbc
rbd_object_map.8e3370a3e51bbc
rbd_trash
rbd_header.8ed54fd26ecf05
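For what it's worth, this is what I was planning to check next, based on the object names in the listing above: list the images still registered in the pool and anything sitting in the RBD trash, dump the pool's rbd_directory to map the leftover rbd_header objects back to image names, and look up the one image ID that appears in the headers (8ed54fd26ecf05, taken from the output above). I'm not sure this is the right approach, hence the question:

Bash:
# rbd ls -l -p hdd-pool
# rbd trash ls -p hdd-pool
# rados -p hdd-pool listomapvals rbd_directory
# rbd info -p hdd-pool --image-id 8ed54fd26ecf05

If those headers turn out to belong to images that nothing references any more, I assume rbd rm and rbd trash purge -p hdd-pool would reclaim the space, but I'd rather have confirmation before deleting anything.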