[SOLVED] Ceph Pacific. RADOS. Objects are not deleted, but only orphaned

SaymonDzen

ceph version 16.2.6 (1a6b9a05546f335eeeddb460fdc89caadf80ac7a) pacific (stable)
Added a file to the bucket:
```
radosgw-admin --bucket=support-files bucket radoslist | wc -l
96

ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL   USED     RAW USED  %RAW USED
hdd     44 TiB   44 TiB  4.7 GiB   4.7 GiB       0.01
TOTAL   44 TiB   44 TiB  4.7 GiB   4.7 GiB       0.01

--- POOLS ---
POOL                        ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics       21    1  242 KiB        6  727 KiB      0     14 TiB
.rgw.root                   22   32  3.2 KiB        6   72 KiB      0     14 TiB
default.rgw.log             23   32   61 KiB      209  600 KiB      0     14 TiB
default.rgw.control         24   32      0 B        8      0 B      0     14 TiB
default.rgw.meta            25    8  1.9 KiB        9   96 KiB      0     14 TiB
default.rgw.buckets.index   26    8   47 KiB       22  142 KiB      0     14 TiB
default.rgw.buckets.data    27   32  377 MiB       96  1.1 GiB      0     14 TiB
default.rgw.buckets.non-ec  28   32  110 KiB        0  330 KiB      0     14 TiB
test.bucket.data            30   32      0 B        0      0 B      0     14 TiB
```
Removed the file from the bucket:
```
radosgw-admin --bucket=support-files bucket radoslist | wc -l
0

ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL   USED     RAW USED  %RAW USED
hdd     44 TiB   44 TiB  4.7 GiB   4.7 GiB       0.01
TOTAL   44 TiB   44 TiB  4.7 GiB   4.7 GiB       0.01

--- POOLS ---
POOL                        ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics       21    1  242 KiB        6  727 KiB      0     14 TiB
.rgw.root                   22   32  3.2 KiB        6   72 KiB      0     14 TiB
default.rgw.log             23   32   89 KiB      209  696 KiB      0     14 TiB
default.rgw.control         24   32      0 B        8      0 B      0     14 TiB
default.rgw.meta            25    8  1.9 KiB        9   96 KiB      0     14 TiB
default.rgw.buckets.index   26    8   47 KiB       22  142 KiB      0     14 TiB
default.rgw.buckets.data    27   32  377 MiB       95  1.1 GiB      0     14 TiB
default.rgw.buckets.non-ec  28   32  110 KiB        0  330 KiB      0     14 TiB
test.bucket.data            30   32      0 B        0      0 B      0     14 TiB
```
As you can see, the pool was not actually cleaned up.

All of the following come back clean:

```
radosgw-admin --bucket=support-files bucket check
radosgw-admin --bucket=support-files gc process
radosgw-admin --bucket=support-files gc list
```

```
rados -p default.rgw.buckets.data ls | wc -l
95

rgw-orphan-list default.rgw.buckets.data
cat ./rados-20211018094020.intermediate | wc -l
95
```

The object name mask in ./rados-20211018094020.intermediate is <bucket-id>__shadow_<file_name>~* or <bucket-id>__multipart_<file_name>~*

There were no errors in the RGW log during the deletion:

```
2021-10-18T13:09:06.536+0300 7fcf0564a700 1 ====== starting new request req=0x7fd0e446c820 =====
2021-10-18T13:09:06.608+0300 7fcf6ef1d700 1 ====== req done req=0x7fd0e446c820 op status=0 http_status=204 latency=0.072000369s ======
2021-10-18T13:09:06.608+0300 7fcf6ef1d700 1 beast: 0x7fd0e446c820: 10.77.1.185 - nextcloud [18/Oct/2021:13:09:06.536 +0300] "DELETE /support-files/debian-11.0.0-amd64-netinst.iso HTTP/1.1" 204 0 - "S3 Browser/10.0.9 (https://s3browser.com)" - latency=0.072000369s
```
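For reference, the leftover objects' prefix can be compared against the bucket marker to confirm they really belong to this bucket. This is only a rough sketch; extracting the "marker" field with grep is an assumption about the JSON layout of the radosgw-admin bucket stats output:

```
# Sketch only: check that the orphaned tail objects carry this bucket's marker prefix.
# The grep-based extraction of the "marker" field is an assumption about the JSON layout.
MARKER=$(radosgw-admin bucket stats --bucket=support-files | grep -Po '"marker": "\K[^"]+')
rados -p default.rgw.buckets.data ls | grep "^${MARKER}__" | head
```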
Any idea what the problem might be?
 
This is not a bug. By default it takes about two hours before deleted objects are actually removed from the pool (the wait is configurable, see the garbage-collection settings: https://docs.ceph.com/en/latest/radosgw/config-ref/#garbage-collection-settings). To clean up immediately, run the garbage collector with the --include-all parameter:

```
radosgw-admin gc list --include-all
radosgw-admin gc process --include-all
```
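If you would rather not trigger the collector by hand every time, the two-hour wait can also be shortened via the garbage-collection settings from the docs page linked above. A minimal sketch using the centralized config; the client.rgw target and the values are only examples, adjust them for your own setup:

```
# Example only - the config target and values are placeholders for your setup.
ceph config set client.rgw rgw_gc_obj_min_wait 300        # seconds before a deleted object becomes eligible for GC (default: 2 hours)
ceph config set client.rgw rgw_gc_processor_period 300    # how often a GC cycle is started
```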

Also, the behavior of the garbage collector when deleting a pool via the web GUI is not entirely obvious, so before deleting the pool you should run the collector manually with the --include-all key.

https://tracker.ceph.com/issues/52964
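For example, a pre-delete check could look roughly like this (just a sketch, reusing the commands above plus a plain rados listing):

```
# Drain the GC queue and make sure the data pool really is empty before removing it.
radosgw-admin gc process --include-all
radosgw-admin gc list --include-all             # expect an empty list once everything is processed
rados -p default.rgw.buckets.data ls | wc -l    # expect 0 leftover objects
```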
 
