S3/Backblaze B2 not deleting GC chunks

watergard

New Member
Jun 24, 2024
On PBS 4.0.18 using S3 storage (Backblaze B2), I've had issues with garbage collection not deleting chunks.
The prune and GC jobs both run fine and report that everything was deleted correctly; however, I see no change in B2.
I'm not using any versioning or object locking.
I made the mistake of enabling those during the PBS beta and thought that was the cause (I've since rebuilt and re-synced).
I've also ensured the S3 provider quirk 'Skip If-None-Match header' is set.

I've deleted a VM I no longer need (and lowered the GC access-time cutoff so its chunks get cleaned up right away).
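For reference, I lowered the cutoff via the datastore tuning options (a sketch; "b2store" is just my datastore's name, and the value is in minutes):

    # assumption: the datastore is named "b2store"
    # gc-atime-cutoff is given in minutes (default 1445, i.e. 24h 5m)
    proxmox-backup-manager datastore update b2store --tuning 'gc-atime-cutoff=10'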
In B2 I saw no change in the bucket size, and the total file count actually increased.
I double-checked the logs in journalctl and saw nothing noteworthy for this GC run.

Any assistance would be greatly appreciated.
I can upload any additional logs as needed.

(Attached: screenshots of the B2 bucket settings and the GC task log.)
Hi,
your screenshots show File Lifecycle: Keep all versions. I suppose that is the cause: GC does remove the chunks (as seen in the task log), but with that setting B2 only hides deleted objects and keeps the latest and all older versions. You will have to set lifecycle rules for the bucket, as described in https://www.backblaze.com/docs/cloud-storage-lifecycle-rules
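For example, a custom rule that permanently deletes hidden versions one day after GC hides them could look like this (a sketch using the classic b2 CLI; newer releases use `b2 bucket update` instead, and the bucket name is a placeholder):

    # "my-pbs-bucket" is a placeholder bucket name
    # daysFromUploadingToHiding: null -> live objects are never auto-hidden,
    #                                    so chunks PBS still references stay intact
    # daysFromHidingToDeleting: 1    -> versions hidden by a delete (e.g. from GC)
    #                                    are permanently removed after one day
    b2 update-bucket --lifecycleRules '[{
      "fileNamePrefix": "",
      "daysFromUploadingToHiding": null,
      "daysFromHidingToDeleting": 1
    }]' my-pbs-bucket allPrivate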
 
Any specific guidance on that?
My worry is that chunks could age out via the lifecycle rule while they should still be retained under the PBS retention policy.

Or is the behavior that active chunks are kept, GC marks old ones for deletion (which B2 merely hides), and B2's lifecycle rule only cleans up those hidden versions?

Thanks for the help!
 
After some additional testing, setting the lifecycle to "Keep only the last version" did the trick.
This built-in setting is equivalent to a custom rule on the bucket root with "days from upload to hiding" set to null and "days from hiding to deletion" set to 1 (i.e. the rule sketched above).
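To confirm the hidden versions actually disappear, listing object versions before and after the lifecycle kicks in works (a sketch with the classic b2 CLI; the bucket name is a placeholder):

    # lists all versions, including hide markers, so you can watch
    # hidden chunk versions get cleaned up over time
    b2 ls --long --versions my-pbs-bucket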

I'm going to run a verify on some backups, but everything appears to be OK for now.
 