PBS 2.4-2 datastore filling up despite pruning/GC running


New Member
Mar 17, 2023
I have a PBS datastore that is slowly filling up. I am backing up 200+ VMs with a retention time of 30 days. The prune job is scheduled to run every night at midnight, and garbage collection is scheduled to run "daily" (there is no option to specify an exact date/time in the menu dropdown).

All jobs (backup, prune, GC) complete successfully every night and there does not appear to be any issue, except that I am slowly running out of disk space. I have checked that there is 30 days' worth of data per VM and there are no VM snapshots. According to the GUI, I will run out of disk space within 12 days.

What is going on and how can I free up this disk space? Everything is working fine and there are no errors.
Is it possible that your backups add more data every day than gets pruned? The backup logs contain information about added chunks (you'd have to add them up yourself, though), and the GC log about removed chunks.
Thanks for the reply. Where can I find these logs?

Also, am I correct in setting it to "keep-last 30"? I don't need long-term retention and the data on these VMs rarely changes. Should I use keep-daily instead?
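For context on the difference: keep-last N keeps the N newest snapshots regardless of date (an extra ad-hoc backup shortens the effective window), while keep-daily N keeps the newest snapshot per calendar day for N distinct days. A hedged CLI sketch, where "store1" and the job id are placeholders for your own names:

```shell
# keep-daily 30 retains the newest snapshot per day for 30 distinct days,
# which usually matches the intent of "30 days of retention":
proxmox-backup-manager prune-job update daily-prune --keep-daily 30
# or set the retention on the datastore itself:
proxmox-backup-manager datastore update store1 --keep-daily 30
```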
Where can I find these logs?
Go to "Administration -> Tasks" and double-click the task whose log you want to see.

And how is your datastore set up? Maybe discard/trim isn't working, so the space of the chunks the GC job deletes never actually gets freed.
Thanks. I've taken a look at the logs. I also changed the retention from 30 days to 23 days and manually ran the prune and GC jobs. The last few lines of the GC log were:

2023-06-27T14:30:17-04:00: Removed garbage: 0 B
2023-06-27T14:30:17-04:00: Removed chunks: 0
2023-06-27T14:30:17-04:00: Pending removals: 2.989 TiB (in 2047547 chunks)
2023-06-27T14:30:17-04:00: Original data usage: 203.219 TiB
2023-06-27T14:30:17-04:00: On-Disk usage: 7.793 TiB (3.83%)
2023-06-27T14:30:17-04:00: On-Disk chunks: 5297533
2023-06-27T14:30:17-04:00: Deduplication factor: 26.08
2023-06-27T14:30:17-04:00: Average chunk size: 1.543 MiB
2023-06-27T14:30:17-04:00: TASK OK

However, the disk space did not get freed up. I took a look at the individual VM backups and see things like this:

2023-06-28T04:32:29-04:00: Checksum: 29543e4c507cb12ab246d66ef0ce6573753207ee60a7a005a27e83e83a23b59d
2023-06-28T04:32:29-04:00: Size: 4487905280
2023-06-28T04:32:29-04:00: Chunk count: 1070
2023-06-28T04:32:29-04:00: Upload size: 4471128064 (99%)
2023-06-28T04:32:29-04:00: Duplicates: 4+2 (0%)
2023-06-28T04:32:29-04:00: Compression: 35%
2023-06-28T04:32:29-04:00: successfully closed fixed index 1
2023-06-28T04:32:29-04:00: add blob "/mnt/backup/store1/vm/XXXX/2023-06-28T08:31:04Z/index.json.blob" (326 bytes, comp: 326)
2023-06-28T04:32:40-04:00: successfully finished backup
2023-06-28T04:32:41-04:00: backup finished successfully
2023-06-28T04:32:41-04:00: TASK OK

Does this mean that the backup of this VM wrote new data of 4487905280 bytes (4.4GB)?

The datastore sits on 2x 16 TB IronWolf hard drives, so I don't think trim/discard is in play.

After changing the retention policy and running GC, do I need to wait 24 hours for the space to be freed?
Yeah, the log says only 6 chunks were re-used for that backup: 4 by the client, 2 more by the server.
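As a rough sketch of what that backup log implies (all numbers copied from the log above; a fixed index uses uniformly sized chunks, so dividing total size by chunk count gives the chunk size):

```shell
# Values from the backup task log above:
size=4487905280        # "Size" reported by the client, in bytes
chunks=1070            # "Chunk count"
dups=6                 # "Duplicates: 4+2" -> 4 client-side + 2 server-side

new_chunks=$(( chunks - dups ))
avg_chunk=$(( size / chunks ))   # 4 MiB for a fixed-index VM backup
echo "new chunks: $new_chunks (~$(( new_chunks * avg_chunk / 1024 / 1024 )) MiB before compression)"
# -> new chunks: 1064 (~4256 MiB before compression)
```

So nearly the whole disk image was written as new chunks that day; with the reported 35% compression, somewhat less than that actually lands on disk, but it is still several GiB of new data per day for this one VM.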

If the span between the last GC before the prune and the GC after the prune was not more than 24 hours, then yes, you need to wait for a later GC run to actually remove the pruned backups' chunks.
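The "Pending removals" line in the GC log reflects this grace period: GC only deletes chunks whose access time is older than roughly 24 hours and 5 minutes, so freshly pruned chunks survive one run and disappear on a later one. A toy illustration of that cutoff logic (not the real PBS code, just the same atime test applied to a temporary directory):

```shell
# Simulate two chunks: one touched recently, one last accessed 25h ago.
store=$(mktemp -d)
touch "$store/fresh_chunk"                       # atime = now -> kept
touch -a -d '25 hours ago' "$store/old_chunk"    # atime old -> removable
cutoff_min=1445                                  # 24h + 5min, in minutes
removable=$(find "$store" -type f -amin +"$cutoff_min" | wc -l)
echo "removable chunks: $removable"              # -> removable chunks: 1
rm -r "$store"
```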

