GC force, or other method to free up space?

Vampire337

New Member
I made a mistake: I added a second instance of my Proxmox Backup Server datastore under Datacenter > Storage, then updated my backup job under Datacenter > Backup to use this new instance of the datastore. My mistake was that I neglected to go to the Encryption tab and upload my key. So the next time the backup ran, it saw literally everything as changed from the previous encrypted backup, and it subsequently filled the datastore, which had around 2 TB of 8 TB free.
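(As an aside, for anyone wondering why a missing key causes a full re-upload: as I understand it, chunks are matched purely by their digest, so the same data backed up without encryption produces completely different chunks. This is just a toy illustration with standard tools, nothing to do with PBS's actual on-disk format:)

```
# toy illustration only (not PBS code): a chunk store matches chunks by digest,
# so the same data with and without encryption hashes to different chunks
printf 'the same guest data' | sha256sum
printf 'the same guest data' | openssl enc -aes-256-cbc -pbkdf2 -pass pass:mykey | sha256sum
# two different digests -> the server treats everything as new and stores it all again
```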

I have corrected the issue by adding the encryption key, and am now trying to remove the 2 TB worth of bad backups. Earlier this week I manually deleted the unencrypted backups, and I noticed the usage did not change. I read about garbage collection and how chunks are only actually removed once they're older than a roughly one-day cutoff. I'm a little fuzzy on the details there, but I waited a few days and my backup datastore is still full. I checked the scheduled GC job in PBS and it says "Error: atime safety check failed: update atime failed for chunk/file... ENOSPC: No space left on device". Okay, that's accurate, but I believe I need to run the garbage collection to free up space. I tried running a manual garbage collect from the CLI, "proxmox-backup-manager garbage-collect start backups-8tb". Same error.
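For reference, this is roughly what I ran over SSH on the PBS VM (the docs spell the subcommand garbage-collection; the datastore name is from my setup):

```
# start a manual garbage collection run for the datastore
proxmox-backup-manager garbage-collection start backups-8tb

# show the result of the last GC run for that datastore
proxmox-backup-manager garbage-collection status backups-8tb
```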

Is there a different method or command I should be using to clean up the mess I made? Or will I need to wipe all the actual backups from my backup drive and start over?
 
As in my original post, I've deleted multiple backups, both today and several days ago (well beyond the 24h+5m threshold). Garbage collection will not run because it detects there is not enough disk space, and deleting backups alone has not freed up any space. So I'm in a catch-22.

I've read through the links you posted, but I don't see anything that stands out as helping in my particular situation.
 
how is your datastore's storage configured? if you cannot add more space, you really need to free up some space (for example, by deleting some more snapshots - but you need to ensure no new backups or syncs are attempted that would immediately undo this!) and then trigger a manual GC.
 
I have a TrueNAS server with an 8 TB mirrored array that is shared via NFS over a QSFP 40GbE connection to my Proxmox VE server. There is no way to increase the capacity short of buying larger drives and replacing the ones in this backups array:
[screenshot: TrueNAS backups pool]

Proxmox Backup Server is running as a VM on that Proxmox host with two virtual NICs; one is connected to my 1GbE network and the other to the 40GbE link. The NFS share is mounted at /mnt/backups-8tb/ via fstab, and that folder was added as a datastore in PBS.
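The fstab entry is nothing special; roughly like this (the server address and export path below are placeholders, not my real values):

```
# /etc/fstab on the PBS VM: NFS export from TrueNAS mounted at the datastore path
192.168.40.2:/mnt/tank/backups-8tb   /mnt/backups-8tb   nfs   defaults,_netdev   0   0
```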

[screenshots: NFS mount and PBS datastore configuration]

There are no snapshots involved because I don't have any VMs stored on this drive, and deleting backups clearly does not free storage space if GC won't run. I am also running the latest version of PBS; I just installed the 4.0.14 updates, which also confirms the VM itself is not out of disk space.

Today I tried garbage collection again, and again it says there is not enough space to attempt it.
[screenshot: garbage collection refusing to run for lack of space]

No backups have been able to run for about a week now due to the lack of space. I did note that the backup schedule was still active, so I disabled it this morning. I also set the GC Access-Time Cutoff to 1 minute to make sure that isn't my issue:

[screenshot: GC Access-Time Cutoff set to 1 minute]
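I changed that in the GUI (datastore Options > Tuning); if I read the docs right, the CLI equivalent would be something along these lines, though I haven't double-checked the exact option syntax:

```
# sketch only: set the GC access-time cutoff tuning option to 1 minute
# (option name/units taken from the PBS tuning docs as I understand them; verify first)
proxmox-backup-manager datastore update backups-8tb --tuning 'gc-atime-cutoff=1'
```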
 
Since my last post I've tried moving files off the backups datastore. I moved the /vm/ directory and /.cache/0000 through /.cache/000f to the VM's local storage (in a folder called /mnt/temp). My intention was to add symbolic links pointing to the new location of these files, so I'm not risking losing any data. Despite having moved around 900 MB off my backups-8tb drive, it still shows 0 B free in the SSH session as well as on the TrueNAS dashboard. Apparently removing files doesn't work the way I had expected.
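The move itself was nothing fancy; roughly this (paths are from my setup):

```
# move the vm/ tree and the first sixteen .cache sub-directories to local VM storage
mkdir -p /mnt/temp
mv /mnt/backups-8tb/vm /mnt/temp/
mv /mnt/backups-8tb/.cache/000? /mnt/temp/    # 0000 through 000f

# check whether any space was actually freed on the NFS mount
df -h /mnt/backups-8tb
```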

[screenshots: df output and TrueNAS dashboard both showing 0 B free]
 
that is also very dangerous!

you can try moving some of the .chunks hierarchy to other storage (without adding symlinks) and then run GC. it will complain about missing chunks, but should hopefully clear up enough space so that you can move them back and re-run GC a second time.

don't attempt to move any backup metadata directories; if you do something that PBS doesn't understand, GC might clear out *all* the chunks. you can try to free up space by *removing* backup metadata dirs (what I called "snapshots" above, which in PBS terminology doesn't mean storage-level snapshots), but that means permanent removal of (some) backups.
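something along these lines (assuming the datastore root is /mnt/backups-8tb and the default .chunks layout; adapt as needed):

```
# park a slice of the chunk store elsewhere to get some free space on the datastore
mkdir -p /mnt/temp/parked-chunks
mv /mnt/backups-8tb/.chunks/00?? /mnt/temp/parked-chunks/

# run GC - it will warn about missing chunks, but should remove unreferenced ones
proxmox-backup-manager garbage-collection start backups-8tb

# move the parked chunk directories back, then run GC a second time
mv /mnt/temp/parked-chunks/* /mnt/backups-8tb/.chunks/
proxmox-backup-manager garbage-collection start backups-8tb
```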
 
I get the warnings about the "danger", but... nothing I've tried or that has been suggested has worked so far, and it looks like I'm just going to have to blow away the entire backups datastore and start over.

I've deleted multiple backups from multiple VMs, and even deleted all backups for one VM, and not a single byte of space has been freed. As shown in my screenshots above, I did try moving some of the .chunks hierarchy to other storage (without adding symlinks, because there is no space) and then running GC. It does not complain about missing chunks; it says I have no space, and the GC absolutely will not run. Even after moving 900 MB off the datastore, I remain stuck here:
[screenshot: GC still blocked by the out-of-space error]
 
In case anyone else reads this looking for an answer in a similar situation: I gave in, wiped all my backups, and we're starting over with a fresh backup. Put some junk files on your datastore drive so you can delete them if something like this happens, I guess.
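Concretely, I mean something like this; incompressible data on purpose, since ZFS compression would otherwise shrink a file of zeros down to nothing (the size is arbitrary):

```
# pre-create a ~10 GiB ballast file on the datastore that can be deleted in an emergency
dd if=/dev/urandom of=/mnt/backups-8tb/ballast.img bs=1M count=10240 status=progress

# if the datastore ever fills up completely, delete it to give GC room to work:
#   rm /mnt/backups-8tb/ballast.img
```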

Fortunately my test lab isn't a production environment, but it will be hard to recommend this to my industrial customers.
 
that sounds like a storage problem to be honest; I don't know why removing files has no actual effect for your setup.
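if you ever revisit it, I'd check on the TrueNAS side whether snapshots or a reservation on that dataset are holding on to the space - something like this (the dataset name is just a guess):

```
# on the TrueNAS host: see what is actually consuming space on the backups dataset
zfs list -o name,used,avail,usedbydataset,usedbysnapshots,refreservation tank/backups-8tb

# list any snapshots of that dataset
zfs list -t snapshot -r tank/backups-8tb
```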