[SOLVED] Garbage Collection fails

manuelkamp

New Member
May 10, 2022
Hi, I have been using PBS 2.4-1 for two years now. I recently (this week) added another backup solution for clients (UrBackup, installed on the PBS machine), because I wanted to use the available disk space in my ZFS pool. For that, I made a new folder "urBackup" in the root path of the ZFS datastore (where the vm and ct folders from PBS are as well).

Now the last GC failed with the following error: "TASK ERROR: cannot continue garbage-collection safely, permission denied on: "/mnt/datastore/Backup/urBackup/clientname""

How can I tell the GC to ignore the urBackup folder (since UrBackup does its own "GC")? I do not want to change any permissions/owners on the folder, because of the unwanted side effect that PBS might then run its GC in this folder, which I do not want. Changing permissions there may also cause problems for UrBackup, possibly even breaking client restores.
 
You shouldn't put any other data in your PBS datastore folder. If you want to store other stuff on that ZFS pool, use datasets. So don't create folders in the root of your pool; instead create, for example, a "pbs" and an "urbackup" dataset on your pool. Then enable maintenance mode for the datastore, move the whole content of your datastore folder to the new dataset (YourPool/pbs), edit /etc/proxmox-backup/datastore.cfg to match the new path, and then disable maintenance mode again.
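A minimal sketch of those steps, assuming the pool is mounted at /mnt/datastore/Backup and the datastore is also named "Backup" (adjust names and paths to your setup; check the exact maintenance-mode CLI syntax for your PBS version, setting it in the GUI under Datastore -> Options works as well):

Code:
# 1. Create dedicated datasets (pool/dataset names are assumptions)
zfs create Backup/pbs
zfs create Backup/urbackup

# 2. Put the datastore into maintenance mode (or use the GUI)
proxmox-backup-manager datastore update Backup --maintenance-mode offline

# 3. Move the datastore content, including the hidden .chunks directory,
#    into the new dataset
mv /mnt/datastore/Backup/.chunks /mnt/datastore/Backup/vm /mnt/datastore/Backup/ct /mnt/datastore/Backup/pbs/

# 4. Edit /etc/proxmox-backup/datastore.cfg so the datastore path points to
#    /mnt/datastore/Backup/pbs

# 5. Clear maintenance mode again (or do it in the GUI)
proxmox-backup-manager datastore update Backup --delete maintenance-mode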
 
@Dunuin: I've never moved datastores that big around; how long does that take for ~30 TB? At the current rate it would take several hours to move everything. I have now cancelled the process because it takes too long, since no backups or verifications are possible during the move. Is there a faster way to move it than on file system level (mv)?

@dignus: but "outside" is not on the ZFS pool; outside is the mirrored system SSD, which is not big enough to store client backups.
 
Is there a faster way to move it than on file system level (mv)?
No. When moving it on block level using "zfs send | zfs recv" you would also move the UrBackup data, as you can only replicate the whole filesystem. That is another reason why you should create datasets instead of plain folders when storing different kinds of data.
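For illustration, block-level replication always works on a snapshot of the whole filesystem, so the urBackup folder would be part of the stream (pool/dataset names here are just placeholders):

Code:
# replicate the whole filesystem at block level; everything in it,
# including the urBackup folder, goes along with it
zfs snapshot Backup@migrate
zfs send Backup@migrate | zfs recv OtherPool/pbs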
@dignus: but "outside" is not on the ZFS pool; outside is the mirrored system SSD, which is not big enough to store client backups.
Yes, that's why you need to move that datastore, so you can store other data outside of it while still staying on the pool.
 
So in short, there is no way around taking my pool offline for almost 20 hours (quick calculation)? Well, then that would not be the route I take. Is there no way to exclude the urBackup folder from PBS GC? In the worst case, what does it mean if I am not able to run any GC anymore?

Creating a dataset for UrBackup and moving only those files (which would be way faster) is not an option? (Leaving the PBS data where it is now?)

PS: I also saw an increase in overall pool usage during the mv, which basically means I need at least as much free disk space as is already used? Then I am blocked at 69% disk usage...
 
IMO, you need a new dataset dedicated to UrBackup, and then move its content there.
Moreover, IIRC, UrBackup uses the ZFS snapshot feature. Check whether any snapshots were taken of the PBS datastore.
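For example, assuming the pool is called "Backup" (adjust the name):

Code:
# list all snapshots on the pool recursively and look for any that
# cover the dataset/path holding the PBS datastore
zfs list -t snapshot -r Backup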
 
In the worst case, what does it mean if I am not able to run any GC anymore?
GC frees up space. If you can't run the GC, nothing will ever be deleted and your pool will keep growing until it reaches 100%, at which point it fails and becomes read-only.

Creating a dataset for UrBackup and moving only those files (which would be way faster) is not an option? (Leaving the PBS data where it is now?)
That is an option, but you would need to mount that dataset at an alternate path, otherwise it would still be mounted as a folder inside your PBS datastore (see the example below the quoted man page). See the mountpoint property:
mountpoint=path|none|legacy
  Controls the mount point used for this file system. See the Mount Points section of zfsconcepts(7) for more information on how this property is used.
  When the mountpoint property is changed for a file system, the file system and any children that inherit the mount point are unmounted. If the new value is legacy, then they remain unmounted. Otherwise, they are automatically remounted in the new location if the property was previously legacy or none, or if they were mounted before the property was changed. In addition, any shared file systems are unshared and shared in the new location.
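So, for example, something like this (dataset name and target path are assumptions, adjust to your setup):

Code:
# create the UrBackup dataset with a mountpoint outside of the PBS datastore
zfs create -o mountpoint=/mnt/urbackup Backup/urbackup

# or, for an already existing dataset, change its mountpoint later
zfs set mountpoint=/mnt/urbackup Backup/urbackup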

PS: I also saw an increase in overall pool usage during the mv, which basically means I need at least as much free disk space as is already used? Then I am blocked at 69% disk usage...
Keep thin-provisioning in mind. Deleted data won't be freed immediately. By default, the pool only gets trimmed once per month. You could run a zpool trim YourPool to force a trim and immediately free up some space.
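For example (pool name is a placeholder):

Code:
# start a manual trim and watch its progress
zpool trim YourPool
zpool status -t YourPool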
 
Thanks, I'll do that with the different mountpoint for the urbackup dataset and leave PBS untouched. Regarding trim, yes, you are totally right, thanks for that hint too (I may well have forgotten it again by the time I need it :) )
 
