Garbage Collect Jobs

flowbergit

After upgrading to Proxmox Backup Server 3.3 I get failed GC jobs with the status: TASK ERROR: Aborting GC for safety reasons: nested datastores not allowed: 'BACKUP350' already in "/mnt/datastore/BACKUP350".

I know that I have some tenants set up as separate datastores inside this one, but how do I resolve this?
 
Forum members would need this info in order to assist.
Some problems are thorny to solve. This is probably not one of those.
I expect you'll get an explicit answer here. Just ... help us help you.

zpool status

zfs list

df -h

cat /etc/proxmox-backup/datastore.cfg


BTW ... do get this fixed. If GC isn't running, your datastore may fill up.
 
Code:
root@backup350:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   32G     0   32G   0% /dev
tmpfs                 6.3G  4.1M  6.3G   1% /run
rpool/ROOT/pbs-1      900G   42G  858G   5% /
tmpfs                  32G     0   32G   0% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
rpool                 858G  128K  858G   1% /rpool
rpool/ROOT            858G  128K  858G   1% /rpool/ROOT
BACKUP350             267T  114T  154T  43% /mnt/datastore/BACKUP350
BACKUP350/u044228535   94G   85G  9.2G  91% /mnt/datastore/BACKUP350/u044228535
BACKUP350/u302298350  373G   72G  301G  20% /mnt/datastore/BACKUP350/u302298350
BACKUP350/u386916143   94G   65M   94G   1% /mnt/datastore/BACKUP350/u386916143
BACKUP350/u118989474  1.9T  1.4T  448G  76% /mnt/datastore/BACKUP350/u118989474
BACKUP350/POC_GDS     932G  384K  932G   1% /mnt/datastore/BACKUP350/POC_GDS
BACKUP350/u073196258   94G  4.3G   89G   5% /mnt/datastore/BACKUP350/u073196258
BACKUP350/u205127408   94G   65M   94G   1% /mnt/datastore/BACKUP350/u205127408
BACKUP350/u615719492  932G   49G  883G   6% /mnt/datastore/BACKUP350/u615719492
BACKUP350/u417906176   94G   65M   94G   1% /mnt/datastore/BACKUP350/u417906176
BACKUP350/u824497261  280G  217G   64G  78% /mnt/datastore/BACKUP350/u824497261
BACKUP350/u352717488  280G  280G     0 100% /mnt/datastore/BACKUP350/u352717488
BACKUP350/u450323955   94G   94G     0 100% /mnt/datastore/BACKUP350/u450323955
BACKUP350/u541983027  187G  187G     0 100% /mnt/datastore/BACKUP350/u541983027
BACKUP350/u732816845  932G  630G  302G  68% /mnt/datastore/BACKUP350/u732816845
BACKUP350/u635995932   94G   60G   35G  64% /mnt/datastore/BACKUP350/u635995932
BACKUP350/u402549110  5.5T  5.5T     0 100% /mnt/datastore/BACKUP350/u402549110
tmpfs                 6.3G     0  6.3G   0% /run/user/0
root@backup350:~# zpool status
  pool: BACKUP350
 state: ONLINE
  scan: scrub repaired 0B in 2 days 18:24:59 with 0 errors on Tue Dec 10 18:49:00 2024
config:

        NAME                        STATE     READ WRITE CKSUM
        BACKUP350                   ONLINE       0     0     0
          raidz2-0                  ONLINE       0     0     0
            wwn-0x5000c50094c3c547  ONLINE       0     0     0
            wwn-0x5000c50094ad0d5b  ONLINE       0     0     0
            wwn-0x5000c50094b6ec63  ONLINE       0     0     0
            wwn-0x5000c50094b6eb23  ONLINE       0     0     0
            wwn-0x5000c50094c3aceb  ONLINE       0     0     0
            wwn-0x5000c50094c3a54f  ONLINE       0     0     0
            wwn-0x5000c50094b6de13  ONLINE       0     0     0
            wwn-0x5000c50094b2a1ef  ONLINE       0     0     0
            wwn-0x5000c50094a7967b  ONLINE       0     0     0
            wwn-0x5000c50094b6e3ab  ONLINE       0     0     0
            wwn-0x5000c50094c3ad57  ONLINE       0     0     0
            wwn-0x5000c50094b6c397  ONLINE       0     0     0
            wwn-0x5000c50094c3ad83  ONLINE       0     0     0
            wwn-0x5000c50094c39ed3  ONLINE       0     0     0
            wwn-0x5000c50094c3b0a3  ONLINE       0     0     0
            wwn-0x5000c50094c3b0eb  ONLINE       0     0     0
            wwn-0x5000c5009441982b  ONLINE       0     0     0
            wwn-0x5000c50094b6d99f  ONLINE       0     0     0
            wwn-0x5000c50094b6901f  ONLINE       0     0     0
            wwn-0x5000c50094b6e8e7  ONLINE       0     0     0
            wwn-0x5000c50094b29d13  ONLINE       0     0     0
            wwn-0x5000c50086b1c11b  ONLINE       0     0     0
            wwn-0x5000c50094b6e8a3  ONLINE       0     0     0
            wwn-0x5000c50094c39fc3  ONLINE       0     0     0
            wwn-0x5000c50094c3bd37  ONLINE       0     0     0
            wwn-0x5000c50094c3c12b  ONLINE       0     0     0
            wwn-0x5000c50094b6cdb3  ONLINE       0     0     0
            wwn-0x5000c50094c3ac43  ONLINE       0     0     0
            wwn-0x5000c50094c3c597  ONLINE       0     0     0
            wwn-0x5000c50094b27e37  ONLINE       0     0     0
            wwn-0x5000c50094c3c38f  ONLINE       0     0     0
            wwn-0x5000c50094b6e7ef  ONLINE       0     0     0
            wwn-0x5000c50094b6e587  ONLINE       0     0     0
            wwn-0x5000c50094c3b027  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:02:14 with 0 errors on Sun Dec  8 00:26:21 2024
config:

        NAME                                                   STATE     READ WRITE CKSUM
        rpool                                                  ONLINE       0     0     0
          mirror-0                                             ONLINE       0     0     0
            ata-Samsung_SSD_870_EVO_1TB_S75CNX0W927499K-part3  ONLINE       0     0     0
            ata-Samsung_SSD_870_EVO_1TB_S75CNX0W926993E-part3  ONLINE       0     0     0

errors: No known data errors
root@backup350:~# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
BACKUP350              122T   153T   113T  /mnt/datastore/BACKUP350
BACKUP350/POC_GDS      277K   931G   277K  /mnt/datastore/BACKUP350/POC_GDS
BACKUP350/u044228535  84.0G  9.12G  84.0G  /mnt/datastore/BACKUP350/u044228535
BACKUP350/u073196258  4.22G  88.9G  4.22G  /mnt/datastore/BACKUP350/u073196258
BACKUP350/u118989474  1.38T   447G  1.38T  /mnt/datastore/BACKUP350/u118989474
BACKUP350/u205127408  64.5M  93.1G  64.5M  /mnt/datastore/BACKUP350/u205127408
BACKUP350/u302298350  71.6G   301G  71.6G  /mnt/datastore/BACKUP350/u302298350
BACKUP350/u352717488   279G     0B   279G  /mnt/datastore/BACKUP350/u352717488
BACKUP350/u386916143  64.5M  93.1G  64.5M  /mnt/datastore/BACKUP350/u386916143
BACKUP350/u402549110  5.46T     0B  5.46T  /mnt/datastore/BACKUP350/u402549110
BACKUP350/u417906176  64.5M  93.1G  64.5M  /mnt/datastore/BACKUP350/u417906176
BACKUP350/u450323955  93.1G     0B  93.1G  /mnt/datastore/BACKUP350/u450323955
BACKUP350/u541983027   186G     0B   186G  /mnt/datastore/BACKUP350/u541983027
BACKUP350/u615719492  48.4G   883G  48.4G  /mnt/datastore/BACKUP350/u615719492
BACKUP350/u635995932  59.1G  34.0G  59.1G  /mnt/datastore/BACKUP350/u635995932
BACKUP350/u732816845   630G   302G   630G  /mnt/datastore/BACKUP350/u732816845
BACKUP350/u824497261   216G  63.1G   216G  /mnt/datastore/BACKUP350/u824497261
rpool                 41.6G   858G    96K  /rpool
rpool/ROOT            41.5G   858G    96K  /rpool/ROOT
rpool/ROOT/pbs-1      41.5G   858G  41.5G  /
root@backup350:~# cat /etc/proxmox-backup/datastore.cfg
datastore: BACKUP350
        path /mnt/datastore/BACKUP350

datastore: u352717488
        gc-schedule 14:00
        path /mnt/datastore/BACKUP350/u352717488

datastore: u635995932
        gc-schedule *:0/30
        path /mnt/datastore/BACKUP350/u635995932

datastore: u824497261
        gc-schedule sat 18:15
        path /mnt/datastore/BACKUP350/u824497261

datastore: u044228535
        gc-schedule 18:00
        path /mnt/datastore/BACKUP350/u044228535
        verify-new true

datastore: u450323955
        path /mnt/datastore/BACKUP350/u450323955

datastore: u402549110
        gc-schedule daily
        path /mnt/datastore/BACKUP350/u402549110

datastore: u386916143
        path /mnt/datastore/BACKUP350/u386916143

datastore: u205127408
        path /mnt/datastore/BACKUP350/u205127408

datastore: u615719492
        gc-schedule hourly
        path /mnt/datastore/BACKUP350/u615719492

datastore: u732816845
        gc-schedule 0/3:00
        path /mnt/datastore/BACKUP350/u732816845

datastore: u541983027
        path /mnt/datastore/BACKUP350/u541983027

datastore: u118989474
        gc-schedule sat 18:15
        path /mnt/datastore/BACKUP350/u118989474
        verify-new true

datastore: u073196258
        gc-schedule 0/3:00
        path /mnt/datastore/BACKUP350/u073196258

datastore: u302298350
        gc-schedule 0/3:00
        path /mnt/datastore/BACKUP350/u302298350

datastore: u417906176
        path /mnt/datastore/BACKUP350/u417906176
 
You have nested datastores, which is not supported. PBS was recently updated to detect this and abort GC for safety.

datastore: BACKUP350

Move that datastore to another location if it is used, or remove it if it isn't.
 
Let me try rephrasing that.


This bit encapsulates all the following bits.

datastore: BACKUP350
path /mnt/datastore/BACKUP350


See? The next one is inside it.

datastore: u352717488
gc-schedule 14:00
path /mnt/datastore/BACKUP350/u352717488


So your problem is that first one.
If you don't actually need that root datastore, then remove it.
If you do need to back up stuff to that root directory ... well, like fabian said, you need to move it.
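
As a minimal sketch of the "remove it" route (an assumption on my part: that the root BACKUP350 datastore holds no backups of its own, which the thread doesn't confirm), dropping only its configuration entry is enough; everything on disk, including the nested datastores, stays untouched.

Bash:
# Sketch only: removes just the datastore.cfg entry for the parent datastore;
# nothing under /mnt/datastore/BACKUP350 is deleted.
proxmox-backup-manager datastore remove BACKUP350

# The nested entries (u352717488, u635995932, ...) stay in
# /etc/proxmox-backup/datastore.cfg and keep working as before.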
 
Good morning, everyone. I have one PBS server where I back up 4 different offices/servers, each company with its own folder containing only its respective backup files. So, as mentioned above, is it no longer possible to use it like this? Or did I misunderstand?

NOTE: It is giving the following error:

[screenshot of the error]
Code:
cat /etc/proxmox-backup/datastore.cfg
datastore: pbs
        gc-schedule 9:00
        path /mnt/datastore/pbs

datastore: w-tech
        comment
        gc-schedule 10:00
        path /mnt/datastore/pbs/w-tech

datastore: mimonet
        comment
        gc-schedule 11:00
        path /mnt/datastore/pbs/mimonet

datastore: allmar
        comment
        gc-schedule 21:00
        path /mnt/datastore/pbs/allmar

datastore: rbl
        comment
        gc-schedule 15:00
        path /mnt/datastore/pbs/rbl
 
Good morning, everyone. I have one PBS server where I back up 4 different offices/servers, each company with its own folder containing only its respective backup files. So, as mentioned above, is it no longer possible to use it like this? Or did I misunderstand?

You can use one datastore with multiple namespaces (e.g. one namespace for each company), but not one datastore under another.
Are your datastores on different physical media (e.g. each on a different disk)? Then I would add each one separately under /mnt/datastore, e.g. /mnt/datastore/pbs, /mnt/datastore/rbl, /mnt/datastore/allmar, etc. If they are all on the same physical disk, I would have just one datastore /mnt/datastore/pbs and set up several namespaces under it.
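
As a rough illustration of what that ends up looking like (the names are taken from the posted config, the tree itself is my sketch): namespaces are plain subdirectories under ns/ inside the one datastore and share its chunk store, so nothing is nested in the datastore.cfg sense.

Bash:
# Illustrative layout for a single datastore 'pbs' with one namespace per company.
# All namespaces deduplicate into the same .chunks directory.
#
# /mnt/datastore/pbs/
# ├── .chunks/          # one shared, deduplicated chunk store
# ├── ns/
# │   ├── w-tech/
# │   ├── mimonet/
# │   ├── allmar/
# │   └── rbl/
# └── vm/ ct/ host/     # backups made in the root namespace
#
# Each office's Proxmox VE storage entry (or proxmox-backup-client --ns) then
# selects its own namespace on the same datastore.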
 
Thanks for responding.
I only have one 3 TB disk, and I would like to have several separate spaces or folders. It was working like this: each company had a name, and each server was configured for its own folder, without having access to any other folder.

It stopped working and now it just gives this error:
[error screenshots attached]
 
I hope you have not just one disk but at least a mirror (with ZFS or hardware RAID).
However: I would add the needed namespaces to the PBS datastore like this:
[screenshot: adding namespaces to the datastore in the GUI]

Afterwards I would set up a sync job to sync the data from the old datastore into the namespace of the same name:
[screenshot: sync job configuration]

Afterwards I would check whether every backup from the old datastore is now in the new namespace; then I would remove the old datastore and the sync job.

I wonder how you ended up with such a weird setup; why didn't you use namespaces right from the start?
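
For the "check if every backup arrived" step, one possible sketch (the repository strings and the --ns option here are assumptions to adapt, not something taken from the screenshots) is to list the snapshots on both sides and compare:

Bash:
# Sketch only: compare the old datastore with the matching namespace in the
# main datastore before deleting the old datastore and the sync job.
proxmox-backup-client snapshot list --repository root@pam@localhost:w-tech
proxmox-backup-client snapshot list --repository root@pam@localhost:pbs --ns w-tech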
 
I did it following a YouTube tutorial; it worked until recently. I didn't know it was set up wrong.
 
I did it following a YouTube tutorial; it worked until recently. I didn't know it was set up wrong.
Never blindly follow YouTube or Reddit tutorials; they tend to be outdated or flat-out wrong. The manual on pbs.proxmox.com is quite understandable and the most reliable source of information.
 
Thanks for the tips; they've opened up a new path. From what I understand, I'll have to synchronize the servers to PBS again, with their folders set up correctly.
 
If you have another PBS or disk, you could first set up the namespaces on it and sync your data over. Then set up everything again on your current PBS and sync the data back.
If you don't have one, you could rent a vserver with enough storage space and sync the data to it. After rebuilding your PBS you would pull the data back from the vserver. Afterwards, destroy the vserver to limit the costs (cloud providers charge by time).
If you don't have another PBS or disk, please rethink your backup strategy:
https://www.veeam.com/blog/321-backup-rule.html
https://pbs.proxmox.com/docs/storage.html#the-3-2-1-rule-with-proxmox-backup-server
 
I'm going to reinstall my PBS from scratch and then redo everything from scratch. Thanks again for the information. Have a great 2025.
 
Hi. PBS 3.3 is installed. Backups from several servers are configured, and monitoring is set up in Zabbix. I'm seeing something I can't explain: every night at 00:00 Zabbix raises a warning that disk utilization is at 100%, and it drops back to the usual level after about 15 minutes, although in PBS itself the disk usage shown for that time looks normal. Could this be because a garbage collect job and a prune job are configured? If so, where can I read about it, and is this behavior normal?
 
Hello @all,

We also faced the "nested datastores" problem after updating our infrastructure to the latest 3.3.3 version.
Since we do not have enough space on the production backup server (70% in use) to "simply" move the main datastore into a ZFS dataset, I am trying to find a solution for this.

After doing the following steps on a test machine and rebooting it, PBS seems to work properly, and the verify and garbage collect jobs also run fine (for now).

Bash:
root@pbs:~# zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
pbs24       20.3G  18.1G   160K  /mnt/datastore/pbs24
pbs24/sub1   287M  9.72G   287M  /mnt/datastore/pbs24/sub1
pbs24/sub2   290M  9.72G   290M  /mnt/datastore/pbs24/sub2

# Stopped the Proxmox-Backup Services
systemctl stop proxmox-backup proxmox-backup-proxy.service
# Created a ZFS dataset
zfs create pbs24/main

# Enter directory and set owner
cd /mnt/datastore/pbs24/
chown -R backup:backup main

# synced the files with rsync (caution: removes the source files!!!!)
rsync -a --remove-source-files .chunks ./main/
rsync -a --remove-source-files ./ct ./main/
rsync -a --remove-source-files ./ns ./main/
rsync -a --remove-source-files ./vm ./main/
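
# Note (not part of the original steps): --remove-source-files deletes the copied
# files but leaves the now-empty directory trees (.chunks, ct, ns, vm) behind at
# the old top level; once the move is verified they could be cleaned up with e.g.
#   find .chunks ct ns vm -depth -type d -empty -delete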

# moved the static files
mv .gc-status main/
mv .lock main/

# edited the datastore configuration
vi /etc/proxmox-backup/datastore.cfg

[From]
datastore: pbs24
        gc-schedule *:0/5
        path /mnt/datastore/pbs24/
[To]
datastore: pbs24
        gc-schedule *:0/5
        path /mnt/datastore/pbs24/main
[End]
# restarted the backup-services and tested gui
systemctl start proxmox-backup proxmox-backup-proxy.service

root@pbs:~# zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
pbs24       20.3G  18.1G   160K  /mnt/datastore/pbs24
pbs24/main   306M  18.1G   306M  /mnt/datastore/pbs24/main
pbs24/sub1   287M  9.72G   287M  /mnt/datastore/pbs24/sub1
pbs24/sub2   290M  9.72G   290M  /mnt/datastore/pbs24/sub2

# node rebooted
reboot

[screenshot: garbage collect task log]

What do you think of this solution? Did I miss something?
Any improvements?

Best Regards
Tom
 
basically what you did - move the metadata dirs and the .chunks dir and the state files to the new path and change the config accordingly ;)
 
I have also run into this problem since the upgrade.

Code:
### zfs list

NAME         USED  AVAIL  REFER  MOUNTPOINT
backuppool  10.2T  31.1T  10.2T  /mnt/datastore/backuppool


### zpool status

pool: backuppool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
    The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 07:58:58 with 0 errors on Sun May 11 08:22:59 2025
config:

    NAME                        STATE     READ WRITE CKSUM
    backuppool                  ONLINE       0     0     0
      raidz2-0                  ONLINE       0     0     0
        wwn-0x5000cca0bbe2f42b  ONLINE       0     0     0
        wwn-0x5000cca0bbe54ad8  ONLINE       0     0     0
        wwn-0x5000cca0bbe2ca65  ONLINE       0     0     0
        wwn-0x5000cca0bbe54a32  ONLINE       0     0     0
        wwn-0x5000cca0bbe2cdc9  ONLINE       0     0     0
        wwn-0x5000cca0bbe49cb8  ONLINE       0     0     0
        wwn-0x5000cca0bbe2cae9  ONLINE       0     0     0
        wwn-0x5000cca0bbe26887  ONLINE       0     0     0

errors: No known data errors

Code:
### cat /etc/proxmox-backup/datastore.cfg

datastore: backuppool
    path /mnt/datastore/backuppool

datastore: edv_vm_backups
    comment Alle EDV VMs
    gc-schedule daily
    notify gc=error,sync=error,verify=error
    notify-user martin@pbs
    path /mnt/datastore/backuppool/edv_vm_backups

datastore: martin_backups
    comment Testmaschinen und Privat
    gc-schedule daily
    notify gc=error,sync=error,verify=error
    notify-user martin@pbs
    path /mnt/datastore/backuppool/martin_backups

datastore: nc_data_filebackup
    comment Nextcloud Daten auf File Ebene
    gc-schedule daily
    notify gc=error,sync=error,verify=error
    notify-user martin@pbs
    path /mnt/datastore/backuppool/nc_data_filebackup

datastore: nfs_export
    comment
    gc-schedule daily
    path /mnt/datastore/backuppool/nfs_export

datastore: www_fileadmin
    comment fileadmin und mysql www
    gc-schedule daily
    path /mnt/datastore/backuppool/www_fileadmin


So I guess my problem is that I added a datastore named `backuppool` mounted at the root of my ZFS pool. I think I should delete the `backuppool` datastore from within the Proxmox Backup Server GUI; the only thing I am afraid of is that this would delete all the nested datastores' contents as well.

How do I solve this issue? Currently I cannot do any backups, because I cannot add any datastore that is not "nested": I have to put it inside my `/mnt/datastore/backuppool`, otherwise I would be sitting on the local disk with very limited space.

Code:
### df -h

Filesystem            Size  Used Avail Use% Mounted on
udev                   32G     0   32G   0% /dev
tmpfs                 6.3G  2.6M  6.3G   1% /run
/dev/mapper/pbs-root  131G  9.9G  114G   8% /
tmpfs                  32G     0   32G   0% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
efivarfs              304K  236K   64K  79% /sys/firmware/efi/efivars
/dev/sda2             511M  324K  511M   1% /boot/efi
backuppool             42T   11T   32T  25% /mnt/datastore/backuppool
tmpfs                 6.3G     0  6.3G   0% /run/user/0


Help is very much appreciated. I guess my main question is: can I safely delete the `backuppool` datastore from within the GUI without deleting all existing backups in the other datastores?
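
For what it's worth, a hedged sketch of how one might check this before touching anything (generic commands, not something confirmed in this thread): look at what the root datastore actually contains, and keep in mind that removing only the configuration entry does not delete any files on disk.

Bash:
# Sketch only: check whether the root 'backuppool' datastore holds backup groups
# of its own (vm/, ct/, host/, ns/) next to the nested datastore directories.
ls -la /mnt/datastore/backuppool
proxmox-backup-manager datastore list
# If it holds nothing of its own, dropping just its configuration entry (as
# sketched earlier in the thread) resolves the nesting error without touching
# the data of the other datastores.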