Proxmox Backup Server 4.0 released!

I did actually mention the enterprise repos in my long-winded post there. ;)

I disabled the enterprise repo in the GUI (on the repository page) before I started the update process, but didn't mention that explicitly. (You can see the "disabled" tag in my config files.)

I'd recommend doing that before starting the process if you don't have a subscription, just to make things simpler.

I still converted the enterprise repo config over to the new deb822 format, just so it would be in the format PBS expects. The old format might break in the future.
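For reference, a deb822-style enterprise source for PBS 4 would look roughly like this - the file name, the "trixie" suite and the keyring path here are just the stock defaults, so adjust to whatever your upgrade created; the Enabled: false line is what keeps it configured but inactive:
Code:
# /etc/apt/sources.list.d/pbs-enterprise.sources  (path/name assumed)
Types: deb
URIs: https://enterprise.proxmox.com/debian/pbs
Suites: trixie
Components: pbs-enterprise
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
Enabled: false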
Hah, well, that teaches me for stopping short. However, I did have them disabled already, which is what I found so odd. I don't know if the sources.list update command re-enabled them for some reason, but I definitely had them disabled.
 
If you followed the official guide, yes, it would have re-enabled the Enterprise repo when you created the new deb822-based enterprise source file.

I made sure it was disabled again just before I did the last "apt update && apt dist-upgrade" command.
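If you want to double-check before that last step, something along these lines works (it assumes the default deb822 file locations):
Code:
# show the deb822 enterprise source and its Enabled flag (no flag means enabled)
grep -H -i -A6 'enterprise.proxmox.com' /etc/apt/sources.list.d/*.sources
# then pull the new package lists and upgrade
apt update && apt dist-upgrade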
 
Yeah, again, it was odd - I didn't update the source files until just a few minutes ago. ;) Before that, I had left it alone for a couple of days because of the ARC cache issue I originally posted about. Today I was going through and updating most of my VMs/LXCs and updated the sources, then went back and did PBS.
 
Hi, with PBS 4 there is an issue with backing up running LXC containers.

When the container is running, I get this:
Code:
INFO: starting new backup job: vzdump 180 --notes-template '{{guestname}}' --notification-mode notification-system --storage Backup-SAS --remove 0 --mode snapshot --node pve-hdr
INFO: Starting Backup of VM 180 (lxc)
INFO: Backup started at 2025-08-12 04:22:34
INFO: status = running
INFO: CT Name: mail
INFO: including mount point rootfs ('/') in backup
INFO: including mount point mp0 ('/opt/kerio') in backup
INFO: found old vzdump snapshot (force removal)
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: suspend vm to make snapshot
INFO: create storage snapshot 'vzdump'
zfs_mount_at() failed: directory is not empty
umount: /mnt/vzsnap0/opt/kerio: not mounted.
command 'umount -l -d /mnt/vzsnap0/opt/kerio' failed: exit code 32
INFO: resume vm
INFO: guest is online again after 11 seconds
ERROR: Backup of VM 180 failed - command 'mount -o ro -t zfs Storage-Default/subvol-180-disk-1@vzdump /mnt/vzsnap0//opt/kerio' failed: exit code 2
INFO: Failed at 2025-08-12 04:22:45
INFO: Backup job finished with errors
INFO: notified via target `MailServer-Stoss`
TASK ERROR: job errors

zfs get overlay Storage-Default/subvol-180-disk-0
NAME                               PROPERTY  VALUE  SOURCE
Storage-Default/subvol-180-disk-0  overlay   on     default

zfs get overlay Storage-Default/subvol-180-disk-1
NAME                               PROPERTY  VALUE  SOURCE
Storage-Default/subvol-180-disk-1  overlay   on     default

Did I miss something? Maybe it's already been discussed somewhere...

EDIT: Fixed, see two posts below.
 
Code:
INFO: create storage snapshot 'vzdump'
zfs_mount_at() failed: directory is not empty
umount: /mnt/vzsnap0/opt/kerio: not mounted.
command 'umount -l -d /mnt/vzsnap0/opt/kerio' failed: exit code 32

Another report:
 
Found the issue.
rm -rf /Storage-Default/subvol-180-disk-0/opt/kerio/* -> fixed it.
The mountpoint directory on the root disk was not empty.

In my case there were, for whatever reason, empty folders in there - no idea why, maybe left over from before I created a subvolume for that mountpoint inside the container...

But for anyone else: don't simply delete the contents of your mountpoint on the rootfs subvolume, check first.

Cheers :-)
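For anyone in the same spot, a rough way to check first (the dataset and path names are taken from my log above - substitute your own):
Code:
# make sure nothing is actually mounted at the mountpoint path on the rootfs subvolume
findmnt /Storage-Default/subvol-180-disk-0/opt/kerio
# look at what is really in there before removing anything
ls -la /Storage-Default/subvol-180-disk-0/opt/kerio
# compare with the real mp0 dataset, which holds the live data
zfs list -o name,mountpoint,used Storage-Default/subvol-180-disk-1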
 
But I found another pretty hefty bug:

If a backup runs, the snapshot of an LXC container always gets mounted under /mnt/vzsnap0.
So in a case like the one above, where the snapshot doesn't get removed and the directory isn't deleted (because of the error), all following LXC backups in that job will fail, because /mnt/vzsnap0 already exists.

What if you run backups of multiple LXC containers at the same time? There needs to be more intelligence here.

Code:
INFO: starting new backup job: vzdump 180 --notes-template '{{guestname}}' --notification-mode notification-system --storage Backup-SAS --remove 0 --mode snapshot --node pve-hdr
INFO: Starting Backup of VM 180 (lxc)
INFO: Backup started at 2025-08-12 04:22:34
INFO: status = running
INFO: CT Name: mail
INFO: including mount point rootfs ('/') in backup
INFO: including mount point mp0 ('/opt/kerio') in backup
INFO: found old vzdump snapshot (force removal)
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: suspend vm to make snapshot
INFO: create storage snapshot 'vzdump'
zfs_mount_at() failed: directory is not empty
umount: /mnt/vzsnap0/opt/kerio: not mounted.
command 'umount -l -d /mnt/vzsnap0/opt/kerio' failed: exit code 32
INFO: resume vm
INFO: guest is online again after 11 seconds
ERROR: Backup of VM 180 failed - command 'mount -o ro -t zfs Storage-Default/subvol-180-disk-1@vzdump /mnt/vzsnap0//opt/kerio' failed: exit code 2
INFO: Failed at 2025-08-12 04:22:45
INFO: Backup job finished with errors
INFO: notified via target `MailServer-Stoss`
TASK ERROR: job errors

What I mean is, because of this, all following LXC backups failed:
VMID  Name          Status  Time    Size         Filename
114   example-DC01  ok      26s     80.004 GiB   vm/114/2025-08-12T01:35:01Z
116   example-DC    ok      10s     80 GiB       vm/116/2025-08-12T01:35:27Z
127   example-MT    ok      1m 1s   1000 GiB     vm/127/2025-08-12T01:35:38Z
129   example-2023  ok      21s     399 GiB      vm/129/2025-08-12T01:36:39Z
131   example-Neu   ok      2s      32 GiB       vm/131/2025-08-12T01:37:00Z
132   example-SRV   ok      10s     120.004 GiB  vm/132/2025-08-12T01:37:02Z
134   exdienste     ok      1s      30 GiB       vm/134/2025-08-12T01:37:12Z
136   example-KH    ok      2s      30 GiB       vm/136/2025-08-12T01:37:13Z
137   unifi         ok      17s     5.879 GiB    ct/137/2025-08-12T01:37:15Z
144   E-Manager     ok      1m 57s  350 GiB      vm/144/2025-08-12T01:37:32Z
145   EMx           ok      24s     300 GiB      vm/145/2025-08-12T01:39:29Z
153   docker-1      ok      1m 8s   28.282 GiB   ct/153/2025-08-12T01:39:53Z
157   ex-Dienste    ok      1s      32 GiB       vm/157/2025-08-12T01:41:01Z
158   Example-2019  ok      3m 8s   200 GiB      vm/158/2025-08-12T01:41:02Z
163   EX-Dienste    ok      3s      50 GiB       vm/163/2025-08-12T01:44:10Z
164   wazuh         ok      2m 3s   69.936 GiB   ct/164/2025-08-12T01:44:13Z
170   revproxy-int  ok      14s     3.066 GiB    ct/170/2025-08-12T01:46:16Z
175   printing2019  ok      27s     60 GiB       vm/175/2025-08-12T01:46:30Z
176   Pr-example    ok      10s     200 GiB      vm/176/2025-08-12T01:46:57Z
178   Toens         ok      17s     32.004 GiB   vm/178/2025-08-12T01:47:07Z
180   mail          err     5s      0 B
182   exa-ng        ok      17s     250.004 GiB  vm/182/2025-08-12T01:47:29Z
1106  TS-example    ok      39s     250.004 GiB  vm/1106/2025-08-12T01:47:46Z
2206  dmz-proxy     err     <0.1s   0 B
2208  dmz-docker    err     <0.1s   0 B
2210  nextcloud     err     <0.1s   0 B
5001  ovh-mgmt      err     <0.1s   0 B
9138  DokuEx        ok      2m 10s  2.002 TiB    vm/9138/2025-08-12T01:48:25Z
9710  Example-W7    ok      9s      350 GiB      vm/9710/2025-08-12T01:50:36Z
9711  Example-DB    ok      13s     120 GiB      vm/9711/2025-08-12T01:50:45Z

Total running time: 15m 57s
Total size: 5.979 TiB


In this case, 180 left /mnt/vzsnap0 not empty because of the error, and all the following "err" results happened because the snapshot folder /mnt/vzsnap0 already existed...
So those snapshot folders should get a random, or at least dedicated, name, like /mnt/vzsnap-180-0.

Cheers.
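Until something like that lands, a stale /mnt/vzsnap0 from a failed run can be cleaned up manually along these lines - double-check the paths and snapshot names on your own system before running anything:
Code:
# confirm nothing is still mounted below the vzdump snapshot directory
findmnt -R /mnt/vzsnap0
# remove the leftover mount directories (rmdir refuses if they are not empty)
rmdir /mnt/vzsnap0/opt/kerio /mnt/vzsnap0
# check for and remove stale 'vzdump' ZFS snapshots of the container disks
zfs list -t snapshot | grep vzdump
zfs destroy Storage-Default/subvol-180-disk-0@vzdump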
 
B2 seems to use a lot of Class C transactions - I'm seeing over double the number of transactions compared to rclone's `--fast-list` feature. Is there something in the works to reduce the number of Class C calls? The specific transaction is the `list file names` API call.
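For comparison, this is roughly how the rclone side of that comparison looks (bucket and path are placeholders) - `--fast-list` does the listing recursively in far fewer calls, at the cost of memory:
Code:
# default: one list call per directory level
rclone lsf b2:my-bucket/pbs-datastore
# --fast-list: recursive listing in as few calls as possible
rclone lsf --fast-list b2:my-bucket/pbs-datastore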
 
Regarding the new memory behaviour caused by ARC discussed earlier in this thread: I completely get it and have zero issues with how it works, but what I do have an issue with is the way it's presented in the GUI.

[screenshot: 1754978014332.png]

This is screaming "you have a RAM issue", no matter how you look at it. Even htop presents this better:

[screenshot: 1754978293842.png]

I hope the GUI can be revised at some point soon, or this subject will keep popping up.
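For anyone who wants to confirm what that graph is actually counting, the ARC size can be read directly on the host (standard OpenZFS paths and tools, nothing PBS-specific):
Code:
# current ARC size in bytes
awk '/^size/ {print $3}' /proc/spl/kstat/zfs/arcstats
# overall memory view; ARC is counted as 'used', not as cache
free -h
# more detail, if zfsutils is installed
arc_summary | head -n 40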
 

It's not possible to run multiple backups in parallel on a single node; there's a node-global lock protecting against that. Incomplete error handling can always happen and require manual intervention, especially when interacting with storage. Please file a bug on Bugzilla with the relevant details, maybe we can improve the error handling there!
 
Code:
INFO: Starting Backup of VM 108 (lxc)
INFO: Backup started at 2025-08-12 11:49:23
INFO: status = running
INFO: CT Name: docker01.servers.quolke.net
INFO: including mount point rootfs ('/') in backup
INFO: including mount point mp0 ('/dmopp') in backup
INFO: including mount point mp1 ('/var/lib/docker') in backup
INFO: found old vzdump snapshot (force removal)
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: suspend vm to make snapshot
INFO: create storage snapshot 'vzdump'
zfs_mount_at() failed: directory is not empty
umount: /mnt/vzsnap0/dmopp: not mounted.
command 'umount -l -d /mnt/vzsnap0/dmopp' failed: exit code 32
INFO: resume vm
INFO: guest is online again after 11 seconds
ERROR: Backup of VM 108 failed - command 'mount -o ro -t zfs rpool/data/subvol-108-disk-1@vzdump /mnt/vzsnap0//dmopp' failed: exit code 2


Having that issue with a Docker LXC since the upgrade to PBS 4 (and Proxmox to 9).


After manually removing the snapshot:

Code:
INFO: Starting Backup of VM 108 (lxc)
INFO: Backup started at 2025-08-12 11:52:52
INFO: status = running
INFO: CT Name: docker01.servers.quolke.net
INFO: including mount point rootfs ('/') in backup
INFO: including mount point mp0 ('/dmopp') in backup
INFO: including mount point mp1 ('/var/lib/docker') in backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: suspend vm to make snapshot
INFO: create storage snapshot 'vzdump'
zfs_mount_at() failed: directory is not empty
umount: /mnt/vzsnap0/dmopp: not mounted.
command 'umount -l -d /mnt/vzsnap0/dmopp' failed: exit code 32
INFO: resume vm
INFO: guest is online again after 10 seconds
ERROR: Backup of VM 108 failed - command 'mount -o ro -t zfs rpool/data/subvol-108-disk-1@vzdump /mnt/vzsnap0//dmopp' failed: exit code 2
 
See higher up in the thread (and don't run Docker in containers, this is not supported for a reason!).
 
I read the answers but couldn't find a fitting solution. And yes, I know Docker on LXC has limits, but for my personal use case it's fine :)

Update: Had to delete the subfolder (wherever it comes from) AND shut down/start up the LXC again. Now the backup is fine. Well, fixed is fixed :D
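In case someone else hits it, the steps boiled down to roughly this - the rootfs dataset path is a guess based on my log above, so adjust it to your layout and check the directory contents before removing anything:
Code:
# stop the container so the mountpoint path is not in use
pct stop 108
# inspect the leftover directory on the rootfs subvolume before touching it
ls -la /rpool/data/subvol-108-disk-0/dmopp
# remove it only if it is empty (rmdir refuses otherwise)
rmdir /rpool/data/subvol-108-disk-0/dmopp
# start the container again and re-run the backup
pct start 108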
 
I'm seeing VERY high CPU usage since the update to 4.0 - it started after the last round of updates. This doesn't seem normal?

[screenshot: 1755001521263.png]



[screenshot: 1755001430673.png]
 
v2 of that patch got applied already! Which version of PBS are you running at the moment?
 
4.0.13 should contain the fix; it's not yet in pbs-enterprise, "just" in pbs-no-subscription.
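To check whether a given box already has it (standard PBS tooling; enable or disable the repos as discussed earlier in the thread):
Code:
# show installed Proxmox Backup Server package versions
proxmox-backup-manager versions
# pull the fixed package once it is available in your configured repository
apt update && apt full-upgrade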
 