Proxmox 9: LXC backup fails when host directory is mounted (backup hangs at snapshot)

rofrofrof1

Since upgrading to Proxmox 9, LXC backups can no longer be created if a host directory is mounted into the container as a mount point.
The backup hangs at snapshot creation, and the container stays frozen until the backup is cancelled.

When I detach mp0, the backup runs as expected. I can also create a snapshot of the ZFS dataset from the shell at any time.

Does anyone have any idea how I can get the backups working again?

Code:
INFO: Starting Backup of VM 213 (lxc)
INFO: Backup started at 2025-09-13 01:22:03
INFO: status = running
INFO: CT Name: *************
INFO: including mount point rootfs ('/') in backup
INFO: excluding bind mount point mp0 ('/backups') from backup (not a volume)
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'

Container config:

Code:
arch: amd64
cpulimit: 8
features: nesting=1,mknod=1
hostname: *************
memory: 32768
mp0: /mnt/pve/freenas/plesk_backup/213,mp=/backups
nameserver: 127.0.0.1
net0: *************
onboot: 1
ostype: centos
rootfs: local-zfs:subvol-213-disk-0,size=500G
swap: 512
unprivileged: 1

Package versions:

Code:
proxmox-ve: 9.0.0 (running kernel: 6.14.11-2-pve)
pve-manager: 9.0.9 (running version: 9.0.9/117b893e0e6a4fee)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.14.11-2-pve-signed: 6.14.11-2
proxmox-kernel-6.14: 6.14.11-2
proxmox-kernel-6.14.11-1-pve-signed: 6.14.11-1
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
amd64-microcode: 3.20250311.1
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx10
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.6
libpve-cluster-perl: 9.0.6
libpve-common-perl: 9.0.10
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.4
libpve-network-perl: 1.1.7
libpve-rs-perl: 0.10.10
libpve-storage-perl: 9.0.13
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-1
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.14-1
proxmox-backup-file-restore: 4.0.14-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.1.2
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.1
proxmox-widget-toolkit: 5.0.5
pve-cluster: 9.0.6
pve-container: 6.0.11
pve-docs: 9.0.8
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.3
pve-firmware: 3.16-4
pve-ha-manager: 5.0.4
pve-i18n: 3.6.0
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.21
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.4-pve1
 
I am also seeing the same issue, though the specifics differ a bit. In my case the backup doesn't freeze the container, but it fails on the first container that has a mount point, and subsequent containers then fail as well.

Error messages in log:

Code:
201: 2025-09-13 06:46:52 INFO: Starting Backup of VM 201 (lxc)
201: 2025-09-13 06:46:52 INFO: status = running
201: 2025-09-13 06:46:52 INFO: CT Name: fileserver1
201: 2025-09-13 06:46:52 INFO: including mount point rootfs ('/') in backup
201: 2025-09-13 06:46:52 INFO: including mount point mp0 ('/rust') in backup
201: 2025-09-13 06:46:52 INFO: found old vzdump snapshot (force removal)
201: 2025-09-13 06:46:53 INFO: backup mode: snapshot
201: 2025-09-13 06:46:53 INFO: ionice priority: 7
201: 2025-09-13 06:46:53 INFO: suspend vm to make snapshot
201: 2025-09-13 06:46:53 INFO: create storage snapshot 'vzdump'
201: 2025-09-13 06:46:54 INFO: resume vm
201: 2025-09-13 06:46:54 INFO: guest is online again after 1 seconds
201: 2025-09-13 06:46:54 ERROR: Backup of VM 201 failed - command 'mount -o ro -t zfs rust/subvol-201-disk-0@vzdump /mnt/vzsnap0//rust' failed: exit code 2

202: 2025-09-13 06:46:54 INFO: Starting Backup of VM 202 (lxc)
202: 2025-09-13 06:46:54 INFO: status = running
202: 2025-09-13 06:46:54 INFO: CT Name: nextcloud
202: 2025-09-13 06:46:54 INFO: including mount point rootfs ('/') in backup
202: 2025-09-13 06:46:54 INFO: including mount point mp0 ('/nextclouddata') in backup
202: 2025-09-13 06:46:54 ERROR: Backup of VM 202 failed - mount point '/mnt/vzsnap0' not empty

203: 2025-09-13 06:46:54 INFO: Starting Backup of VM 203 (lxc)
203: 2025-09-13 06:46:54 INFO: status = stopped
203: 2025-09-13 06:46:54 INFO: backup mode: stop
203: 2025-09-13 06:46:54 INFO: ionice priority: 7
203: 2025-09-13 06:46:54 INFO: CT Name: pihole
203: 2025-09-13 06:46:54 INFO: including mount point rootfs ('/') in backup
203: 2025-09-13 06:46:54 ERROR: Backup of VM 203 failed - mount point '/mnt/vzsnap0' not empty
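For what it's worth, the "not empty" cascade can be simulated with plain directories, no ZFS involved (paths here are made up; the real check happens in vzdump before it mounts the snapshot):

```shell
# Simulation of why one failed run poisons the rest: the failed mount attempt
# leaves its mkdir'd mount-point subtree behind in the staging directory, and
# the next backup's "is the staging dir empty?" check then fails.
stage=/tmp/vzsnap-demo          # stands in for /mnt/vzsnap0
rm -rf "$stage"; mkdir -p "$stage"
mkdir -p "$stage/rust"          # leftover mount-point dir from the failed run
# the next run effectively performs this check before mounting rootfs:
if [ -n "$(ls -A "$stage")" ]; then
    echo "mount point '$stage' not empty"
fi
```

If that matches, any later container fails immediately with the same "not empty" error regardless of whether it has an mp0 itself, which is exactly what the 203 log above shows.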

First container that fails and causes downstream failures:

Code:
arch: amd64
cores: 2
features: nesting=1
hostname: fileserver1
memory: 4092
mp0: rust:subvol-201-disk-0,mp=/rust,backup=1,size=20000G
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.9.1,hwaddr=BC:24:11:BE:8E:3E,ip=192.168.9.3/24,type=veth
onboot: 1
ostype: debian
parent: vzdump
rootfs: ssd:subvol-201-disk-0,size=8G
swap: 4092
tags: container;lan
unprivileged: 1

[vzdump]
#vzdump backup snapshot
arch: amd64
cores: 2
features: nesting=1
hostname: fileserver1
memory: 4092
mp0: rust:subvol-201-disk-0,mp=/rust,backup=1,size=20000G
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.9.1,hwaddr=BC:24:11:BE:8E:3E,ip=192.168.9.3/24,type=veth
onboot: 1
ostype: debian
rootfs: ssd:subvol-201-disk-0,size=8G
snaptime: 1757853105
swap: 4092
tags: container;lan
unprivileged: 1

Package versions:

Code:
proxmox-ve: 9.0.0 (running kernel: 6.14.11-2-pve)
pve-manager: 9.0.9 (running version: 9.0.9/117b893e0e6a4fee)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.14.11-2-pve-signed: 6.14.11-2
proxmox-kernel-6.14: 6.14.11-2
proxmox-kernel-6.14.11-1-pve-signed: 6.14.11-1
proxmox-kernel-6.8.12-14-pve-signed: 6.8.12-14
proxmox-kernel-6.8: 6.8.12-14
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph-fuse: 19.2.3-pve1
corosync: 3.1.9-pve2
criu: 4.1.1-1
frr-pythontools: 10.3.1-1+pve4
ifupdown2: 3.3.0-1+pmx10
intel-microcode: 3.20250512.1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.3
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.6
libpve-cluster-perl: 9.0.6
libpve-common-perl: 9.0.10
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.4
libpve-network-perl: 1.1.7
libpve-rs-perl: 0.10.10
libpve-storage-perl: 9.0.13
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-1
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
openvswitch-switch: 3.5.0-1+b1
proxmox-backup-client: 4.0.14-1
proxmox-backup-file-restore: 4.0.14-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.1.2
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.2
proxmox-mini-journalreader: 1.6
proxmox-offline-mirror-helper: 0.7.1
proxmox-widget-toolkit: 5.0.5
pve-cluster: 9.0.6
pve-container: 6.0.11
pve-docs: 9.0.8
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.3
pve-firmware: 3.16-4
pve-ha-manager: 5.0.4
pve-i18n: 3.6.0
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.21
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.4-pve1
 
Same issue here, with LXC containers and ZFS. Here the first LXC container with an additional mount point (MP0) gets backed up successfully, but all subsequent backups of containers with an MP0 fail.

My thought on this topic: is it possible that the LXC backup "forgets" to unmount MP0 of the first LXC after its backup run?
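If that hypothesis is right, it should be visible right after the first backup finishes. A sketch of what to check (assuming the default staging directory /mnt/vzsnap0; this only reads /proc/mounts, nothing Proxmox-specific):

```shell
# Check whether anything is still mounted under the vzdump staging directory
# and whether the directory was left non-empty after the first backup run.
stage=/mnt/vzsnap0
if grep -q " $stage" /proc/mounts; then
    echo "stale mount(s) under $stage:"
    grep " $stage" /proc/mounts
else
    echo "nothing mounted under $stage"
fi
ls -A "$stage" 2>/dev/null    # any leftover entries block the next run
```

A stale mount or leftover directory here between runs would confirm the cleanup is being skipped.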

Proxmox 8.4 and PBS 3 worked fine for me. The error appeared after upgrading to PVE 9 and PBS 4.

First LXC backup of container with MP0:
Code:
INFO: Starting Backup of VM 114 (lxc)
INFO: Backup started at 2025-09-18 07:40:23
INFO: status = running
INFO: CT Name: checkmk
INFO: including mount point rootfs ('/') in backup
INFO: including mount point mp0 ('/opt') in backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: suspend vm to make snapshot
INFO: create storage snapshot 'vzdump'
INFO: resume vm
INFO: guest is online again after <1 seconds
INFO: creating Proxmox Backup Server archive 'ct/114/2025-09-18T05:40:23Z'
INFO: set max number of entries in memory for file-based backups to 1048576
INFO: run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/var/tmp/vzdumptmp23744_114/etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --include-dev /mnt/vzsnap0/./opt --skip-lost-and-found --exclude=/tmp/?* --exclude=/var/tmp/?* --exclude=/var/run/?*.pid --backup-type ct --backup-id 114 --backup-time 1758174023 --change-detection-mode metadata --entries-max 1048576 --repository backup@pbs!backup@192.168.x.y:backup
INFO: Starting backup: ct/114/2025-09-18T05:40:23Z   
INFO: Client name: pvehome   
INFO: Starting backup protocol: Thu Sep 18 07:40:23 2025   
INFO: Downloading previous manifest (Wed Sep 17 23:08:02 2025)   
INFO: Upload config file '/var/tmp/vzdumptmp23744_114/etc/vzdump/pct.conf' to 'backup@pbs!backup@192.168.x.y:8007:backup' as pct.conf.blob   
INFO: Upload directory '/mnt/vzsnap0' to 'backup@pbs!backup@192.168.x.y:8007:backup' as root.mpxar.didx   
INFO: Using previous index as metadata reference for 'root.mpxar.didx'   
INFO: Change detection summary:
INFO:  - 76289 total files (7 hardlinks)
INFO:  - 71916 unchanged, reusable files with 2.853 GiB data
INFO:  - 4366 changed or non-reusable files with 632.381 MiB data
INFO:  - 59.497 MiB padding in 67 partially reused chunks
INFO: root.mpxar: had to backup 12.409 MiB of 12.409 MiB (compressed 2.194 MiB) in 12.47 s (average 1019.324 KiB/s)
INFO: root.ppxar: reused 2.911 GiB from previous snapshot for unchanged files (1607 chunks)
INFO: root.ppxar: had to backup 610.399 MiB of 3.529 GiB (compressed 78.99 MiB) in 12.60 s (average 48.438 MiB/s)
INFO: root.ppxar: backup was done incrementally, reused 2.933 GiB (83.1%)
INFO: Duration: 13.46s   
INFO: End Time: Thu Sep 18 07:40:36 2025   
INFO: adding notes to backup
INFO: cleanup temporary 'vzdump' snapshot
INFO: Finished Backup of VM 114 (00:00:14)
INFO: Backup finished at 2025-09-18 07:40:37

Subsequent LXCs with MP0 that are affected by the error:
Code:
INFO: Starting Backup of VM 115 (lxc)
INFO: Backup started at 2025-09-18 07:40:37
INFO: status = running
INFO: CT Name: dawarich
INFO: including mount point rootfs ('/') in backup
INFO: including mount point mp0 ('/opt') in backup
INFO: found old vzdump snapshot (force removal)
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: suspend vm to make snapshot
INFO: create storage snapshot 'vzdump'
zfs_mount_at() failed: directory is not empty
umount: /mnt/vzsnap0/opt: not mounted.
command 'umount -l -d /mnt/vzsnap0/opt' failed: exit code 32
INFO: resume vm
INFO: guest is online again after <1 seconds
ERROR: Backup of VM 115 failed - command 'mount -o ro -o acl -t zfs rpool/data/subvol-115-disk-1@vzdump /mnt/vzsnap0//opt' failed: exit code 2
INFO: Failed at 2025-09-18 07:40:37
INFO: Starting Backup of VM 116 (lxc)
INFO: Backup started at 2025-09-18 07:40:37
INFO: status = running
INFO: CT Name: zmb-ad
INFO: including mount point rootfs ('/') in backup
INFO: including mount point mp0 ('/backup') in backup
ERROR: Backup of VM 116 failed - mount point '/mnt/vzsnap0' not empty
INFO: Failed at 2025-09-18 07:40:37
INFO: Backup job finished with errors
 
Same issue here, with LXC containers and ZFS. Here the first LXC container with an additional mount point (MP0) gets backed up successfully, but all subsequent backups of containers with an MP0 fail.

My thought on this topic: is it possible that the LXC backup "forgets" to unmount MP0 of the first LXC after its backup run?
Interesting. This is a slightly different manifestation of what I'm seeing on my side. In my case, the first LXC (which backs up successfully) does not have a secondary mount point, just the root disk:

Code:
200: 2025-09-14 09:20:16 INFO: Starting Backup of VM 200 (lxc)
200: 2025-09-14 09:20:16 INFO: status = running
200: 2025-09-14 09:20:16 INFO: CT Name: unifi
200: 2025-09-14 09:20:16 INFO: including mount point rootfs ('/') in backup
200: 2025-09-14 09:20:16 INFO: backup mode: snapshot
200: 2025-09-14 09:20:16 INFO: ionice priority: 7
200: 2025-09-14 09:20:16 INFO: create storage snapshot 'vzdump'
200: 2025-09-14 09:20:16 INFO: creating Proxmox Backup Server archive 'ct/200/2025-09-14T13:20:16Z'
200: 2025-09-14 09:20:16 INFO: set max number of entries in memory for file-based backups to 1048576
200: 2025-09-14 09:20:16 INFO: run: lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client backup --crypt-mode=none pct.conf:/rust/tmpbackup/vzdumptmp713724_200//etc/vzdump/pct.conf root.pxar:/mnt/vzsnap0 --include-dev /mnt/vzsnap0/./ --skip-lost-and-found --exclude=/tmp/?* --exclude=/var/tmp/?* --exclude=/var/run/?*.pid --backup-type ct --backup-id 200 --backup-time 1757856016 --entries-max 1048576 --repository t430backup@pbs@192.168.19.4:vm_backup
200: 2025-09-14 09:20:16 INFO: Starting backup: ct/200/2025-09-14T13:20:16Z   
200: 2025-09-14 09:20:16 INFO: Client name: t430-pve   
200: 2025-09-14 09:20:16 INFO: Starting backup protocol: Sun Sep 14 09:20:16 2025   
200: 2025-09-14 09:20:16 INFO: Downloading previous manifest (Sun Sep 14 08:29:40 2025)   
200: 2025-09-14 09:20:16 INFO: Upload config file '/rust/tmpbackup/vzdumptmp713724_200//etc/vzdump/pct.conf' to 't430backup@pbs@192.168.19.4:8007:vm_backup' as pct.conf.blob   
200: 2025-09-14 09:20:16 INFO: Upload directory '/mnt/vzsnap0' to 't430backup@pbs@192.168.19.4:8007:vm_backup' as root.pxar.didx   
200: 2025-09-14 09:20:36 INFO: root.pxar: had to backup 123.359 MiB of 3.389 GiB (compressed 33.361 MiB) in 19.82 s (average 6.223 MiB/s)
200: 2025-09-14 09:20:36 INFO: root.pxar: backup was done incrementally, reused 3.268 GiB (96.4%)
200: 2025-09-14 09:20:36 INFO: Uploaded backup catalog (740.955 KiB)
200: 2025-09-14 09:20:37 INFO: Duration: 20.79s   
200: 2025-09-14 09:20:37 INFO: End Time: Sun Sep 14 09:20:37 2025   
200: 2025-09-14 09:20:37 INFO: adding notes to backup
200: 2025-09-14 09:20:38 INFO: cleanup temporary 'vzdump' snapshot
200: 2025-09-14 09:20:38 INFO: Finished Backup of VM 200 (00:00:22)

But in your case both mount points are backed up successfully, so it does appear to be some kind of hiccup in the way the mount is (or is not) being cleaned up.
 
In my case it must have to do with MP0. I have 5 LXC containers:

112
113
114+MP0
115+MP0
116+MP0

Backup runs fine on 112-114 and fails on the last two LXCs. So my assumption, as already mentioned: maybe the backup "forgets" to unmount MP0 of 114 after backing up that container.
 
I'm also having this problem:

Code:
INFO: starting new backup job: vzdump 108 --notification-mode notification-system --storage pbs-local-main --mode snapshot --node pve --remove 0 --notes-template '{{guestname}}'
INFO: Starting Backup of VM 108 (lxc)
INFO: Backup started at 2025-09-21 10:51:41
INFO: status = running
INFO: CT Name: cloud
INFO: including mount point rootfs ('/') in backup
INFO: including mount point mp1 ('/var/www/nextcloud-data') in backup
INFO: found old vzdump snapshot (force removal)
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: suspend vm to make snapshot
INFO: create storage snapshot 'vzdump'
zfs_mount_at() failed: directory is not empty
umount: /mnt/vzsnap0/var/www/nextcloud-data: not mounted.
command 'umount -l -d /mnt/vzsnap0/var/www/nextcloud-data' failed: exit code 32
INFO: resume vm
INFO: guest is online again after 1 seconds
ERROR: Backup of VM 108 failed - command 'mount -o ro -t zfs spinningrust/subvol-108-disk-0@vzdump /mnt/vzsnap0//var/www/nextcloud-data' failed: exit code 2
INFO: Failed at 2025-09-21 10:51:42
INFO: Backup job finished with errors

After this container fails, the others fail to back up too.
The snapshot that was taken is not deleted after the failure.

Did some digging:

Code:
root@pve:~# mount -o ro -t zfs spinningrust/subvol-108-disk-0@vzdump /mnt/vzsnap0//var/www/nextcloud-data
filesystem 'spinningrust/subvol-108-disk-0@vzdump' cannot be mounted at '/mnt/vzsnap0//var/www/nextcloud-data' due to canonicalization error: No such file or directory

But if I create the directory manually it works:

Code:
mkdir -p /mnt/vzsnap0/var/www/nextcloud-data
mount -o ro -t zfs spinningrust/subvol-108-disk-0@vzdump /mnt/vzsnap0//var/www/nextcloud-data

Also, why the double "//"? That only creates problems.
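The double slash itself should be harmless; path resolution collapses it. The likely trigger is the canonicalization step: the error message suggests ZFS runs something like realpath(3) on the mount target, and that fails with "No such file or directory" when a path component is missing, which is why the mkdir -p fixes it. A quick demonstration with realpath(1) as a stand-in (the /tmp path is made up so nothing real is touched):

```shell
# realpath collapses "//" but, with -e, fails when a component is missing --
# the same "canonicalization error: No such file or directory" zfs reports.
demo=/tmp/vzsnap-realpath-demo
rm -rf "$demo"
realpath -e "$demo//var/www/nextcloud-data" || echo "realpath failed as expected"
mkdir -p "$demo/var/www/nextcloud-data"
realpath -e "$demo//var/www/nextcloud-data"   # now resolves, with a single slash
```

So the "//" is cosmetic; the actual bug appears to be that vzdump no longer creates the mount-point directory inside the snapshot staging area before mounting.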

If I take a backup in stop mode, it works; it also works if I exclude the MP.
I also tried moving the data to another MP, thinking it was corrupted in some way, but that did not help.
This was working fine on PVE 8 and PBS 3.
 
Also, why the double "//"? That only creates problems.

I noticed this as well and was wondering what it "means" and whether it could be a contributing factor.

Would love it if someone on the Proxmox team could weigh in on this issue. It seems like a problem that would affect a pretty wide swath of users.

As a side note, two days ago I upgraded my PBS instance from 3 to 4 in the hope something would change, but the problem is still occurring.
 
Unfortunately, I am no longer sure whether it is due to our mount points. Yesterday, the backup got stuck at the same point again:
Code:
INFO: create storage snapshot 'vzdump'
Something is preventing the snapshot from being created.

I stopped the backup and restarted it (without detaching the mp0). The second run went smoothly.