Backup of privileged LXC fails

Mrt12

Good day,
I have a newly created privileged LXC with a large dataset of ~3 TB attached as a mount point.
I also installed Proxmox Backup Server and back up all my VMs and LXCs to it. It works fine for everything except this particular LXC! It refuses to back up in "Suspend" mode; it only works when I use "Stop" mode, but I would prefer suspend.
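
For reference, the container config looks roughly like this (a sketch; the storage ID and sizes are placeholders, and that the rootfs sits on subvol-106-disk-0 is a guess — only the mp0 dataset name and /srv come from the log below):

# /etc/pve/lxc/106.conf (abridged sketch)
rootfs: local-zfs:subvol-106-disk-0,size=8G
mp0: local-zfs:subvol-106-disk-1,mp=/srv,size=3T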

So could someone explain to me why it refuses to use suspend mode for this particular LXC? Here is my backup job output:



INFO: starting new backup job: vzdump 106 --notification-mode notification-system --remove 0 --node pve0 --notes-template '{{guestname}}' --storage backup --mode snapshot
INFO: Starting Backup of VM 106 (lxc)
INFO: Backup started at 2025-08-11 19:08:58
INFO: status = running
INFO: CT Name: fileserver
INFO: including mount point rootfs ('/') in backup
INFO: including mount point mp0 ('/srv') in backup
INFO: found old vzdump snapshot (force removal)
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: suspend vm to make snapshot
INFO: create storage snapshot 'vzdump'
zfs_mount_at() failed: directory is not empty
umount: /mnt/vzsnap0/srv: not mounted.
command 'umount -l -d /mnt/vzsnap0/srv' failed: exit code 32
INFO: resume vm
INFO: guest is online again after <1 seconds
ERROR: Backup of VM 106 failed - command 'mount -o ro -t zfs tank/userdata/subvol-106-disk-1@vzdump /mnt/vzsnap0//srv' failed: exit code 2
INFO: Failed at 2025-08-11 19:08:59
INFO: Backup job finished with errors
INFO: notified via target `mail-to-root`
TASK ERROR: job errors


What I find particularly interesting is that it wants to mount the ZFS dataset under /mnt/vzsnap0. I checked, and this directory exists and is empty, so the mount should succeed, but it still refuses. Why?

Note that this LXC is a privileged one.
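
To illustrate, this is roughly the mount sequence from the log (a sketch; that the rootfs lives on subvol-106-disk-0 is my assumption):

zfs snapshot tank/userdata/subvol-106-disk-0@vzdump
zfs snapshot tank/userdata/subvol-106-disk-1@vzdump
# the rootfs snapshot gets mounted first
mount -o ro -t zfs tank/userdata/subvol-106-disk-0@vzdump /mnt/vzsnap0
# this is the mount that fails: ZFS refuses to mount onto a non-empty
# directory, so the error means /mnt/vzsnap0/srv, i.e. /srv *inside the
# mounted rootfs snapshot*, is not empty
mount -o ro -t zfs tank/userdata/subvol-106-disk-1@vzdump /mnt/vzsnap0/srv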
 
INFO: Starting Backup of VM 180 (lxc)
INFO: Backup started at 2025-08-12 04:17:21
INFO: status = running
INFO: CT Name: mail
INFO: including mount point rootfs ('/') in backup
INFO: including mount point mp0 ('/opt/kerio') in backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: suspend vm to make snapshot
INFO: create storage snapshot 'vzdump'
zfs_mount_at() failed: directory is not empty
umount: /mnt/vzsnap0/opt/kerio: not mounted.
command 'umount -l -d /mnt/vzsnap0/opt/kerio' failed: exit code 32
INFO: resume vm
INFO: guest is online again after 3 seconds
ERROR: Backup of VM 180 failed - command 'mount -o ro -t zfs Storage-Default/subvol-180-disk-1@vzdump /mnt/vzsnap0//opt/kerio' failed: exit code 2
INFO: Failed at 2025-08-12 04:17:26
INFO: Backup job finished with errors
INFO: notified via target MailServer-Stoss
TASK ERROR: job errors


I have the same issue on all running containers, no difference whether privileged or unprivileged.
If the container is stopped, everything is fine, because there is no need for a snapshot.

Anyone else?
 
Found the issue.
rm -rf /Storage-Default/subvol-180-disk-0/opt/kerio/* -> fixed it.
The mount point directory on the root disk was not empty.

In my case there were, for whatever reason, empty folders in there; dunno why, maybe left over from the beginning, before I created a subvolume for that mount point inside the container...

But for anyone else: don't simply delete the contents of your mount point on the rootfs subvolume, check first.
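
For example, check roughly like this first (the path is from my case):

# look at what is actually in there before deleting anything
ls -laR /Storage-Default/subvol-180-disk-0/opt/kerio/
# if it really is just leftover empty directories, this removes only those
find /Storage-Default/subvol-180-disk-0/opt/kerio/ -mindepth 1 -type d -empty -delete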

Cheers :-)
 
I fixed my problem by excluding the one mount point that has 2 TB of data in it, so the backup effectively only includes the root disk. This way, the backup runs fine.
I also realise that the Proxmox Backup Server brings no advantage when it comes to incremental backups of containers. For VMs it is very fast thanks to the dirty bitmap, but for LXCs there is no real advantage. For example, the initial backup of my 2 TB LXC took 4 hours. I expected the subsequent incremental backups to take much less time if only little data has changed. However, every backup of the 2 TB LXC takes the same time; I think this is because it still needs to somehow scan for changes.
So I excluded the 2 TB disk from the LXC backup and do that backup manually with zfs send. Fun fact: with zfs send, I do incremental backups of my entire 14 TB pool in a few seconds, because it is a true incremental backup. In the old days, when I used something like rsync, it took at least an hour to scan all files, even if only a few had changed. I assume the LXC backup works in a similar way: it needs to read the entire dataset, break it up into chunks, check them for changes, and transfer only the changes. So IMO for LXCs this is not really useful, as there is no time advantage, and it is much more efficient to do the backups with a custom script using zfs send / receive.
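
Roughly what the zfs send approach looks like (a sketch; pool, snapshot, and target names are just examples):

# take a new recursive snapshot of the whole pool
zfs snapshot -r tank@backup-new
# send only the delta since the previous snapshot to the backup machine
zfs send -R -i tank@backup-old tank@backup-new | ssh backuphost zfs receive -u -F backuppool/tank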

Or am I completely wrong?
 
You are only wrong on one point: you didn't fix the key issue of why it fails, you simply excluded the volume instead.

Otherwise I agree with everything else. LXC backups are terrible.

ZFS send/receive is truly amazing, but there is no GUI... so it is not a solution for the not-so-Linux-affine admins in our company, for example.