Problem backing up LXC

Justin Toler

Just recently I've run into a permission issue backing up LXC containers to PBS. Actual VMs back up with no issue. I really don't understand where this permission error is suddenly coming from.

Code:
INFO: starting new backup job: vzdump 106 --storage backups --notification-mode notification-system --node pve --remove 0 --notes-template '{{guestname}}' --mode snapshot
INFO: Starting Backup of VM 106 (lxc)
INFO: Backup started at 2025-09-12 04:05:46
INFO: status = running
INFO: CT Name: nginxproxymanager
INFO: including mount point rootfs ('/') in backup
ERROR: Backup of VM 106 failed - mkdir /mnt/vzsnap0: Permission denied at /usr/share/perl5/PVE/VZDump/LXC.pm line 198.
INFO: Failed at 2025-09-12 04:05:46
INFO: Backup job finished with errors
INFO: notified via target `mail-to-root`
INFO: notified via target `proxmobo-webhook`
TASK ERROR: job errors
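Same error if I try it by hand as root on the host, so it isn't something specific to the backup job:

Code:
root@pve:~# mkdir /mnt/vzsnap0
mkdir: cannot create directory '/mnt/vzsnap0': Permission denied

For comparison, a VM backup to the same storage goes through fine: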

Code:
INFO: starting new backup job: vzdump 116 --mode snapshot --remove 0 --notes-template '{{guestname}}' --storage backups --node pve --notification-mode notification-system
INFO: Starting Backup of VM 116 (qemu)
INFO: Backup started at 2025-09-12 04:02:02
INFO: status = running
INFO: VM Name: nextcloud
INFO: include disk 'scsi0' 'local-zfs:vm-116-disk-1' 15G
INFO: exclude disk 'efidisk0' 'local-zfs:vm-116-disk-0' (efidisk but no OMVF BIOS)
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating Proxmox Backup Server archive 'vm/116/2025-09-12T08:02:02Z'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task 'f79768ba-299b-4880-8ed2-b070430a733d'
INFO: resuming VM again
INFO: scsi0: dirty-bitmap status: created new
INFO:   5% (880.0 MiB of 15.0 GiB) in 3s, read: 293.3 MiB/s, write: 288.0 MiB/s
INFO: 100% (15.0 GiB of 15.0 GiB) in 33s, read: 598.7 MiB/s, write: 9.3 MiB/s
INFO: backup is sparse: 7.91 GiB (52%) total zero data
INFO: backup was done incrementally, reused 7.93 GiB (52%)
INFO: transferred 15.00 GiB in 33 seconds (465.5 MiB/s)
INFO: adding notes to backup
INFO: Finished Backup of VM 116 (00:00:35)
INFO: Backup finished at 2025-09-12 04:02:37
INFO: Backup job finished successfully
INFO: notified via target `proxmobo-webhook`
INFO: notified via target `mail-to-root`
TASK OK

Running:

Code:
proxmox-ve: 9.0.0 (running kernel: 6.14.11-1-pve)
pve-manager: not correctly installed (running version: 9.0.7/2bc61ed0ded1d877)
proxmox-kernel-6.14.11-1-pve-signed: 6.14.11-1
proxmox-kernel-6.14: 6.14.11-1
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
 
Well, I just noticed this: pve-manager is not correctly installed?


EDIT:

I got it corrected. Now shows:
Code:
proxmox-ve: 9.0.0 (running kernel: 6.14.11-1-pve)
pve-manager: 9.0.9 (running version: 9.0.9/117b893e0e6a4fee)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-6.14.11-1-pve-signed: 6.14.11-1
proxmox-kernel-6.14: 6.14.11-1
proxmox-kernel-6.14.8-2-pve-signed: 6.14.8-2
ceph-fuse: 19.2.3-pve2

Still having the issue though.
 
Hi!

Could you please post the output of findmnt, ls -l /, and ls -l /mnt?
 
Code:
root@pve:~# findmnt
TARGET                                        SOURCE                       FSTYPE      OPTIONS
/                                             rpool/ROOT/pve-1             zfs         rw,noatime,xattr,noacl,casesensitive
├─/sys                                        sysfs                        sysfs       rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/security                      securityfs                   securityfs  rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/cgroup                            cgroup2                      cgroup2     rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/pstore                            pstore                       pstore      rw,nosuid,nodev,noexec,relatime
│ ├─/sys/firmware/efi/efivars                 efivarfs                     efivarfs    rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/bpf                               bpf                          bpf         rw,nosuid,nodev,noexec,relatime,mode=700
│ ├─/sys/kernel/debug                         debugfs                      debugfs     rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/tracing                       tracefs                      tracefs     rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/fuse/connections                  fusectl                      fusectl     rw,nosuid,nodev,noexec,relatime
│ └─/sys/kernel/config                        configfs                     configfs    rw,nosuid,nodev,noexec,relatime
├─/proc                                       proc                         proc        rw,relatime
│ └─/proc/sys/fs/binfmt_misc                  systemd-1                    autofs      rw,relatime,fd=37,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=21572
│   └─/proc/sys/fs/binfmt_misc                binfmt_misc                  binfmt_misc rw,nosuid,nodev,noexec,relatime
├─/dev                                        udev                         devtmpfs    rw,nosuid,relatime,size=32820120k,nr_inodes=8205030,mode=755,inode64
│ ├─/dev/pts                                  devpts                       devpts      rw,nosuid,noexec,relatime,gid=5,mode=600,ptmxmode=000
│ ├─/dev/shm                                  tmpfs                        tmpfs       rw,nosuid,nodev,inode64
│ ├─/dev/mqueue                               mqueue                       mqueue      rw,nosuid,nodev,noexec,relatime
│ └─/dev/hugepages                            hugetlbfs                    hugetlbfs   rw,nosuid,nodev,relatime,pagesize=2M
├─/run                                        tmpfs                        tmpfs       rw,nosuid,nodev,noexec,relatime,size=6571568k,mode=755,inode64
│ ├─/run/lock                                 tmpfs                        tmpfs       rw,nosuid,nodev,noexec,relatime,size=5120k,inode64
│ ├─/run/credentials/systemd-journald.service tmpfs                        tmpfs       ro,nosuid,nodev,noexec,relatime,nosymfollow,size=1024k,nr_inodes=1024,mode=700,inode64,noswap
│ ├─/run/rpc_pipefs                           sunrpc                       rpc_pipefs  rw,relatime
│ ├─/run/credentials/getty@tty1.service       tmpfs                        tmpfs       ro,nosuid,nodev,noexec,relatime,nosymfollow,size=1024k,nr_inodes=1024,mode=700,inode64,noswap
│ └─/run/user/0                               tmpfs                        tmpfs       rw,nosuid,nodev,relatime,size=6571564k,nr_inodes=1642891,mode=700,inode64
├─/tmp                                        tmpfs                        tmpfs       rw,nosuid,nodev,nr_inodes=1048576,inode64
├─/rpool                                      rpool                        zfs         rw,noatime,xattr,noacl,casesensitive
│ ├─/rpool/ROOT                               rpool/ROOT                   zfs         rw,noatime,xattr,noacl,casesensitive
│ └─/rpool/data                               rpool/data                   zfs         rw,noatime,xattr,noacl,casesensitive
│   ├─/rpool/data/subvol-101-disk-0           rpool/data/subvol-101-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
│   ├─/rpool/data/subvol-107-disk-0           rpool/data/subvol-107-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
│   ├─/rpool/data/subvol-114-disk-0           rpool/data/subvol-114-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
│   ├─/rpool/data/subvol-121-disk-0           rpool/data/subvol-121-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
│   ├─/rpool/data/subvol-102-disk-0           rpool/data/subvol-102-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
│   ├─/rpool/data/subvol-115-disk-0           rpool/data/subvol-115-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
│   ├─/rpool/data/subvol-104-disk-0           rpool/data/subvol-104-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
│   ├─/rpool/data/subvol-106-disk-0           rpool/data/subvol-106-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
│   ├─/rpool/data/subvol-109-disk-0           rpool/data/subvol-109-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
│   ├─/rpool/data/subvol-120-disk-0           rpool/data/subvol-120-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
│   ├─/rpool/data/subvol-119-disk-0           rpool/data/subvol-119-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
│   ├─/rpool/data/subvol-103-disk-0           rpool/data/subvol-103-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
│   ├─/rpool/data/subvol-118-disk-0           rpool/data/subvol-118-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
│   ├─/rpool/data/subvol-117-disk-0           rpool/data/subvol-117-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
│   ├─/rpool/data/subvol-111-disk-0           rpool/data/subvol-111-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
│   ├─/rpool/data/subvol-113-disk-0           rpool/data/subvol-113-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
│   ├─/rpool/data/subvol-110-disk-0           rpool/data/subvol-110-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
│   ├─/rpool/data/subvol-108-disk-0           rpool/data/subvol-108-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
│   ├─/rpool/data/subvol-112-disk-0           rpool/data/subvol-112-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
│   ├─/rpool/data/subvol-122-disk-0           rpool/data/subvol-122-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
│   ├─/rpool/data/subvol-123-disk-0           rpool/data/subvol-123-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
│   └─/rpool/data/subvol-124-disk-0           rpool/data/subvol-124-disk-0 zfs         rw,noatime,xattr,posixacl,casesensitive
├─/etc/pve                                    /dev/fuse                    fuse        rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other
├─/mnt                                        /etc/auto.nfs                autofs      rw,relatime,fd=7,pgrp=2048,timeout=60,minproto=5,maxproto=5,indirect,pipe_ino=37497
└─/var/lib/lxcfs                              lxcfs                        fuse.lxcfs  rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other
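The one entry that stands out to me is /mnt itself, which is an autofs mount rather than a plain directory:

Code:
root@pve:~# findmnt /mnt
TARGET SOURCE        FSTYPE OPTIONS
/mnt   /etc/auto.nfs autofs rw,relatime,fd=7,pgrp=2048,timeout=60,minproto=5,maxproto=5,indirect,pipe_ino=37497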

Code:
root@pve:~# ls -l /
total 55
lrwxrwxrwx    1 root root    7 Nov 25  2020 bin -> usr/bin
drwxr-xr-x    5 root root   17 Sep 12 04:12 boot
drwxr-xr-x   19 root root 4820 Sep 12 06:28 dev
drwxr-xr-x  114 root root  224 Sep 12 04:12 etc
drwxr-xr-x    2 root root    2 Sep 19  2020 home
lrwxrwxrwx    1 root root    7 Nov 25  2020 lib -> usr/lib
lrwxrwxrwx    1 root root    9 Nov 25  2020 lib64 -> usr/lib64
drwxr-xr-x    2 root root    2 Nov 25  2020 media
drwxr-xr-x    4 root root    0 Sep 12 04:17 mnt
drwxr-xr-x    6 root root    7 Jul 12 00:17 opt
dr-xr-xr-x 1255 root root    0 Sep 12 04:17 proc
drwx------   10 root root   23 Sep 12 06:53 root
drwxr-xr-x    4 root root    4 Dec 17  2020 rpool
drwxr-xr-x   35 root root 1480 Sep 12 09:17 run
lrwxrwxrwx    1 root root    8 Nov 25  2020 sbin -> usr/sbin
drwxr-xr-x    2 root root    2 Nov 25  2020 srv
dr-xr-xr-x   13 root root    0 Sep 12 04:17 sys
drwxrwxrwt   12 root root  240 Sep 12 08:31 tmp
drwxr-xr-x   12 root root   12 Mar  3  2024 usr
drwxr-xr-x   11 root root   14 Jul 23 12:09 var

Code:
root@pve:~# ls -l /mnt
total 0
drwxr-xr-x 2 root root 0 Sep 12 04:17 data
 
I've been reading that autofs may have something to do with it, even though I'm not using autofs for backups at all. The backup server is installed directly on Unraid as a VM.
 
What is the mount point defined in /etc/auto.master? If it's /mnt, that may be what's causing your issue.
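Something like this will print just the active lines, skipping comments and blanks:

Code:
root@pve:~# grep -vE '^[[:space:]]*(#|$)' /etc/auto.master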
 
The only things I have under /mnt are the two NFS mounts I use as mount points for the containers, etc.

The Proxmox Backup Server directory on Unraid isn't mounted anywhere as NFS (it shouldn't even be configured to be).

Code:
root@pve:~# ls /mnt
data  private
root@pve:~# ls /mnt/data
downloads  immich  media  nextcloud  scripts
 

Attachments: CleanShot 2025-09-12 at 16.41.19@2x.png
I've purposely run 'chmod -R 777' on the directory to make sure it wasn't the issue, but as you can see... VMs have no issue while containers do.
 

Attachments: CleanShot 2025-09-12 at 16.43.08@2x.png, CleanShot 2025-09-12 at 16.42.58@2x.png, CleanShot 2025-09-12 at 16.42.46@2x.png, CleanShot 2025-09-12 at 16.46.12@2x.png

/mnt/vzsnap0 gets created by that Perl script on the host (the PVE/VZDump/LXC.pm from your error), and I think it's only used when backing up containers. Here's my /mnt:

Code:
@lobster:~$ ls -l /mnt
total 12
drwxr-xr-x 4 root root 4096 Apr 11  2023 pve
drwxr-xr-x 2 root root 4096 Jan  6  2023 USB
drwxr-xr-x 2 root root 4096 Apr 11  2023 vzsnap0

The Perl script then uses rsync to get the data to the Proxmox Backup Server directory. Your problem is on the host.
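Roughly, for a container on a ZFS subvol, snapshot mode does something like this - a sketch of the sequence, not the literal Perl, using your CT 106 dataset as the example:

Code:
mkdir /mnt/vzsnap0                                    # <- the step that's failing for you
zfs snapshot rpool/data/subvol-106-disk-0@vzdump      # point-in-time snapshot of the CT rootfs
mount -t zfs rpool/data/subvol-106-disk-0@vzdump /mnt/vzsnap0
# ... archive the contents of /mnt/vzsnap0, then unmount and destroy the snapshot

If the mkdir on /mnt is refused, the job dies right there before any snapshot is taken.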

Phil
 
So I just discovered it definitely has to do with autofs on the Proxmox host! If I run 'systemctl stop autofs', I can back up my containers successfully. As soon as I start it back up, the backup fails again (screenshot).
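For the record, the whole test was just this (same job as in my first post):

Code:
root@pve:~# systemctl stop autofs
root@pve:~# vzdump 106 --storage backups --mode snapshot    # completes fine
root@pve:~# systemctl start autofs
root@pve:~# vzdump 106 --storage backups --mode snapshot    # Permission denied again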

I was using autofs to make sure my NFS mounts were mounted on the host, and then using mp0: to pass them through to the containers (config sketch below). Is there a better alternative that retries the NFS connection on the host if it goes down, while also still letting backups work?
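For reference, the pass-through on the container side is just a bind mount of the autofs-managed path - CT 106 as an illustrative example, exact paths may differ:

Code:
# /etc/pve/lxc/106.conf
# host path (autofs-managed NFS mount) -> same path inside the container
mp0: /mnt/data,mp=/mnt/data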
 

Attachments: CleanShot 2025-09-12 at 19.25.30@2x.png
Can you post the contents of /etc/auto.master (redact as needed)?

It looks like autofs is configured to mount your NFS shares directly on /mnt.

Phil
 
Code:
#
#/misc  /etc/auto.misc
#
# NOTE: mounts done from a hosts map will be mounted with the
#       "nosuid" and "nodev" options unless the "suid" and "dev"
#       options are explicitly given.
#
#/net   -hosts
#
# Include /etc/auto.master.d/*.autofs
# To add an extra map using this mechanism you will need to add
# two configuration items - one /etc/auto.master.d/extra.autofs file
# (using the same line format as the auto.master file)
# and a separate mount map (e.g. /etc/auto.extra or an auto.extra NIS map)
# that is referred to by the extra.autofs file.
#
+dir:/etc/auto.master.d
#
# If you have fedfs set up and the related binaries, either
# built as part of autofs or installed from another package,
# uncomment this line to use the fedfs program map to access
# your fedfs mounts.
#/nfs4  /usr/sbin/fedfs-map-nfs4 nobind
#
# Include central master map if it can be found using
# nsswitch sources.
#
# Note that if there are entries for /net or /misc (as
# above) in the included master map any keys that are the
# same will not be seen as the first read key seen takes
# precedence.
#
+auto.master

/mnt /etc/auto.nfs --ghost --timeout=60

This is /etc/auto.nfs:
Code:
#
# This is an automounter map and it has the following format
# key [ -mount-options-separated-by-comma ] location
# Details may be found in the autofs(5) manpage

data        -fstype=nfs4,rw        10.0.40.2:/mnt/user/data
private        -fstype=nfs4,rw         10.0.40.2:/mnt/user/private

I mean, the shares are mounted at /mnt/data and /mnt/private, but my backup target isn't anywhere under /mnt - does that matter?
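If the problem really is that autofs owns /mnt itself, I'm guessing the fix is to hang the map off a subdirectory instead - something like this (untested):

Code:
# /etc/auto.master - dedicated subdirectory, so /mnt stays a normal directory
/mnt/nfs /etc/auto.nfs --ghost --timeout=60

and then point the container mount points at /mnt/nfs/data and /mnt/nfs/private. That would leave vzdump free to create /mnt/vzsnap0.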