Backup of CT fails with 'permission denied' but VMs back up without issue

ingolo

I have two Proxmox servers that back up to a Synology NAS via NFS. I am getting this error when trying to back up any CTs:
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
INFO: creating vzdump archive '/mnt/pve/backup-syn-nfs/dump/vzdump-lxc-102-2022_06_19-01_10_01.tar.zst'
INFO: tar: /mnt/pve/backup-syn-nfs/dump/vzdump-lxc-102-2022_06_19-01_10_01.tmp: Cannot open: Permission denied
INFO: tar: Error is not recoverable: exiting now
But if I look at the other backups (VMs), they have no issues:
INFO: starting new backup job: vzdump --notes-template '{{node}} - {{vmid}} - {{guestname}}' --storage backup-syn-nfs --compress zstd --mode snapshot --mailnotification always --prune-backups 'keep-daily=7,keep-last=10,keep-monthly=3' --quiet 1 --all 1
INFO: Starting Backup of VM 100 (qemu)
INFO: Backup started at 2022-06-19 01:00:02
INFO: status = running
INFO: VM Name: monitor
INFO: include disk 'scsi0' 'SSD_ZFS_PVE1:vm-100-disk-0' 64G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: pending configuration changes found (not included into backup)
INFO: creating vzdump archive '/mnt/pve/backup-syn-nfs/dump/vzdump-qemu-100-2022_06_19-01_00_02.vma.zst'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task '350560a2-de72-443d-a853-f86fa867e7fc'
INFO: resuming VM again
INFO: 0% (348.0 MiB of 64.0 GiB) in 3s, read: 116.0 MiB/s, write: 67.8 MiB/s
INFO: 2% (1.5 GiB of 64.0 GiB) in 6s, read: 400.0 MiB/s, write: 94.7 MiB/s
INFO: 3% (1.9 GiB of 64.0 GiB) in 9s, read: 142.6 MiB/s, write: 97.8 MiB/s
INFO: 4% (2.7 GiB of 64.0 GiB) in 16s, read: 116.5 MiB/s, write: 94.5 MiB/s
INFO: 5% (3.2 GiB of 64.0 GiB) in 19s, read: 166.8 MiB/s, write: 94.3 MiB/s
INFO: 6% (3.9 GiB of 64.0 GiB) in 26s, read: 96.3 MiB/s, write: 71.3 MiB/s
INFO: 8% (5.6 GiB of 64.0 GiB) in 29s, read: 576.4 MiB/s, write: 42.3 MiB/s
INFO: 11% (7.3 GiB of 64.0 GiB) in 33s, read: 435.9 MiB/s, write: 35.1 MiB/s
INFO: 13% (8.4 GiB of 64.0 GiB) in 36s, read: 383.0 MiB/s, write: 54.3 MiB/s
INFO: 14% (9.0 GiB of 64.0 GiB) in 44s, read: 82.0 MiB/s, write: 74.7 MiB/s
INFO: 15% (9.6 GiB of 64.0 GiB) in 52s, read: 75.2 MiB/s, write: 68.4 MiB/s
INFO: 16% (10.2 GiB of 64.0 GiB) in 59s, read: 91.7 MiB/s, write: 74.3 MiB/s
INFO: 18% (11.8 GiB of 64.0 GiB) in 1m 5s, read: 258.6 MiB/s, write: 38.0 MiB/s
INFO: 19% (12.2 GiB of 64.0 GiB) in 1m 9s, read: 108.8 MiB/s, write: 78.6 MiB/s
INFO: 21% (13.8 GiB of 64.0 GiB) in 1m 12s, read: 563.1 MiB/s, write: 44.7 MiB/s
INFO: 23% (15.1 GiB of 64.0 GiB) in 1m 16s, read: 321.7 MiB/s, write: 62.0 MiB/s
INFO: 24% (15.9 GiB of 64.0 GiB) in 1m 19s, read: 269.5 MiB/s, write: 75.4 MiB/s
INFO: 27% (17.7 GiB of 64.0 GiB) in 1m 22s, read: 635.6 MiB/s, write: 46.8 MiB/s
INFO: 30% (19.3 GiB of 64.0 GiB) in 1m 25s, read: 538.9 MiB/s, write: 24.6 MiB/s
INFO: 33% (21.7 GiB of 64.0 GiB) in 1m 28s, read: 809.6 MiB/s, write: 32.8 MiB/s
INFO: 34% (21.9 GiB of 64.0 GiB) in 1m 31s, read: 64.0 MiB/s, write: 60.9 MiB/s
INFO: 35% (22.5 GiB of 64.0 GiB) in 1m 38s, read: 88.0 MiB/s, write: 82.3 MiB/s
INFO: 36% (23.1 GiB of 64.0 GiB) in 1m 45s, read: 93.2 MiB/s, write: 91.4 MiB/s
INFO: 37% (23.8 GiB of 64.0 GiB) in 1m 52s, read: 97.4 MiB/s, write: 79.5 MiB/s
INFO: 38% (24.4 GiB of 64.0 GiB) in 1m 59s, read: 87.0 MiB/s, write: 84.1 MiB/s
INFO: 39% (25.0 GiB of 64.0 GiB) in 2m 7s, read: 85.6 MiB/s, write: 84.3 MiB/s
INFO: 40% (25.6 GiB of 64.0 GiB) in 2m 14s, read: 86.0 MiB/s, write: 68.1 MiB/s
INFO: 41% (26.3 GiB of 64.0 GiB) in 2m 21s, read: 103.1 MiB/s, write: 99.0 MiB/s
INFO: 42% (27.1 GiB of 64.0 GiB) in 2m 25s, read: 201.2 MiB/s, write: 45.1 MiB/s
INFO: 43% (27.8 GiB of 64.0 GiB) in 2m 28s, read: 233.2 MiB/s, write: 62.1 MiB/s
INFO: 48% (31.2 GiB of 64.0 GiB) in 2m 32s, read: 857.3 MiB/s, write: 25.7 MiB/s
INFO: 55% (35.7 GiB of 64.0 GiB) in 2m 36s, read: 1.1 GiB/s, write: 7.0 KiB/s
INFO: 60% (38.9 GiB of 64.0 GiB) in 2m 39s, read: 1.1 GiB/s, write: 0 B/s
INFO: 65% (42.0 GiB of 64.0 GiB) in 2m 42s, read: 1.0 GiB/s, write: 0 B/s
INFO: 70% (45.4 GiB of 64.0 GiB) in 2m 45s, read: 1.1 GiB/s, write: 0 B/s
INFO: 75% (48.5 GiB of 64.0 GiB) in 2m 48s, read: 1.0 GiB/s, write: 0 B/s
INFO: 83% (53.3 GiB of 64.0 GiB) in 2m 52s, read: 1.2 GiB/s, write: 0 B/s
INFO: 88% (57.0 GiB of 64.0 GiB) in 2m 55s, read: 1.2 GiB/s, write: 0 B/s
INFO: 93% (60.1 GiB of 64.0 GiB) in 2m 58s, read: 1.0 GiB/s, write: 0 B/s
INFO: 98% (63.1 GiB of 64.0 GiB) in 3m 1s, read: 1.0 GiB/s, write: 0 B/s
INFO: 100% (64.0 GiB of 64.0 GiB) in 3m 2s, read: 880.5 MiB/s, write: 8.0 KiB/s
INFO: backup is sparse: 53.61 GiB (83%) total zero data
INFO: transferred 64.00 GiB in 182 seconds (360.1 MiB/s)
INFO: archive file size: 3.21GB
INFO: adding notes to backup
INFO: prune older backups with retention: keep-daily=7, keep-last=10, keep-monthly=3
INFO: pruned 0 backup(s)
INFO: Finished Backup of VM 100 (00:03:03)
INFO: Backup finished at 2022-06-19 01:03:05
I'm not sure what I'm missing at this point; anything that points me in the right direction would be greatly appreciated, thanks.

[Edit - More/Extra Info]
Looking at the folder from the Proxmox host (I am able to make changes and create files in this directory - see the quick write test after the listings below):
root@pve1:/mnt/pve/backup-syn-nfs# ls -lhsa
total 4.0K

0 drwxrwxrwx 1 root root 94 Jun 6 10:28 .
4.0K drwxr-xr-x 3 root root 4.0K Jun 6 10:28 ..
0 drwxrwxrwx 1 root root 4.3K Jun 19 01:14 dump
0 drwxrwxrwx 1 root root 0 Jun 6 10:28 images
0 drwxrwxrwx 1 root root 0 Jun 6 10:28 private
0 drwxrwxrwx 1 root root 22 Jun 6 10:26 '#recycle'
0 drwxrwxrwx 1 root root 0 Jun 6 10:28 snippets
0 drwxrwxrwx 1 root root 16 Jun 6 10:28 template
root@pve1:/mnt/pve/backup-syn-nfs/dump# ls -lhsa
total 82G
0 drwxrwxrwx 1 root root 4.3K Jun 19 01:14 .
0 drwxrwxrwx 1 root root 94 Jun 6 10:28 ..
4.0K -rw-r--r-- 1 root root 1.6K Jun 6 10:28 vzdump-lxc-102-2022_06_06-10_28_51.log
4.0K -rw-r--r-- 1 root root 1.6K Jun 6 10:51 vzdump-lxc-102-2022_06_06-10_51_36.log
4.0K -rw-r--r-- 1 root root 2.2K Jun 6 11:44 vzdump-lxc-102-2022_06_06-10_51_56.log
4.0K -rw-r--r-- 1 root root 1.6K Jun 9 22:13 vzdump-lxc-102-2022_06_09-22_13_42.log
4.0K -rw-r--r-- 1 root root 1.6K Jun 12 01:09 vzdump-lxc-102-2022_06_12-01_09_53.log
4.0K -rw-r--r-- 1 root root 1.6K Jun 19 01:10 vzdump-lxc-102-2022_06_19-01_10_01.log
4.0K -rw-r--r-- 1 root root 1.6K Jun 6 22:26 vzdump-lxc-104-2022_06_06-22_26_05.log
4.0K -rw-r--r-- 1 root root 1.6K Jun 9 22:16 vzdump-lxc-104-2022_06_09-22_16_02.log
4.0K -rw-r--r-- 1 root root 1.6K Jun 12 01:12 vzdump-lxc-104-2022_06_12-01_12_05.log
4.0K -rw-r--r-- 1 root root 1.6K Jun 19 01:12 vzdump-lxc-104-2022_06_19-01_12_17.log
4.0K -rw-r--r-- 1 root root 1.6K Jun 9 22:16 vzdump-lxc-105-2022_06_09-22_16_03.log
4.0K -rw-r--r-- 1 root root 1.6K Jun 12 01:12 vzdump-lxc-105-2022_06_12-01_12_05.log
4.0K -rw-r--r-- 1 root root 1.6K Jun 19 01:12 vzdump-lxc-105-2022_06_19-01_12_17.log
4.0K -rw-r--r-- 1 root root 1.6K Jun 9 22:16 vzdump-lxc-106-2022_06_09-22_16_03.log
4.0K -rw-r--r-- 1 root root 1.6K Jun 12 01:12 vzdump-lxc-106-2022_06_12-01_12_06.log
4.0K -rw-r--r-- 1 root root 1.6K Jun 19 01:12 vzdump-lxc-106-2022_06_19-01_12_17.log
4.0K -rw-r--r-- 1 root root 3.5K Jun 6 10:33 vzdump-qemu-100-2022_06_06-10_31_11.log
2.4G -rwxrwxrwx 1 root root 2.4G Jun 6 10:33 vzdump-qemu-100-2022_06_06-10_31_11.vma.zst
4.0K -rw-r--r-- 1 root root 18 Jun 6 10:33 vzdump-qemu-100-2022_06_06-10_31_11.vma.zst.notes
8.0K -rw-r--r-- 1 root root 4.9K Jun 9 22:06 vzdump-qemu-100-2022_06_09-22_04_01.log
2.7G -rwxrwxrwx 1 root root 2.7G Jun 9 22:06 vzdump-qemu-100-2022_06_09-22_04_01.vma.zst
4.0K -rw-r--r-- 1 root root 31 Jun 9 22:06 vzdump-qemu-100-2022_06_09-22_04_01.vma.zst.notes
8.0K -rw-r--r-- 1 root root 5.1K Jun 12 01:02 vzdump-qemu-100-2022_06_12-01_00_02.log
2.9G -rwxrwxrwx 1 root root 2.9G Jun 12 01:02 vzdump-qemu-100-2022_06_12-01_00_02.vma.zst
4.0K -rw-r--r-- 1 root root 31 Jun 12 01:02 vzdump-qemu-100-2022_06_12-01_00_02.vma.zst.notes
8.0K -rw-r--r-- 1 root root 5.3K Jun 19 01:03 vzdump-qemu-100-2022_06_19-01_00_02.log
3.3G -rwxrwxrwx 1 root root 3.3G Jun 19 01:03 vzdump-qemu-100-2022_06_19-01_00_02.vma.zst
4.0K -rw-r--r-- 1 root root 31 Jun 19 01:03 vzdump-qemu-100-2022_06_19-01_00_02.vma.zst.notes
8.0K -rw-r--r-- 1 root root 7.0K Jun 6 10:40 vzdump-qemu-101-2022_06_06-10_33_53.log
17G -rwxrwxrwx 1 root root 17G Jun 6 10:40 vzdump-qemu-101-2022_06_06-10_33_53.vma.zst
4.0K -rw-r--r-- 1 root root 17 Jun 6 10:40 vzdump-qemu-101-2022_06_06-10_33_53.vma.zst.notes
8.0K -rw-r--r-- 1 root root 7.4K Jun 9 22:13 vzdump-qemu-101-2022_06_09-22_06_31.log
17G -rwxrwxrwx 1 root root 17G Jun 9 22:13 vzdump-qemu-101-2022_06_09-22_06_31.vma.zst
4.0K -rw-r--r-- 1 root root 30 Jun 9 22:13 vzdump-qemu-101-2022_06_09-22_06_31.vma.zst.notes
8.0K -rw-r--r-- 1 root root 7.3K Jun 12 01:09 vzdump-qemu-101-2022_06_12-01_02_42.log
17G -rwxrwxrwx 1 root root 17G Jun 12 01:09 vzdump-qemu-101-2022_06_12-01_02_42.vma.zst
4.0K -rw-r--r-- 1 root root 30 Jun 12 01:09 vzdump-qemu-101-2022_06_12-01_02_42.vma.zst.notes
8.0K -rw-r--r-- 1 root root 7.3K Jun 19 01:10 vzdump-qemu-101-2022_06_19-01_03_05.log
17G -rwxrwxrwx 1 root root 17G Jun 19 01:10 vzdump-qemu-101-2022_06_19-01_03_05.vma.zst
4.0K -rw-r--r-- 1 root root 30 Jun 19 01:10 vzdump-qemu-101-2022_06_19-01_03_05.vma.zst.notes
8.0K -rw-r--r-- 1 root root 5.0K Jun 9 22:16 vzdump-qemu-103-2022_06_09-22_13_44.log
1.5G -rwxrwxrwx 1 root root 1.5G Jun 9 22:16 vzdump-qemu-103-2022_06_09-22_13_44.vma.zst
4.0K -rw-r--r-- 1 root root 32 Jun 9 22:16 vzdump-qemu-103-2022_06_09-22_13_44.vma.zst.notes
8.0K -rw-r--r-- 1 root root 4.9K Jun 12 01:12 vzdump-qemu-103-2022_06_12-01_09_54.log
1.5G -rwxrwxrwx 1 root root 1.5G Jun 12 01:12 vzdump-qemu-103-2022_06_12-01_09_54.vma.zst
4.0K -rw-r--r-- 1 root root 32 Jun 12 01:12 vzdump-qemu-103-2022_06_12-01_09_54.vma.zst.notes
8.0K -rw-r--r-- 1 root root 5.1K Jun 19 01:12 vzdump-qemu-103-2022_06_19-01_10_02.log
1.5G -rwxrwxrwx 1 root root 1.5G Jun 19 01:12 vzdump-qemu-103-2022_06_19-01_10_02.vma.zst
4.0K -rw-r--r-- 1 root root 32 Jun 19 01:12 vzdump-qemu-103-2022_06_19-01_10_02.vma.zst.notes
8.0K -rw-r--r-- 1 root root 4.8K Jun 19 01:14 vzdump-qemu-107-2022_06_19-01_12_18.log
702M -rwxrwxrwx 1 root root 702M Jun 19 01:14 vzdump-qemu-107-2022_06_19-01_12_18.vma.zst
4.0K -rw-r--r-- 1 root root 22 Jun 19 01:14 vzdump-qemu-107-2022_06_19-01_12_18.vma.zst.notes
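For reference, a quick write test as root in the dump directory succeeds (something along these lines - the test filename is just an example):
root@pve1:~# touch /mnt/pve/backup-syn-nfs/dump/write-test.tmp && rm /mnt/pve/backup-syn-nfs/dump/write-test.tmp && echo "root write OK"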
This happens on manual and automatic backups, and it happens with ALL CTs but not VMs.
 
The container backup likely runs as an unprivileged user - that user also needs to have access to your NFS export.
 
By default, that would be user/group 100000.
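You can check whether that UID can actually write to the export with something like this on the host (a sketch - setpriv is part of util-linux, and the test filename is just an example):
setpriv --reuid 100000 --regid 100000 --clear-groups touch /mnt/pve/backup-syn-nfs/dump/perm-test.tmp && echo "write OK" || echo "write denied"
rm -f /mnt/pve/backup-syn-nfs/dump/perm-test.tmp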
 
There is no user/group on the host that matches the above...

root@pve1:~# cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin
_chrony:x:101:105:Chrony daemon,,,:/var/lib/chrony:/usr/sbin/nologin
messagebus:x:102:107::/nonexistent:/usr/sbin/nologin
_rpc:x:103:65534::/run/rpcbind:/usr/sbin/nologin
systemd-network:x:104:109:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:105:110:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
postfix:x:106:112::/var/spool/postfix:/usr/sbin/nologin
tcpdump:x:107:114::/nonexistent:/usr/sbin/nologin
sshd:x:108:65534::/run/sshd:/usr/sbin/nologin
statd:x:109:65534::/var/lib/nfs:/usr/sbin/nologin
gluster:x:110:116::/var/lib/glusterd:/usr/sbin/nologin
tss:x:111:117:TPM software stack,,,:/var/lib/tpm:/bin/false
ceph:x:64045:64045:Ceph storage service:/var/lib/ceph:/usr/sbin/nologin
systemd-timesync:x:999:999:systemd Time Synchronization:/:/usr/sbin/nologin
systemd-coredump:x:998:998:systemd Core Dumper:/:/usr/sbin/nologin
Debian-snmp:x:112:118::/var/lib/snmp:/bin/false
netdata:x:113:119::/var/lib/netdata:/usr/sbin/nologin
 
Yeah - that's expected; it's the unprivileged UID from within the container, which is not supposed to exist as a regular user on the host. You can still set permissions/ownership for it, though.
 
Make the backup storage writable by the unprivileged user.
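For example, something along these lines on the host (a sketch - adjust the path and ownership to your setup; 100000:100000 is the default root mapping of an unprivileged container):
chown 100000:100000 /mnt/pve/backup-syn-nfs/dump    # or the whole export, depending on your layout
Keep in mind that the effective permissions also depend on the NFS export's squash/mapping settings on the Synology side, so the same UID has to be allowed to write there as well.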