vzdump - Permission denied

Matus

Hello,

when I try to back up an unprivileged container on ZFS, I get these error messages:

Code:
>vzdump 400  --storage syno10
INFO: starting new backup job: vzdump 400 --storage syno10
INFO: Starting Backup of VM 400 (lxc)
INFO: Backup started at 2023-06-15 11:54:24
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: syslog
INFO: including mount point rootfs ('/') in backup
INFO: creating vzdump archive '/mnt/pve/syno10/dump/vzdump-lxc-400-2023_06_15-11_54_24.tar'
INFO: tar: ./etc/ssh/ssh_host_dsa_key: Cannot open: Permission denied
INFO: tar: ./etc/ssh/ssh_host_rsa_key: Cannot open: Permission denied
INFO: tar: ./etc/ssh/ssh_host_ecdsa_key: Cannot open: Permission denied
INFO: tar: ./etc/ssh/ssh_host_ed25519_key: Cannot open: Permission denied
INFO: tar: ./etc/shadow-: Cannot open: Permission denied
INFO: tar: ./etc/gshadow-: Cannot open: Permission denied
INFO: tar: ./etc/security/opasswd: Cannot open: Permission denied
INFO: tar: ./etc/ssl/private: Cannot open: Permission denied
INFO: tar: ./etc/shadow: Cannot open: Permission denied
...
INFO: tar: ./var/log/user.log.2.gz: Cannot open: Permission denied
INFO: Total bytes written: 1823569920 (1.7GiB, 74MiB/s)
INFO: tar: Exiting with failure status due to previous errors
ERROR: Backup of VM 400 failed - command 'set -o pipefail && lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar cpf - --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' --one-file-system '--warning=no-file-ignored' '--directory=/mnt/pve/syno10/dump/vzdump-lxc-400-2023_06_15-11_54_24.tmp' ./etc/vzdump/pct.conf ./etc/vzdump/pct.fw '--directory=/mnt/vzsnap0' --no-anchored '--exclude=lost+found' --anchored '--exclude=./tmp/?*' '--exclude=./var/tmp/?*' '--exclude=./var/run/?*.pid' ./ >/mnt/pve/syno10/dump/vzdump-lxc-400-2023_06_15-11_54_24.dat' failed: exit code 2
INFO: Failed at 2023-06-15 11:54:55
INFO: Backup job finished with errors
job errors

Could you help me with this problem?
Thanks
 
Hi, apparently the listed files in the container filesystem (/etc/ssh/ssh_host_dsa_key, ...) are not readable for uid/gid 100000/100000. You can check the filesystem permissions by mounting and inspecting the container filesystem on the host, e.g. by running (replacing VMID with the container ID):
Code:
pct mount VMID
ls -al /var/lib/lxc/VMID/rootfs/etc/ssh/ssh_host_dsa_key

Could you post the output of this command, along with your container config (the output of pct config VMID)?
Is it possible that the unprivileged option was accidentally modified for that container?
 
The files are readable by root:root:
Code:
/dpool/ROOT/subvol-400-disk-0# l
total 90
drwxr-xr-x  2 root root 121 May 13  2022 bin
drwxr-xr-x  2 root root   2 Nov 22  2020 boot
......
drwxr-xr-x 11 root root  13 Dec 11  2020 var

I am not sure what you mean by your last question.

Here is the output:
Code:
ls -al /var/lib/lxc/400/rootfs/etc/ssh/ssh_host_dsa_key
-rw------- 1 root root 1385 Feb 23  2021 /var/lib/lxc/400/rootfs/etc/ssh/ssh_host_dsa_key

and
Code:
>pct config 400
arch: amd64
cores: 1
hostname: syslog
lock: mounted
memory: 1024
net0: name=eth0,bridge=vmbr2,firewall=1,hwaddr=2E:4F:8C:35:46:9E,ip=10.10.10.40/24,type=veth
net1: name=eth1,bridge=vmbr0,firewall=1,gw=10.10.2.1,hwaddr=7E:F1:EF:0E:A3:4D,ip=10.10.2.126/24,type=veth
onboot: 1
ostype: debian
rootfs: zfs-containers2:subvol-400-disk-0,size=10G
swap: 512
unprivileged: 1

After running pct mount 400, the backup now fails with:
Code:
>vzdump 400  --storage syno10
INFO: starting new backup job: vzdump 400 --storage syno10
INFO: Starting Backup of VM 400 (lxc)
INFO: Backup started at 2023-06-15 17:22:53
INFO: status = stopped
ERROR: Backup of VM 400 failed - CT is locked (mounted)
INFO: Failed at 2023-06-15 17:22:53
INFO: Backup job finished with errors
job errors
 
Thanks!
This is the problem: As the container is unprivileged, the owner of those files should not be root, but 100000:100000 (this is the uid/gid of the container root user as it is seen on the host).

The question is why these files are not owned by 100000:100000. One way this could happen is if you created the container as a privileged container and, after creation, manually added the unprivileged: 1 option. This is why I asked whether the unprivileged option had been modified -- maybe in an attempt to convert this container to an unprivileged container?

Could you please also post the output of pveversion -v?
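For reference, the mapping comes from the idmap visible in the backup command in the log (`-m u:0:100000:65536`): container uid/gid N is stored on the host as 100000+N. A minimal sketch of that arithmetic, assuming the Proxmox default base 100000 and range 65536 (a custom idmap in the container config would differ):

```shell
#!/bin/sh
# Default unprivileged-container idmap: container ids 0..65535
# map to host ids 100000..165535.
HOST_BASE=100000
RANGE=65536

map_uid() {
    # Print the host uid that a given container uid appears as on disk.
    ct_uid=$1
    if [ "$ct_uid" -lt 0 ] || [ "$ct_uid" -ge "$RANGE" ]; then
        echo "uid $ct_uid is outside the idmap range" >&2
        return 1
    fi
    echo $((HOST_BASE + ct_uid))
}

map_uid 0       # container root
map_uid 1000    # a first regular user inside the container
```

So a file owned by 0:0 inside the container must show up on the host as 100000:100000; files still owned by host root:root are unreadable to the mapped tar process, which is exactly the "Cannot open: Permission denied" errors above.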
 
Thank you for your help,
after I ran chown -R 100000:100000 on the whole container filesystem, the backup started working properly.
I must have made a mistake at some point in the past.

Regards
Matus
 
Hi all

I'm having the same issue while backing up unprivileged containers.

After running pct mount VMID, I can see that the owner is 100000:100000:

Code:
root@pve:~# pct mount 104
mounted CT 104 in '/var/lib/lxc/104/rootfs'
root@pve:~# ls -al /var/lib/lxc/104/rootfs/etc/ssh/ssh_host_dsa_key
-rw------- 1 100000 100000 1381 Jun 21 09:50 /var/lib/lxc/104/rootfs/etc/ssh/ssh_host_dsa_key

And this is the resulting backup log:
Code:
INFO: starting new backup job: vzdump 104 --compress zstd --remove 0 --storage NFS-QNAP-Proxmox-Backup --mode snapshot --node pve --notes-template '{{guestname}}'
INFO: Starting Backup of VM 104 (lxc)
INFO: Backup started at 2023-06-27 09:08:40
INFO: status = running
INFO: CT Name: uptimekuma
INFO: including mount point rootfs ('/') in backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
Logical volume "snap_vm-104-disk-0_vzdump" created.
INFO: creating vzdump archive '/mnt/pve/NFS-QNAP-Proxmox-Backup/dump/vzdump-lxc-104-2023_06_27-09_08_40.tar.zst'
INFO: tar: /mnt/pve/NFS-QNAP-Proxmox-Backup/dump/vzdump-lxc-104-2023_06_27-09_08_40.tmp: Cannot open: Permission denied
INFO: tar: Error is not recoverable: exiting now
INFO: cleanup temporary 'vzdump' snapshot
Logical volume "snap_vm-104-disk-0_vzdump" successfully removed
ERROR: Backup of VM 104 failed - command 'set -o pipefail && lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar cpf - --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' --one-file-system '--warning=no-file-ignored' '--directory=/mnt/pve/NFS-QNAP-Proxmox-Backup/dump/vzdump-lxc-104-2023_06_27-09_08_40.tmp' ./etc/vzdump/pct.conf ./etc/vzdump/pct.fw '--directory=/mnt/vzsnap0' --no-anchored '--exclude=lost+found' --anchored '--exclude=./tmp/?*' '--exclude=./var/tmp/?*' '--exclude=./var/run/?*.pid' ./ | zstd --rsyncable '--threads=1' >/mnt/pve/NFS-QNAP-Proxmox-Backup/dump/vzdump-lxc-104-2023_06_27-09_08_40.tar.dat' failed: exit code 2
INFO: Failed at 2023-06-27 09:08:41
INFO: Backup job finished with errors
TASK ERROR: job errors

I can back up privileged containers to the same backup location.

This is the pveversion -v output:

Code:
root@pve:~# pveversion -v
proxmox-ve: 7.4-1 (running kernel: 6.2.11-2-pve)
pve-manager: 7.4-15 (running version: 7.4-15/a5d2a31e)
pve-kernel-5.15: 7.4-4
pve-kernel-6.2: 7.4-3
pve-kernel-6.2.11-2-pve: 6.2.11-2
pve-kernel-5.15.108-1-pve: 5.15.108-1
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.107-1-pve: 5.15.107-1
pve-kernel-5.15.104-1-pve: 5.15.104-2
pve-kernel-5.15.102-1-pve: 5.15.102-1
pve-kernel-5.15.85-1-pve: 5.15.85-1
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.15.30-2-pve: 5.15.30-3
ceph-fuse: 15.2.16-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx4
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4.1
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.4-2
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.7
libpve-storage-perl: 7.4-3
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
proxmox-backup-client: 2.4.2-1
proxmox-backup-file-restore: 2.4.2-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.2
proxmox-widget-toolkit: 3.7.3
pve-cluster: 7.3-3
pve-container: 4.4-6
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-4~bpo11+1
pve-firewall: 4.3-4
pve-firmware: 3.6-5
pve-ha-manager: 3.6.1
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-2
qemu-server: 7.4-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.11-pve1


What else can I check?

Thanks
 
Hi,
I'm having the same issue while backing up unprivileged containers.

After running pct mount VMID, I can see that the owner is 100000:100000:

Code:
INFO: tar: /mnt/pve/NFS-QNAP-Proxmox-Backup/dump/vzdump-lxc-104-2023_06_27-09_08_40.tmp: Cannot open: Permission denied

I can do a backup of Privileged containers on the same backup location
you need to ensure that the UID=100000 user is allowed to write to the NFS share.
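One direct way to probe this from the PVE host is to attempt a write as the mapped user, e.g. `lxc-usernsexec -m u:0:100000:1 -m g:0:100000:1 -- touch /mnt/pve/<share>/dump/testfile` (the same helper vzdump uses). The sketch below instead checks only the classic mode bits of a dump directory for a given uid/gid; note this is a rough local check, since server-side NFS squashing can still deny a write that the mode bits would allow (the directory path and ids are illustrative):

```shell
#!/bin/sh
# can_write DIR UID GID: rough check, from the directory's mode bits alone,
# of whether UID/GID could create files in DIR. NFS server-side squashing
# is not visible here and can still override the result.
can_write() {
    dir=$1; uid=$2; gid=$3
    set -- $(stat -c '%a %u %g' "$dir")
    mode=$1; owner=$2; group=$3
    if [ "$owner" = "$uid" ]; then
        bits=$(( (0$mode >> 6) & 2 ))   # owner write bit
    elif [ "$group" = "$gid" ]; then
        bits=$(( (0$mode >> 3) & 2 ))   # group write bit
    else
        bits=$(( 0$mode & 2 ))          # "other" write bit
    fi
    [ "$bits" -ne 0 ]
}

# Illustrative use against a local stand-in for the dump directory:
mkdir -p /tmp/dump-demo && chmod 777 /tmp/dump-demo
can_write /tmp/dump-demo 100000 100000 && echo "uid 100000 could write"
```

If the mode bits already rule out uid 100000 and the directory is local, a chown/chmod on the dump directory is enough; if they allow it but the write still fails on NFS, the restriction is on the server side.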
 
Hi,

you need to ensure that the UID=100000 user is allowed to write to the NFS share.
Thanks Fiona

Should this be done inside Proxmox, or on the NAS (in my case) where the shared folder is located?

I see that the command run during the backup is:

ERROR: Backup of VM 116 failed - command 'set -o pipefail && lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar cpf - --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' --one-file-system '--warning=no-file-ignored' '--directory=/mnt/pve/NFS-QNAP-Proxmox-Backup/dump/vzdump-lxc-116-2023_06_27-04_47_06.tmp' ./etc/vzdump/pct.conf ./etc/vzdump/pct.fw '--directory=/mnt/vzsnap0' --no-anchored '--exclude=lost+found' --anchored '--exclude=./tmp/?*' '--exclude=./var/tmp/?*' '--exclude=./var/run/?*.pid' ./ | zstd --rsyncable '--threads=1' >/mnt/pve/NFS-QNAP-Proxmox-Backup/dump/vzdump-lxc-116-2023_06_27-04_47_06.tar.dat' failed: exit code 2
 
I've also tried enabling squash (root or all) in the NFS configuration of the shared folder on the NAS, but I get a new error:

Code:
INFO: starting new backup job: vzdump 104 --compress zstd --remove 0 --storage NFS-QNAP-Proxmox-Backup --mode snapshot --node pve --notes-template '{{guestname}}'
ERROR: Backup of VM 104 failed - unable to create temporary directory '/mnt/pve/NFS-QNAP-Proxmox-Backup/dump/vzdump-lxc-104-2023_06_28-10_40_45.tmp' at /usr/share/perl5/PVE/VZDump.pm line 930.
INFO: Failed at 2023-06-28 10:40:45
INFO: Backup job finished with errors
TASK ERROR: job errors

what else can I check?

Thanks
 
I've also tried enabling squash (root or all) in the NFS configuration of the shared folder on the NAS, but I get a new error
If you do that, the root user (or all users) on the client side will be mapped to the anonymous user on the server side. But apparently the anonymous user doesn't have permissions. If you don't squash root, root will be root on the server side and be able to create the temporary directory. See the User ID Mapping section of https://linux.die.net/man/5/exports

I never used QNAP, so I don't know what needs to be configured there.
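For comparison, on a plain Linux NFS server this behaviour is set per export in /etc/exports: `no_root_squash` keeps client root as root, while `root_squash`/`all_squash` map requests to the anonymous user (a QNAP NAS exposes the same choices through its NFS share settings, just with a different UI). A hypothetical export line, with placeholder path and subnet:

```
/share/proxmox-backup  10.10.2.0/24(rw,sync,no_subtree_check,no_root_squash)
```

On a stock server you would then re-export with exportfs -ra; a NAS UI usually applies the change for you.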
 
