unable to restore an unprivileged CT as unprivileged

gomu

New Member
Dec 2, 2020
Hello,

I run Proxmox VE 5.4 with a ZFS root on a standalone host. I'm in the process of migrating to a two-node cluster. Both are hosted at OVH.
Once the new setup is ready, I plan to move my existing CTs (no VMs) using the backup/restore technique. Please note that the CTs were created in unprivileged mode.

For now I'm in the testing phase, and I realized that the backups can't be restored as unprivileged.
The command used is:
pct restore 100 /var/lib//vz/dump/vzdump-lxc-100-2020_11_29-17_38_20.tar.lzo -unprivileged -storage local-zfs
It fails with the following log:
Code:
extracting archive '/var/lib/vz/dump/vzdump-lxc-100-2020_11_29-17_38_20.tar.lzo'
tar: ./var/local: Cannot mkdir: Permission denied
...
tar: ./var/spool/plymouth: Cannot mkdir: No such file or directory
Total bytes read: 2163251200 (2.1GiB, 91MiB/s)
tar: Exiting with failure status due to previous errors
unable to restore CT 100 - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- tar xpf - --lzop --totals --one-file-system -p --sparse --numeric-owner --acls --xattrs '--xattrs-include=user.*' '--xattrs-include=security.capability' '--warning=no-file-ignored' '--warning=no-xattr-write' -C /var/lib/lxc/100/rootfs --skip-old-files --anchored --exclude './dev/*'' failed: exit code 2

Using the command :
pct restore 100 /var/lib//vz/dump/vzdump-lxc-100-2020_11_29-17_38_20.tar.lzo -storage local-zfs
works like a charm, but I lose the benefit of keeping the setup as safe as possible.

I did some research and found similar threads from the past.
The last one shows a workaround that is a bit complicated, which I condensed to this:
  1. execute the restore command in a screen window: pct restore 100 /var/lib//vz/dump/vzdump-lxc-100-2020_11_29-17_38_20.tar.lzo -unprivileged -storage local-zfs
  2. in another screen window, execute the workaround: setfacl -R -m user:100000:rwX /rpool/data/subvol-100-disk-0
Notes:
  • you must be quick, i.e. run the setfacl command before the restore reaches the point where it tries to restore the /var/ directory
  • you might need to install the ACL tools: apt-get install acl
  • you might need to adjust the path /rpool/data/subvol-100-disk-0 with the right CT number (e.g. 101, 102, ...) and the right disk number (0, 1, ...), for example /rpool/data/subvol-155-disk-17
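The two screen windows can be collapsed into a small watcher script that removes the manual race: it polls until pct restore has created the subvolume, then applies the ACL immediately. This is only a sketch of the steps above, not official tooling; the pool path and the 100000 uid offset are assumptions matching the defaults in this thread.

```shell
#!/bin/bash
# Sketch: automate the workaround. Start this in one shell, then launch
# `pct restore ...` in another. Assumptions: ZFS pool "rpool" and the
# default uid offset 100000 from /etc/subuid.

# The ACL command is kept in a variable so a non-default uid offset
# can be substituted.
ACL_CMD=${ACL_CMD:-"setfacl -R -m user:100000:rwX"}

wait_and_fix() {
    # Poll until `pct restore` has created the subvolume mountpoint ...
    until [ -d "$1" ]; do sleep 0.2; done
    # ... then grant the mapped container root write access right away,
    # before tar reaches the /var/ directory.
    $ACL_CMD "$1"
}

# Example (adjust CTID and disk number to your restore):
#   wait_and_fix /rpool/data/subvol-100-disk-0
```

With this polling in the background, the "you must be quick" timing note no longer applies: the ACL is applied a fraction of a second after the dataset appears.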
Is it expected behaviour? I guess not.
Is it due to using ZFS? I can't tell.
Is there a better way to fix the issue?
 
hi,

pct restore 100 /var/lib//vz/dump/vzdump-lxc-100-2020_11_29-17_38_20.tar.lzo -unprivileged -storage local-zfs
can you try --unprivileged 1?
 
For now I'm in the testing phase and I realized that the backups can't be restored as unprivileged.
restored where? same node or a different machine? if different, please post pveversion -v

can it be restored normally on the same node?

also do you maybe have a custom uid mapping somewhere? can you post contents of /etc/subuid and /etc/subgid ?

also please post the configuration of the container: pct config CTID
 
Hello,

Many thanks for your interest.
  • can you try --unprivileged 1?
It gives the same result.
I used: pct restore 100 /var/lib/vz/dump/vzdump-lxc-100-2020_11_29-17_38_20.tar.lzo --unprivileged 1 -storage local-zfs
restored where? same node or a different machine? if different, please post pveversion -v
Same node or different machine -> no difference.
Basically I'm trying to restore on a different machine. But once, by mistake, I launched the restore job on the node that holds the running CT (in the web UI I didn't realize I wasn't on the new machine), and I remember it ended with the same tar permission denied errors. The CT was running though, so I wonder how I was authorized to restore the CT over itself... maybe it chose a new CTID automatically?
  1. current node for the CT (the one where the CT was created as unprivileged)
    Bash:
    root@ns3368367:~# pveversion -v
    proxmox-ve: 5.4-2 (running kernel: 4.15.18-29-pve)
    pve-manager: 5.4-15 (running version: 5.4-15/d0ec33c6)
    pve-kernel-4.15: 5.4-18
    pve-kernel-4.15.18-29-pve: 4.15.18-57
    corosync: 2.4.4-pve1
    criu: 2.11.1-1~bpo90
    glusterfs-client: 3.8.8-1
    ksm-control-daemon: 1.2-2
    libjs-extjs: 6.0.1-2
    libpve-access-control: 5.1-12
    libpve-apiclient-perl: 2.0-5
    libpve-common-perl: 5.0-56
    libpve-guest-common-perl: 2.0-20
    libpve-http-server-perl: 2.0-14
    libpve-storage-perl: 5.0-44
    libqb0: 1.0.3-1~bpo9
    lvm2: 2.02.168-pve6
    lxc-pve: 3.1.0-7
    lxcfs: 3.0.3-pve1
    novnc-pve: 1.0.0-3
    proxmox-widget-toolkit: 1.0-28
    pve-cluster: 5.0-38
    pve-container: 2.0-42
    pve-docs: 5.4-2
    pve-edk2-firmware: 1.20190312-1
    pve-firewall: 3.0-22
    pve-firmware: 2.0-7
    pve-ha-manager: 2.0-9
    pve-i18n: 1.1-4
    pve-libspice-server1: 0.14.1-2
    pve-qemu-kvm: 3.0.1-4
    pve-xtermjs: 3.12.0-1
    qemu-server: 5.0-56
    smartmontools: 6.5+svn4324-1
    spiceterm: 3.0-5
    vncterm: 1.5-3
    zfsutils-linux: 0.7.13-pve1~bpo2
  2. new node where I want to restore the CT on
    Bash:
    root@ns3182923:~# pveversion -v
    proxmox-ve: 5.4-2 (running kernel: 4.15.18-30-pve)
    pve-manager: 5.4-15 (running version: 5.4-15/d0ec33c6)
    pve-kernel-4.15: 5.4-19
    pve-kernel-4.15.18-30-pve: 4.15.18-58
    pve-kernel-4.15.18-12-pve: 4.15.18-36
    corosync: 2.4.4-pve1
    criu: 2.11.1-1~bpo90
    glusterfs-client: 3.8.8-1
    ksm-control-daemon: 1.2-2
    libjs-extjs: 6.0.1-2
    libpve-access-control: 5.1-12
    libpve-apiclient-perl: 2.0-5
    libpve-common-perl: 5.0-56
    libpve-guest-common-perl: 2.0-20
    libpve-http-server-perl: 2.0-14
    libpve-storage-perl: 5.0-44
    libqb0: 1.0.3-1~bpo9
    lvm2: 2.02.168-pve6
    lxc-pve: 3.1.0-7
    lxcfs: 3.0.3-pve1
    novnc-pve: 1.0.0-3
    proxmox-widget-toolkit: 1.0-28
    pve-cluster: 5.0-38
    pve-container: 2.0-42
    pve-docs: 5.4-2
    pve-edk2-firmware: 1.20190312-1
    pve-firewall: 3.0-22
    pve-firmware: 2.0-7
    pve-ha-manager: 2.0-9
    pve-i18n: 1.1-4
    pve-libspice-server1: 0.14.1-2
    pve-qemu-kvm: 3.0.1-4
    pve-xtermjs: 3.12.0-1
    qemu-server: 5.0-56
    smartmontools: 6.5+svn4324-1
    spiceterm: 3.0-5
    vncterm: 1.5-3
    zfsutils-linux: 0.7.13-pve1~bpo2
can it be restored normally on the same node?
As stated earlier, I tried this by mistake and it failed in the same way.
also do you maybe have a custom uid mapping somewhere? can you post contents of /etc/subuid and /etc/subgid ?
Not that I'm aware of ;-)
Bash:
root@ns3368367:~# cat /etc/subuid
root:100000:65536
root@ns3368367:~# cat /etc/subgid
root:100000:65536
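For reference, that default line means container uids are shifted by 100000 on the host: container root (uid 0) becomes host uid 100000, which is exactly the user the setfacl workaround grants access to. A quick helper (hypothetical, just to illustrate the arithmetic) that computes the host uid from a subuid-style line:

```shell
# map_uid: compute the host uid for a container uid, given a
# /etc/subuid-style line "user:start:count". With the default
# "root:100000:65536", container uid U maps to host uid 100000 + U
# (for U < 65536).
map_uid() {
    # $1 = subuid-style line, $2 = container uid
    echo "$1" | awk -F: -v u="$2" '{
        if (u + 0 < $3 + 0) print $2 + u; else print "out of range"
    }'
}

map_uid "root:100000:65536" 0      # container root -> 100000
map_uid "root:100000:65536" 1000   # first user     -> 101000
```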

also please post the configuration of the container: pct config CTID
Bash:
root@ns3368367:~# pct config 100
arch: amd64
cores: 1
hostname: lcoovhdolp002
memory: 2048
mp0: local-zfs:subvol-100-disk-1,mp=/var/lib/dolibarr,backup=1,size=8G
nameserver: 37.187.88.192 213.186.33.99
net0: name=eth0,bridge=vmbr0,gw=37.187.88.254,gw6=2001:41d0:a:30ff:ff:ff:ff:ff,hwaddr=02:00:00:2d:95:91,ip=87.98.183.98/32,ip6=2001:41d0:a:30c0::1/128,type=veth
onboot: 0
ostype: ubuntu
rootfs: local-zfs:subvol-100-disk-0,size=8G
 
after doing the workaround and restoring the container successfully once, can you try to make another backup and see if that one behaves the same?
 
Hello,
Now that this proxmox host has been deprecated, I was able to do some more tests.
  1. Restoring another unprivileged CT fails in the same way (tar: ./var/local: Cannot mkdir: Permission denied), so the issue does not seem tied to this particular CT
  2. Applying the same fix setfacl -R -m user:100000:rwX /rpool/data/subvol-101-disk-0 at the right time allows the other CT to be restored successfully too
  3. I made a backup of the first CT after it was restored successfully with the setfacl fix; this "2nd generation" backup needs the same fix to restore successfully. It won't restore as-is, even though it was fixed once
I will now play the 5.4 to 6.x upgrade scenario on this server, and I will try backup/restore of unprivileged CTs again once all my hosts run 6.x
 
