[SOLVED] Restoring LXC from PBS fails

Hr. Schmid

Apr 21, 2022
Attempts to restore an LXC container fail. PVE reports the following:
Code:
recovering backed-up configuration from 'PBS:backup/ct/100/2022-04-12T22:00:03Z'
restoring 'PBS:backup/ct/100/2022-04-12T22:00:03Z' now..
Error: error extracting archive - error at entry "": failed to apply directory metadata: failed to set ownership: Operation not permitted (os error 1)
TASK ERROR: unable to restore CT 100 - command 'lxc-usernsexec -m u:0:100000:1005 -m g:0:100000:1005 -m u:1005:1005:1 -m g:1005:1005:1 -m u:1006:101006:64530 -m g:1006:101006:64530 -- /usr/bin/proxmox-backup-client restore '--crypt-mode=none' ct/100/2022-04-12T22:00:03Z root.pxar /var/lib/lxc/100/rootfs --allow-existing-dirs --repository root@pam@192.168.0.220:tank4backup' failed: exit code 255

On the other hand, PBS reports success:
Code:
2022-04-23T00:15:33+02:00: starting new backup reader datastore 'tank4backup': "/mnt/datastore/tank4backup"
2022-04-23T00:15:33+02:00: protocol upgrade done
2022-04-23T00:15:33+02:00: GET /download
2022-04-23T00:15:33+02:00: download "/mnt/datastore/tank4backup/ct/100/2022-04-12T22:00:03Z/index.json.blob"
2022-04-23T00:15:33+02:00: GET /download
2022-04-23T00:15:33+02:00: download "/mnt/datastore/tank4backup/ct/100/2022-04-12T22:00:03Z/root.pxar.didx"
2022-04-23T00:15:33+02:00: register chunks in 'root.pxar.didx' as downloadable.
2022-04-23T00:15:34+02:00: GET /chunk
2022-04-23T00:15:34+02:00: download chunk "/mnt/datastore/tank4backup/.chunks/309e/309ec75d1ac62a9339870480e78b1a41f934835726461ea0e2bfdea635ec2400"
...
2022-04-23T01:34:58+02:00: download chunk "/mnt/datastore/tank4backup/.chunks/56cb/56cb1ed369cb6fecf9b153925d149e31c9cccea1f79fe80e2407529d4b5b0720"
2022-04-23T01:34:58+02:00: reader finished successfully
2022-04-23T01:34:58+02:00: TASK OK

I already played around with the privilege levels (from backup / unprivileged / privileged) as mentioned in other threads but without success.

Both PVE and PBS are up to date.

Does anyone have a clue what the problem might be - or, even better, a proposal for how to overcome it? Help is very much appreciated!

Regards,

Arno
 
Ok, meanwhile I think I have more details about the issue:

I have the following configuration (pct.conf), which I back up to PBS:
Code:
# uid map: from uid 0, map 1005 uids (in the CT) to the range starting at 100000 (on the host), so 0..1004 (CT) → 100000..101004 (host)
# we map 1 uid starting from uid 1005 onto 1005, so 1005 → 1005
# we map the remaining 64530 uids starting from 1006 onto 101006, so 1006..65535 → 101006..165535
arch: amd64
cores: 2
features: keyctl=1,nesting=1
hostname: nextcloud1
memory: 4000
mp0: SeagateHDD320GB:subvol-100-disk-0,mp=/mnt/hdd,backup=1,size=327681M
mp1: Tank_WDRED4TB:subvol-100-disk-0,mp=/mnt/hdd2,backup=1,size=3600G
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.xxx.yyy.zzz,hwaddr=AA:BB:CC:DD:EE:FF,ip=192.aaa.bbb.ccc/24,ip6=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-lvm:vm-100-disk-0,size=16G
swap: 4000
unprivileged: 1
lxc.idmap: u 0 100000 1005
lxc.idmap: g 0 100000 1005
lxc.idmap: u 1005 1005 1
lxc.idmap: g 1005 1005 1
lxc.idmap: u 1006 101006 64530
lxc.idmap: g 1006 101006 64530
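
Side note, as far as I understand it: such a custom idmap only works if the host delegates those IDs to root in /etc/subuid and /etc/subgid. On my host they contain entries roughly like the following (values specific to my setup, yours may differ):
Code:
# /etc/subuid (and analogously /etc/subgid) - assumed entries matching the mapping above
root:100000:65536
root:1005:1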

So basically I have the guest system on local-lvm:vm-100-disk-0 and two mounted disks, SeagateHDD320GB:subvol-100-disk-0 (mounted at mp0) and Tank_WDRED4TB:subvol-100-disk-0 (mounted at mp1).
I set the backup flag for the mount points because I wanted their content to be backed up as well. This works fine for the backup, but I think it now causes problems when restoring, because everything ends up in the single root.pxar.
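
(As a side note: if I only wanted the guest system itself in root.pxar, I believe the backup flag on the mount points could be turned off again with pct set - just a rough sketch, reusing the values from my config above:)
Code:
# rough sketch: disable backups for the two mount points (backup=0)
pct set 100 -mp0 SeagateHDD320GB:subvol-100-disk-0,mp=/mnt/hdd,backup=0,size=327681M
pct set 100 -mp1 Tank_WDRED4TB:subvol-100-disk-0,mp=/mnt/hdd2,backup=0,size=3600G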

When starting a restore from PVE via GUI I get asked for the target storage. The field is prefilled in my case with "SeagateHDD320GB" and when I choose this storage location, I encounter the errors posted above in #1.

When I enter "local-lvm" as storage location, which actually makes more sense since the guest system was / should be located there, I get the following situation when the restore starts:

[screenshot of the restore task output]

The logical volume "vm-100-disk-0" is created on storage "local-lvm", which I think is correct, but the volumes for the mount points are also created on local-lvm, as vm-100-disk-1 and vm-100-disk-2. This obviously cannot work.

I'd expect the volumes for the mount points to be created on SeagateHDD320GB as subvol-100-disk-0 and on Tank_WDRED4TB as subvol-100-disk-0 respectively, and restored there!

Is there a way to influence the restore in a way that this can be achieved?
Or how can I exclude the mount points from the restore in the first place and restore them separately later on? Within the GUI this does not seem to be possible, and for the CLI proxmox-backup-client restore I couldn't find any hints about excluding.

Regards
 
1. make sure that your custom mapping works on the target system where you are restoring
How would I do this? The whole container/guest is gone due to the failed restore attempts, and I assume the mapping is done "inside" the guest system, which is exactly what I'm trying to restore....
 
is this an in-place restore (the container was running before on this exact host)? if so, then the mapping should still work ;) try restoring with pct restore (also allows you to put the mountpoints on the desired storages), if that fails, please post the full output here. thanks!
 
Yes, it's an in-place restore. Will give it a try with pct restore this evening.

Thanks so far for your help.
 
Update: still not successful. It's quite a hassle and getting slightly frustrating....

After finding a set of options and the (hopefully) correct syntax so that pct no longer complains and actually starts the restore process, it quits again with:
Error: error extracting archive - error at entry "bash": failed to create file "bash": EEXIST: File exists

What else do I need to do?

Here is my command and the corresponding output:
Code:
pct restore 100 PBS:backup/ct/100/2022-04-12T22:00:03Z --rootfs local-lvm:vm-100-disk-0,size=8G --mp0 SeagateHDD320GB:320,mp=/mnt/hdd,size=327681M --mp1 Tank_WDRED4TB:3600,mp=/mnt/hdd2 --ignore-unpack-errors=true
recovering backed-up configuration from 'PBS:backup/ct/100/2022-04-12T22:00:03Z'
restoring 'PBS:backup/ct/100/2022-04-12T22:00:03Z' now..
Error: error extracting archive - error at entry "bash": failed to create file "bash": EEXIST: File exists
unable to restore CT 100 - command 'lxc-usernsexec -m u:0:100000:1005 -m g:0:100000:1005 -m u:1005:1005:1 -m g:1005:1005:1 -m u:1006:101006:64530 -m g:1006:101006:64530 -- /usr/bin/proxmox-backup-client restore '--crypt-mode=none' ct/100/2022-04-12T22:00:03Z root.pxar /var/lib/lxc/100/rootfs --allow-existing-dirs --repository root@pam@192.x.y.z:tank4backup' failed: exit code 255
 
--rootfs local-lvm:vm-100-disk-0,size=8G this is wrong (restore into existing volume, which must be empty but isn't) - what you want is --rootfs local-lvm:8 (allocate and format new volume and restore into that)
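
to illustrate the difference (the 8G size is just an example):
Code:
# restore into an already existing (and empty) volume named vm-100-disk-0 - not what you want here
--rootfs local-lvm:vm-100-disk-0,size=8G
# allocate a new 8 GiB volume on local-lvm and restore into that
--rootfs local-lvm:8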
 
Still no success:
Code:
root@proxmox:~# pct restore 100 PBS:backup/ct/100/2022-04-12T22:00:03Z --rootfs local-lvm:8 --mp0 SeagateHDD320GB:320,mp=/mnt/hdd --mp1 Tank_WDRED4TB:3600,mp=/mnt/hdd2 --ignore-unpack-errors=true --unprivileged=1
recovering backed-up configuration from 'PBS:backup/ct/100/2022-04-12T22:00:03Z'
  Logical volume "vm-100-disk-0" created.
Creating filesystem with 2097152 4k blocks and 524288 inodes
Filesystem UUID: 891c8cc2-5487-4809-a01d-eb2ea3a36820
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
restoring 'PBS:backup/ct/100/2022-04-12T22:00:03Z' now..
Error: error extracting archive - error at entry "": failed to apply directory metadata: failed to set ownership: Operation not permitted (os error 1)
  Logical volume "vm-100-disk-0" successfully removed
unable to restore CT 100 - command 'lxc-usernsexec -m u:0:100000:1005 -m g:0:100000:1005 -m u:1005:1005:1 -m g:1005:1005:1 -m u:1006:101006:64530 -m g:1006:101006:64530 -- /usr/bin/proxmox-backup-client restore '--crypt-mode=none' ct/100/2022-04-12T22:00:03Z root.pxar /var/lib/lxc/100/rootfs --allow-existing-dirs --repository root@pam@192.168.0.220:tank4backup' failed: exit code 255

Actually, it's the same error message as in post #1.
 
so this means it fails to apply the metadata/chown the '/' directory - could you check in the backup archive what permissions it has? and also post the output of ls -lh /var/lib/lxc? thanks!
 
Code:
root@proxmox:~# ls -lh /var/lib/lxc
total 24K
drwxr-xr-x 3 root root 4.0K May 12 22:20 100
drwxr-xr-x 3 root root 4.0K Apr 26 20:29 100_bak
drwxr-xr-x 3 root root 4.0K Apr 20 22:02 101
drwxr-xr-x 4 root root 4.0K May  5 09:21 102
drwxr-xr-x 4 root root 4.0K May  5 09:21 103
drwxr-xr-x 3 root root 4.0K Apr 25 21:04 110

but
Code:
root@proxmox:~# ls -lh /var/lib/lxc/100
total 4.0K
drwxr-xr-x 12 100000 100000 4.0K May 14 10:25 rootfs

Code:
root@proxmox:/# ls -lh /SeagateHDD320GB/
total 34K
drwxr-xr-x 13 100000 100000 17 May 14 10:27 subvol-100-disk-0
drwxr-xr-x 21 root   root   21 Apr 20 22:00 subvol-101-disk-0
drwxr-xr-x 21 100000 100000 21 May  5 09:22 subvol-102-disk-0
drwxr-xr-x 21 100000 100000 21 May  5 09:22 subvol-103-disk-0

Code:
root@proxmox:/# ls -lh /Tank_WDRED4TB/
total 9.0K
drwxrwx--- 17 100033 100033 22 Jul  7  2021 subvol-100-disk-0
drwxr-xr-x  4 100000 100000  6 May 14 10:29 subvol-100-disk-1

Where/How to check the permissions of "/" in the backup archive?
 
you can mount it or use the catalog shell (using proxmox-backup-client). could you also post the output of 'pveversion -v' and the contents of your storage.cfg please?
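
for example, mounting should work roughly like this (snapshot/repository taken from your earlier task log, adapt as needed):
Code:
# mount the pxar archive via FUSE and check the ownership/permissions of its root directory
mkdir -p /mnt/pxar
proxmox-backup-client mount ct/100/2022-04-12T22:00:03Z root.pxar /mnt/pxar --repository root@pam@192.168.0.220:tank4backup
stat /mnt/pxar
# unmount again when done
umount /mnt/pxar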
 
Code:
root@proxmox:~# /usr/bin/proxmox-backup-client catalog shell ct/100/2022-04-12T22:00:03Z root.pxar --repository root@pam@192.168.0.220:tank4backup
Password for "root@pam": ************
Starting interactive shell
pxar:/ > ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var .pxarexclude-cli
pxar:/ > stat /
  File: /
  Size: 0
  Type: directory
Access: (755/drwxr-xr-x )  Uid: 0  Gid: 0
Modify: 2022-03-22 18:55:05
pxar:/ > stat /bin
  File: /bin
  Size: 0
  Type: directory
Access: (755/drwxr-xr-x )  Uid: 0  Gid: 0
Modify: 2022-03-17 06:43:32
pxar:/ > stat /mnt/hdd
  File: /mnt/hdd
  Size: 0
  Type: directory
Access: (770/drwxrwxrwx )  Uid: 1005  Gid: 1005
pxar:/ > stat /mnt/hdd2
  File: /mnt/hdd2
  Size: 0
  Type: directory
Access: (770/drwxrwxrwx )  Uid: 33  Gid: 33
Modify: 2021-07-07 23:15:01

Code:
root@proxmox:~# pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.30-2-pve)
pve-manager: 7.2-3 (running version: 7.2-3/c743d6c1)
pve-kernel-helper: 7.2-2
pve-kernel-5.15: 7.2-1
pve-kernel-5.13: 7.1-9
pve-kernel-5.11: 7.0-10
pve-kernel-5.4: 6.4-7
pve-kernel-5.3: 6.1-6
pve-kernel-5.15.30-2-pve: 5.15.30-3
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-3-pve: 5.13.19-7
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.4.143-1-pve: 5.4.143-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-8
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-6
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.2-2
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.12-1
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.1.8-1
proxmox-backup-file-restore: 2.1.8-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-10
pve-cluster: 7.2-1
pve-container: 4.2-1
pve-docs: 7.2-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.4-1
pve-ha-manager: 3.3-4
pve-i18n: 2.7-1
pve-qemu-kvm: 6.2.0-5
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-2
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.4-pve1

Code:
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

zfspool: SeagateHDD320GB
        pool SeagateHDD320GB
        content rootdir,images
        mountpoint /SeagateHDD320GB
        nodes proxmox
        sparse 0

zfspool: Tank_WDRED4TB
        pool Tank_WDRED4TB
        content rootdir,images
        mountpoint /Tank_WDRED4TB
        nodes proxmox
        sparse 0

pbs: PBS
        datastore tank4backup
        server 192.x.y.z
        content backup
        fingerprint xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx
        prune-backups keep-all=1
        username root@pam
 
I tried but can't reproduce this issue at all (with custom idmap, LVM-thin and ZFS target storages, ..). The 'rootfs' permissions/ownership look wrong in your output (it should be owned by host root:root), but even if I manually set it to the unprivileged user, it gets correctly reset at the end of the restore..

one more thing worth trying:
- download the pxar archive "root.pxar" (via the PBS web UI) onto your PVE system
- try extracting it as host root: mkdir target_host; pxar extract /path/to/root.pxar ./target_host --verbose (if there are any warnings/errors, please post them here in code tags)
- try extracting it in a user namespace: mkdir target_unpriv; lxc-usernsexec -m u:0:100000:1005 -m g:0:100000:1005 -m u:1005:1005:1 -m g:1005:1005:1 -m u:1006:101006:64530 -m g:1006:101006:64530 -- pxar extract /path/to/root.pxar ./target_unpriv --verbose (same)
 
okay - I might have found a culprit.. could you post the output of stat mnt in the catalog shell of the archive that fails to restore?
 
Sorry, I didn't find time yet to try your proposal from post #16...

Here is the output of stat mnt:

Code:
pxar:/ > stat /mnt
  File: /mnt
  Size: 0
  Type: directory
Access: (755/drwxr-xr-x )  Uid: 0  Gid: 0
Modify: 2020-06-16 18:21:30
pxar:/ >
 
- download the pxar archive "root.pxar" (via the PBS web UI) onto your PVE system
Again, I'm not sure how to do that. When downloading through the web UI (either from PBS or PVE), I get a zip file "root.pxar.didx.zip" downloaded to e.g. my laptop. How would I download directly to the PVE host, and get the root.pxar instead of the root.pxar.didx.zip?
 
Sorry, I didn't find time yet to try your proposal from post #16...

Here is the output of stat mnt:

Code:
pxar:/ > stat /mnt
  File: /mnt
  Size: 0
  Type: directory
Access: (755/drwxr-xr-x )  Uid: 0  Gid: 0
Modify: 2020-06-16 18:21:30
pxar:/ >
okay, that is good! I was afraid you'd run into a second related issue, but this looks like it's not the case.

can you try upgrading to libpve-common-perl 7.2-2 once that hits the repositories? it should fix the restore issue:

https://git.proxmox.com/?p=pve-common.git;a=commit;h=6647801cb374e75b48d488484893559ffcfb1203

if you feel comfortable, you can also apply the diff to /usr/share/perl5/PVE/Tools.pm (apt install --reinstall libpve-common-perl will revert to the old version if something goes wrong).
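
once it is available, the usual apt workflow should be enough to pick it up, e.g.:
Code:
# upgrade libpve-common-perl once 7.2-2 has hit the repository
apt update
apt install libpve-common-perl
# verify the installed version afterwards
dpkg -s libpve-common-perl | grep '^Version'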
 
