Trouble restoring CT - error extracting archive

phidauex

Member
Aug 23, 2020
Hi, I'm having trouble restoring an LXC container from my PBS, and I'm not sure how to interpret the errors. Hoping someone can take a look and either explain the error or suggest a fix. The key error seems to be Error: error extracting archive - error at entry "container-getty@1.service": failed to set ownership: Invalid argument (os error 22).

Details:
  • PVE 6.3-2 - recently updated
  • PBS 1.0-5 - recently updated
  • PBS is running in an LXC container on the main host, the datastore storage is a disk image located on my NAS.
  • Backup jobs, prune, verification, etc. all occur normally.
  • I restored a different CT in the same datastore to test restore functionality, and it restored correctly.
  • I am attempting to restore an unencrypted CT running Ubuntu - the original has been deleted. The weekly backup images all verified OK.
  • I can mount the archive using proxmox-backup-client mount ct/130/2020-12-03T06:30:36Z root.pxar /mnt/ct130data --repository proxmoxbackupserver:ds2 and browse the files.
I've tried restoring this particular CT (131) through the GUI and the command line, as both privileged and unprivileged, I've tried changing the ownership of the backup group, and I've tested other backup functions. Not sure what to try next...
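Since the archive mounts and browses fine, one thing worth checking (a diagnostic sketch, using the mountpoint from the list above) is whether any file in the backup is owned by a uid/gid that the custom idmap can't represent — that would make "failed to set ownership: Invalid argument" plausible:

```shell
# list all distinct uid/gid pairs stored in the mounted archive;
# the idmap below only covers container uids 0-64628 (plus gid 65534),
# so any id outside those ranges cannot be mapped on restore
find /mnt/ct130data -printf '%U %G\n' | sort -u
```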

Error on restore attempt (through the PVE GUI):

Code:
Logical volume "vm-131-disk-0" created.
mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks:    4096/8388608               done                          
Creating filesystem with 8388608 4k blocks and 2097152 inodes
Filesystem UUID: 09866d2c-430a-45f5-86dc-573355bc4fc2
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624

Allocating group tables:   0/256       done                          
Writing inode tables:   0/256       done                          
Creating journal (65536 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information:   0/256       done

Error: error extracting archive - error at entry "container-getty@1.service": failed to set ownership: Invalid argument (os error 22)
  Logical volume "vm-131-disk-0" successfully removed
TASK ERROR: unable to restore CT 131 - command 'lxc-usernsexec -m u:0:100000:98 -m g:0:100000:98 -m u:98:98:1 -m g:98:98:1 -m u:99:100099:64530 -m g:99:100099:64530 -m g:65534:165534:1 -- /usr/bin/proxmox-backup-client restore '--crypt-mode=none' ct/130/2020-12-03T06:30:36Z root.pxar /var/lib/lxc/131/rootfs --allow-existing-dirs --repository samley@pbs@proxmoxbackupserver.home:ds2' failed: exit code 255

Config file of the CT in question:
Code:
arch: amd64
cores: 3
hostname: nzbs
memory: 4096
mp0: /mnt/pve/MediaNAS,mp=/mnt/media
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=46:0F:C6:F4:0D:49,ip=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: local-lvm:vm-130-disk-0,size=32G
swap: 1024
unprivileged: 1
lxc.idmap: u 0 100000 98
lxc.idmap: g 0 100000 98
lxc.idmap: u 98 98 1
lxc.idmap: g 98 98 1
lxc.idmap: u 99 100099 64530
lxc.idmap: g 99 100099 64530
lxc.idmap: g 65534 165534 1
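For anyone reading along: the seven lxc.idmap lines above split the container's id space into three ranges (0-97 shifted up by 100000, uid/gid 98 passed through unchanged, and 99-64628 shifted up by 100000). As an illustration only (not part of the original config), the uid side of that mapping can be sketched as a small shell function:

```shell
# map_uid: translate a container uid to its host uid according to the
# three "u" idmap entries above (hypothetical helper for illustration)
map_uid() {
    local cuid=$1
    if   [ "$cuid" -lt 98 ]; then echo $((100000 + cuid))       # u 0 100000 98
    elif [ "$cuid" -eq 98 ]; then echo 98                       # u 98 98 1
    else                          echo $((100099 + cuid - 99))  # u 99 100099 64530
    fi
}

map_uid 0     # container root -> 100000
map_uid 98    # passthrough    -> 98
map_uid 1000  # regular user   -> 101000
```

Container uids above 64628 have no entry at all, which is exactly the situation where setting ownership fails with `Invalid argument`.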

Syslog:
Code:
Dec 03 21:19:05 proxmox1 pvedaemon[26400]: <root@pam> starting task UPID:proxmox1:00001294:001E66A4:5FC9B8B9:vzrestore:131:root@pam:
Dec 03 21:19:09 proxmox1 kernel: EXT4-fs (dm-9): mounted filesystem with ordered data mode. Opts: (null)
Dec 03 21:19:09 proxmox1 udisksd[705]: udisks_mount_get_mount_path: assertion 'mount->type == UDISKS_MOUNT_TYPE_FILESYSTEM' failed
Dec 03 21:19:09 proxmox1 systemd[27949]: var-lib-lxc-131-rootfs.mount: Succeeded.
Dec 03 21:19:09 proxmox1 systemd[1]: var-lib-lxc-131-rootfs.mount: Succeeded.
Dec 03 21:19:09 proxmox1 systemd[17094]: var-lib-lxc-131-rootfs.mount: Succeeded.
Dec 03 21:19:10 proxmox1 pvedaemon[4756]: unable to restore CT 131 - command 'lxc-usernsexec -m u:0:100000:98 -m g:0:100000:98 -m u:98:98:1 -m g:98:98:1 -m u:99:100099:64530 -m g:99:100099:64530 -m g:65534:165534:1 -- /usr/bin/proxmox-backup-client restore '--crypt-mode=none' ct/130/2020-12-03T06:30:36Z root.pxar /var/lib/lxc/131/rootfs --allow-existing-dirs --repository samley@pbs@proxmoxbackupserver.home:ds2' failed: exit code 255
Dec 03 21:19:10 proxmox1 pvedaemon[26400]: <root@pam> end task UPID:proxmox1:00001294:001E66A4:5FC9B8B9:vzrestore:131:root@pam: unable to restore CT 131 - command 'lxc-usernsexec -m u:0:100000:98 -m g:0:100000:98 -m u:98:98:1 -m g:98:98:1 -m u:99:100099:64530 -m g:99:100099:64530 -m g:65534:165534:1 -- /usr/bin/proxmox-backup-client restore '--crypt-mode=none' ct/130/2020-12-03T06:30:36Z root.pxar /var/lib/lxc/131/rootfs --allow-existing-dirs --repository samley@pbs@proxmoxbackupserver.home:ds2' failed: exit code 255

Appreciate any suggestions.

Thanks - Sam
 
Adding results of pveversion -v:

Code:
proxmox-ve: 6.3-1 (running kernel: 5.4.78-1-pve)
pve-manager: 6.3-2 (running version: 6.3-2/22f57405)
pve-kernel-5.4: 6.3-2
pve-kernel-helper: 6.3-2
pve-kernel-5.4.78-1-pve: 5.4.78-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-1
libpve-common-perl: 6.3-1
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.3-2
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.5-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-3
pve-cluster: 6.2-1
pve-container: 3.3-1
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-7
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-1
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1
 
hi,

lxc.idmap: u 0 100000 98
lxc.idmap: g 0 100000 98
lxc.idmap: u 98 98 1
lxc.idmap: g 98 98 1
lxc.idmap: u 99 100099 64530
lxc.idmap: g 99 100099 64530
lxc.idmap: g 65534 165534 1
is your /etc/subuid and /etc/subgid file correct?
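note that what matters here are root's /etc/subuid and /etc/subgid on the PVE *host* (that's what lxc-usernsexec checks), not the files inside the CT. a sketch of what the host-side files would need to allow for the mapping above (the exact entries are an assumption based on your config):

```shell
# check root's allowed id ranges on the PVE host
grep '^root' /etc/subuid /etc/subgid
# for your idmap, each file would need at least:
#   root:100000:65536   covers the 100000+ ranges, incl. g 65534 -> 165534
#   root:98:1           required for the direct "u 98 98 1" / "g 98 98 1" mapping
```

if the `root:98:1` line is missing, the user namespace can't be set up with that mapping and ownership errors like yours are expected.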
 
is your /etc/subuid and /etc/subgid file correct?

Hi, thanks for checking - I'm actually... not sure. I have only a limited idea of what the id mapping is doing, frankly; I followed some guides a while back to help give the user account in the CT the proper permissions to interact with the NFS share (mp0). My goal was to match the external user:group IDs on the NFS share to those in the CT.

This is the content of the subuid and subgid files in the disk image:

Code:
➜  more subgid
user1:100000:65536
user2:165536:65536
➜  more subuid
user1:100000:65536
user2:165536:65536

It worked in the sense that the user accounts could access the NFS share with the right matched permissions, and it survived multiple updates and restarts. I think where things went wrong was an attempt to move the CT's root disk image (using the options in the GUI): there was an error, and the CT stopped booting. That is when I started the restore process and began having the issues described.

This reminds me of a question - is it possible to modify the configuration file used for the restore? I.e., if you think something in the config file is preventing the restore from occurring, can you override the restore with a fixed config file?
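(Partially answering my own question after some reading: pct restore accepts most container options on the command line, and those take precedence over the config stored in the archive. A sketch, with the storage and snapshot names taken from the log above - the volume-id syntax is my assumption and flags may differ by PVE version:)

```shell
# restore CT 131 from the PBS snapshot, overriding stored options;
# volume-id format "storage:backup/ct/<vmid>/<timestamp>" is assumed here
pct restore 131 proxmoxbackupserver:backup/ct/130/2020-12-03T06:30:36Z \
    --storage local-lvm \
    --unprivileged 1
```

Raw `lxc.idmap:` lines can't be passed as flags, though; those would have to be re-added to /etc/pve/lxc/131.conf after the restore.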
 
