BUG: Error Restoring LXCs Backed Up from Privileged Containers

stsinc

Member
Apr 15, 2021
Hi,

I am very worried because I cannot restore several LXCs from backups stored on PBS.

Here is the error I get:
Code:
recovering backed-up configuration from 'pbs:backup/ct/109/2021-04-18T05:00:02Z'
Formatting '/mnt/p03ssd/images/109/vm-109-disk-0.raw', fmt=raw size=26843545600
Creating filesystem with 6553600 4k blocks and 1638400 inodes
Filesystem UUID: 38ab1316-6a4c-4572-956b-71eb460cd98f
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
restoring 'pbs:backup/ct/109/2021-04-18T05:00:02Z' now..
Error: error extracting archive - error at entry "": failed to apply directory metadata: failed to apply extended attributes: Operation not permitted (os error 1)
TASK ERROR: unable to restore CT 109 - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client restore '--crypt-mode=none' ct/109/2021-04-18T05:00:02Z root.pxar /var/lib/lxc/109/rootfs --allow-existing-dirs --repository root@pam@192.168.1.111:bkphdd' failed: exit code 255

I must mention I have been able to restore several other backups of the exact same kind of LXCs.
Please let me know how I can avoid these errors.

Best,
Stephen
 
I have given the issue further thought and noticed that the LXCs I could not restore were PRIVILEGED containers with access to an NFS share.
 
There was a bug with ACL handling - if you update the client on the restoring side to a version >= 1.0.13-1, restoring should work again. If you upgrade the client on the backup side to a version >= 1.0.13-1, the archives it creates will not be affected by the bug and are restorable with older versions as well.
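For reference, one way to check and upgrade the client on either side (assuming the package comes from the standard Proxmox repositories):
Code:
# show the installed client package version (needs to be >= 1.0.13-1)
dpkg -s proxmox-backup-client | grep Version

# pull in a newer client from the configured repositories
apt update
apt install --only-upgrade proxmox-backup-client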

Edit: I didn't read the error message fully, sorry. Your backup contains ACL entries, but your target storage does not support ACLs; you need to restore onto a storage that does. Could you post your storage.cfg and the backed-up LXC config?
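If it helps, a quick hedged test for ACL support on the restore target (the path below is the p03ssd dir storage from this thread; adjust it to wherever the container root actually ends up):
Code:
# try to set and read back an ACL on the target filesystem
touch /mnt/p03ssd/acl-test
setfacl -m u:nobody:r /mnt/p03ssd/acl-test && getfacl /mnt/p03ssd/acl-test
rm /mnt/p03ssd/acl-test
# "Operation not supported" from setfacl means the mount lacks ACL support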
 
Hi Fabian,
Thanks for the response -- as you can see, the whole thing prevents me from sleeping: it is 5 in NY...
  • Backed-up LXC config: I am in this situation because a node that already hosts containers cannot be joined to a cluster, so your recommendation was precisely:
    • to back up all containers,
    • then delete them,
    • then reinstall them once the node has been added to the cluster.
  • As a result, I do not have access to the backed-up LXC config any longer as I followed your advice and deleted it...
  • Here is the storage.cfg content:
Code:
dir: local
    path /var/lib/vz
    content vztmpl,backup,iso

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir

nfs: ds1bkp
    export /volume1/bkp
    path /mnt/pve/ds1bkp
    server 192.168.1.25
    content backup,rootdir,snippets
    prune-backups keep-all=1

dir: p03ssd
    path /mnt/p03ssd
    content iso,rootdir,images,snippets,backup,vztmpl
    prune-backups keep-all=1
    shared 0

nfs: ds1reg
    export /volume1/pmxreg
    path /mnt/pve/ds1reg
    server 192.168.1.25
    content rootdir,images,iso,backup,vztmpl,snippets
    options vers=4
    prune-backups keep-all=1

dir: p02bkp
    path /mnt/bkp
    content images,rootdir,iso,vztmpl,backup,snippets
    prune-backups keep-all=1
    shared 0

pbs: pbs
    datastore bkphdd
    server 192.168.1.111
    content backup
    fingerprint 13:d7:de:5e:b7:32:74:45:41:15:78:35:b2:ca:2d:d9:b3:5e:52:ab:2e:a8:dd:7a:77:c1:b6:56:09:f3:7a:68
    prune-backups keep-all=1
    username root@pam
 
The config is inside the backup - just use 'Show Configuration' in the GUI when viewing the backup listing.
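If the GUI is not handy, dumping the config from the CLI should also work, assuming the container config is stored under the usual archive name pct.conf (snapshot and repository taken from this thread):
Code:
# print the backed-up container config to stdout
# PBS_PASSWORD may need to be exported for non-interactive use
proxmox-backup-client restore ct/109/2021-04-18T05:00:02Z pct.conf - \
    --repository root@pam@192.168.1.111:bkphdd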
 
Also, if the container was privileged before (this should be indicated by a flag in the config), it possibly cannot be restored as unprivileged, so you might want to try a privileged restore as well.
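As a sketch of what that test could look like from the CLI (VMID, backup volume, and storage taken from this thread; I believe --unprivileged 0 forces a privileged restore, but treat that as an assumption):
Code:
# restore CT 109 as a privileged container onto the p03ssd storage
# (the VMID must be free, or add --force to overwrite the existing CT)
pct restore 109 pbs:backup/ct/109/2021-04-18T05:00:02Z --storage p03ssd --unprivileged 0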
 
Thank you for your quick response.
Here is the backup config content:
Code:
arch: amd64
cores: 6
features: mount=nfs,nesting=1
hostname: reg
memory: 8000
net0: name=eth0,bridge=vmbr0,gw=192.168.1.1,hwaddr=0E:A7:8B:26:42:BF,ip=192.168.1.46/24,type=veth
onboot: 1
ostype: debian
rootfs: local-lvm:vm-110-disk-0,size=25G
swap: 512
 
As for restoring the container without privileges, that would not do the trick: I NEED an NFS connection in the container.
 
Ah, you didn't mean privileged (as in container root == hypervisor root), but rather with special features enabled. Can you still try restoring it as privileged, as a test? Thanks.
 
Also, downloading the pxar archive and running 'pxar extract downloaded.pxar /target/path --verbose' on it might shed some light on which file/dir is causing the trouble. Adding --no-xattrs should give you access to your data, although if those xattrs are important they will of course be missing.
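Spelled out as commands (same placeholder names as above; how you obtain downloaded.pxar locally is up to you):
Code:
# verbose extract shows which entry fails when applying xattrs/ACLs
pxar extract downloaded.pxar /target/path --verbose

# skip extended attributes entirely just to get at the data
pxar extract downloaded.pxar /target/path --no-xattrs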