[SOLVED] Failed to apply acls

nakaori

Hi,

as I've stated in my other post, I currently cannot restore LXC snapshots. At first I thought this had to do with some options that were set in the pct.conf, but it has now also happened on another CT with standard options.

Code:
recovering backed-up configuration from 'backup:backup/ct/108/2021-03-30T23:30:13Z'
Using encryption key from file descriptor..
Fingerprint: c7:91-:93:89
  Logical volume "vm-108-disk-1" created.
Creating filesystem with 2097152 4k blocks and 524288 inodes
Filesystem UUID: 939c27dd-a175-48fb-a4f3-a1b3bd615218
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
  Logical volume "vm-108-disk-0" successfully removed
restoring 'backup:backup/ct/108/2021-03-30T23:30:13Z' now..
Using encryption key from file descriptor..
Fingerprint: c7:91-:93:89
Error: error extracting archive - error at entry "": failed to apply directory metadata: failed to apply acls: Error while restoring ACL - ACL invalid
  Logical volume "vm-108-disk-1" successfully removed
TASK ERROR: unable to restore CT 108 - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client restore '--crypt-mode=encrypt' '--keyfd=13' ct/108/2021-03-30T23:30:13Z root.pxar /var/lib/lxc/108/rootfs --allow-existing-dirs --repository sakura@pbs@backup:Sakura' failed: exit code 255

I have tried to restore with the flag acl=0, which leads to another error:

Code:
Error: error extracting archive - error at entry "system.journal": failed to apply acls: EOPNOTSUPP: Operation not supported on transport endpoint

I have also tried to restore both privileged and unprivileged, with no success.
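For reference, a restore with ACLs disabled on the rootfs, or as an unprivileged container, can be invoked roughly like this (a sketch; the storage name and rootfs size are placeholders):

Code:
# restore with ACLs disabled on the rootfs
pct restore 108 backup:backup/ct/108/2021-03-30T23:30:13Z --rootfs local-lvm:8,acl=0
# restore as an unprivileged container
pct restore 108 backup:backup/ct/108/2021-03-30T23:30:13Z --unprivileged 1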



Up to this point we have tried this on two different PBS and two different PVE servers. Every CT has the same issue.
 
are all the containers from/to ZFS? can you try with another storage type?
 
Server A is ZFS, server B is not. I also tried restoring to an ext4 storage on server A, which led to the same error.

One of my colleagues will later try the same thing on another PVE/PBS setup that has nothing to do with ours.
 
thanks. if you can reproduce this with a container that doesn't have sensitive data, it would be great to get a copy of the pxar file for debugging..
 
I just tried to create a new container which I could provide and backed it up. I tried encrypted and unencrypted and I can restore both of them.

I tried to restore files from the shell to a new container, which also resulted in this error:

Code:
pxar:/ > restore /GUESTS/subvol-122-disk-0/
Error: failed to apply directory metadata: failed to apply acls: Error while restoring ACL - ACL invalid
pxar:/ >
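The pxar shell above can be opened with the client's catalog shell; a sketch using the repository name from earlier in the thread (the snapshot timestamp is a placeholder):

Code:
# open the interactive restore shell for the container's root archive
proxmox-backup-client catalog shell ct/122/<timestamp> root.pxar --repository sakura@pbs@backup:Sakura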

---

Can I somehow restore these files and make the container bootable? I tried mounting the snapshot and rsyncing all the files over to a new container, but that led to a segfault on start.
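A sketch of that mount-and-rsync approach, using the snapshot and repository names from above; the target container ID is a placeholder, and rsync's -A/-X flags are needed to carry ACLs and xattrs over:

Code:
# mount the archive read-only via FUSE
proxmox-backup-client mount ct/108/2021-03-30T23:30:13Z root.pxar /mnt/pxar --repository sakura@pbs@backup:Sakura
# copy into the new container's rootfs, preserving ACLs (-A) and xattrs (-X)
rsync -aAXH /mnt/pxar/ /var/lib/lxc/109/rootfs/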


Edit: I just successfully restored a March 25th snapshot of a machine whose latest snapshot didn't work. So there must have been some change in PBS or the client that triggers this. @fabian
 
can you check which versions on the client side were used for the working snapshot, and which for the broken ones? /var/log/apt/history.log should help with finding out..
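for example, grepping the apt history directly (a sketch; older entries may have been rotated into compressed logs):

Code:
grep proxmox-backup-client /var/log/apt/history.log
zgrep proxmox-backup-client /var/log/apt/history.log.*.gz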
 
Code:
proxmox-backup-client:amd64 (1.0.8-1, 1.0.11-1)

This update was done on March 25th.
 
Hi @fabian, I have the same issue: I'm unable to restore any of my LXC containers. I was able to reproduce this issue with a new LXC container. How can I send you the pxar file?
 
great! download it using the web interface of PBS, then upload it somewhere public and send me the link (f.gruenbichler@proxmox.com) or post it here.
 
fix for actual issue with pxar creation: https://git.proxmox.com/?p=proxmox-backup.git;a=commit;h=9f40e09d0a53a2a6b89e0cb3f60d5e46bfd7dbde
workaround for restoring broken pxar archives: https://git.proxmox.com/?p=proxmox-backup.git;a=commit;h=79e58a903e7065221fa59decf8fbee9417183be4

the latter will indicate which directories are affected. if it's only /var/log/journal and its sub-directories, no action needs to be taken. likewise, if you know there are no unusual ACLs set on the affected paths (an ACL mask being more permissive than the owning GROUP_OBJ), no action needs to be taken. if you still have the original source, you can check with getfacl from the "acl" package.
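a sketch of such a check (the path is just an example; a mask entry wider than the group entry is what to look out for):

Code:
getfacl /var/log/journal
# a problematic ACL would look something like:
#   group::r-x    <- owning GROUP_OBJ
#   mask::rwx     <- mask more permissive than the group entry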

I'll reply here once a package with the fixes is available publicly.
 
There's an updated proxmox-backup-client package (version 1.0.13-1) on pvetest since Friday evening, and as of now also on pve-no-subscription. It has improved handling of ACL entries on restore and create. Can you try with that one?
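Upgrading just the client can be done along these lines (assuming the repository carrying the update is enabled):

Code:
apt update
apt install proxmox-backup-client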
 
I immediately updated the client on the Proxmox Backup Server and also on the PVE host.

Everything works fine:

Code:
recovering backed-up configuration from 'pbsbackp:backup/ct/300/2021-04-06T06:39:32Z'
  Logical volume "vm-301-disk-0" created.
Creating filesystem with 2097152 4k blocks and 524288 inodes
Filesystem UUID: 55e3963f-043b-4775-984c-8dd87f954012
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
restoring 'pbsbackp:backup/ct/300/2021-04-06T06:39:32Z' now..
Detected container architecture: amd64
merging backed-up and given configuration..
TASK OK

Thanks a lot!
 
I had the same problem this weekend when I was about to use PBS backups to move containers to a new server.

I can also confirm that everything is working as expected in 1.0.13-1.

Thank you!
 
Any idea when this will be released to the enterprise repo?

Edit: Or is it safe to download and install the single package from the pvetest repo?
 
Any idea when this will be released to the enterprise repo?
Thanks to the good feedback here and the non-intrusive change required to fix it, we deemed it safe to move it to the enterprise repository now.
Edit: Or is it safe to download and install the single package from the pvetest repo?
That may not always be the case, but in this specific case it would be totally safe to do so (though now unnecessary, as it has been moved to all repositories).
 
YAY! A successful restore. Thanks to everyone who helped identify and fix this quickly, and to Proxmox for the fast resolution.
 
