[SOLVED] Failed to apply acls

I had the same issue and the update fixed it.

I don't know whether it was related, but after the update and the subsequent reboot the network was down.
As I run Proxmox on a Dell server with iDRAC, I connected to the console via iDRAC and attempted to restart vmbr0; the response was "eno1 not found".
Taking the classic software-engineering approach, I rebooted once again and the error disappeared.
 
This error is rearing its head for me during a restore of a container.

I ran a backup from the source Proxmox server to PBS running on a secondary destination Proxmox server.

I am now trying to restore that backup on the destination Proxmox server from its locally installed PBS.

Because of a config issue (the container volume sizes were missing from the backed-up config), I have to restore via the CLI.

My CLI command is:


Code:
pct restore 3507 BACKUPS_PBS3:backup/ct/2507/2021-10-19T19:26:51Z --storage PM3-CNT-STORAGE --rootfs PM3-CNT-STORAGE:30 --mp1 PM3-CNT-STORAGE:subvol-3507-disk-1,mp=/home,backup=1,mountoptions=noatime,size=300G --mp2 PM3-CNT-STORAGE:subvol-3507-disk-2,mp=/var/lib/mysql-zfs_DATA,backup=1,mountoptions=noatime,size=15G --mp3 PM3-CNT-STORAGE:subvol-3507-disk-3,mp=/var/lib/mysql-zfs_LOGS,backup=1,mountoptions=noatime,size=15G

The error message is:

Code:
recovering backed-up configuration from 'BACKUPS_PBS3:backup/ct/2507/2021-10-19T19:26:51Z'
restoring 'BACKUPS_PBS3:backup/ct/2507/2021-10-19T19:26:51Z' now..
Error: error extracting archive - error at entry "": failed to apply directory metadata: failed to apply acls: EOPNOTSUPP: Operation not supported on transport endpoint
unable to restore CT 3507 - command '/usr/bin/proxmox-backup-client restore '--crypt-mode=none' ct/2507/2021-10-19T19:26:51Z root.pxar /var/lib/lxc/3507/rootfs --allow-existing-dirs --repository root@pam@localhost:pbs3.pve.fixd.eu-LOCAL' failed: exit code 255



Source Machine for Backup Details
pveversion --verbose
proxmox-ve: 6.4-1 (running kernel: 5.4.128-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-6
pve-kernel-helper: 6.4-6
pve-kernel-5.4.140-1-pve: 5.4.140-1
pve-kernel-5.4.128-1-pve: 5.4.128-2
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.98-1-pve: 5.4.98-1
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.4.55-1-pve: 5.4.55-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 8.6-2
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.13-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-1
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.5-pve1~bpo10+1




Destination Machine for Backup Details


pveversion --verbose
proxmox-ve: 6.4-1 (running kernel: 5.4.143-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-helper: 6.4-8
pve-kernel-5.4: 6.4-7
pve-kernel-5.4.143-1-pve: 5.4.143-1
pve-kernel-5.4.140-1-pve: 5.4.140-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve4~bpo10
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.22-pve1~bpo10+1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.13-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.6-pve1~bpo10+1
 
That seems to be a different issue - you have ACLs enabled, but your target does not support them.

also, this:

Code:
pct restore 3507 BACKUPS_PBS3:backup/ct/2507/2021-10-19T19:26:51Z --storage PM3-CNT-STORAGE --rootfs PM3-CNT-STORAGE:30 --mp1 PM3-CNT-STORAGE:subvol-3507-disk-1,mp=/home,backup=1,mountoptions=noatime,size=300G --mp2 PM3-CNT-STORAGE:subvol-3507-disk-2,mp=/var/lib/mysql-zfs_DATA,backup=1,mountoptions=noatime,size=15G --mp3 PM3-CNT-STORAGE:subvol-3507-disk-3,mp=/var/lib/mysql-zfs_LOGS,backup=1,mountoptions=noatime,size=15G

seems wrong (I guess you took the volume IDs from the backup?); you probably want the following:

Code:
pct restore 3507 BACKUPS_PBS3:backup/ct/2507/2021-10-19T19:26:51Z --storage PM3-CNT-STORAGE --rootfs PM3-CNT-STORAGE:30 --mp1 PM3-CNT-STORAGE:300,mp=/home,backup=1,mountoptions=noatime,size=300G --mp2 PM3-CNT-STORAGE:15,mp=/var/lib/mysql-zfs_DATA,backup=1,mountoptions=noatime,size=15G --mp3 PM3-CNT-STORAGE:15,mp=/var/lib/mysql-zfs_LOGS,backup=1,mountoptions=noatime,size=15G

to freshly create all mountpoint volumes (with their respective sizes).
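It may also be worth checking whether the dataset backing PM3-CNT-STORAGE actually has ACL support enabled; a minimal sketch for a ZFS-backed storage (the dataset name below is only a placeholder, use the actual pool/dataset behind your storage):

Code:
# check whether POSIX ACLs (and xattrs) are enabled on the target dataset
zfs get acltype,xattr PM3-CNT-STORAGE/subvol-3507-disk-0
# if acltype is "off", restoring ACLs typically fails with EOPNOTSUPP; enabling it usually helps
zfs set acltype=posixacl PM3-CNT-STORAGE/subvol-3507-disk-0
zfs set xattr=sa PM3-CNT-STORAGE/subvol-3507-disk-0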
 
Hey fabian,

Thanks for the reply. I was able to get it to restore with the command you gave. I really struggled to figure out the CLI syntax for the restore command for quite a while before I posted.

The error message regarding ACLs also went away with your command.

From what I could tell, my original command was doing something really odd and undefined: it looked like it was restoring my mountpoint volumes to the correct location, but the rootfs went to my ZFS root partition in the default data folder. Not really sure exactly why.
 
Looks like this happened again when I tried to restore one of my containers. I had to use an older backup to get it to restore.

Below are my current package versions:

Node 'pveit1' (Proxmox Virtual Environment 7.2-11):
CPU(s): 12 x Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz (1 Socket)
Kernel Version: Linux 5.15.53-1-pve #1 SMP PVE 5.15.53-1 (Fri, 26 Aug 2022 16:53:52 +0200)
PVE Manager Version: pve-manager/7.2-11/b76d3178
Repository Status: Proxmox VE updates Non production-ready repository enabled!
CPU usage: 12.46% of 12 CPU(s)
IO delay: 28.91%
Load average: 12.43, 20.93, 13.07
RAM usage: 36.91% (11.49 GiB of 31.14 GiB)
KSM sharing: 0 B
/ HD space: 33.00% (31.00 GiB of 93.93 GiB)
SWAP usage: 0.00% (0 B of 8.00 GiB)

pveversion --verbose
proxmox-ve: 7.2-1 (running kernel: 5.15.53-1-pve)
pve-manager: 7.2-11 (running version: 7.2-11/b76d3178)
pve-kernel-helper: 7.2-12
pve-kernel-5.15: 7.2-10
pve-kernel-5.13: 7.1-9
pve-kernel-5.4: 6.4-4
pve-kernel-5.15.53-1-pve: 5.15.53-1
pve-kernel-5.15.39-4-pve: 5.15.39-4
pve-kernel-5.15.39-3-pve: 5.15.39-3
pve-kernel-5.15.39-2-pve: 5.15.39-2
pve-kernel-5.15.39-1-pve: 5.15.39-1
pve-kernel-5.15.35-3-pve: 5.15.35-6
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.4.124-1-pve: 5.4.124-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: not correctly installed
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-3
libpve-storage-perl: 7.2-8
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.6-1
proxmox-backup-file-restore: 2.2.6-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-2
pve-docs: 7.2-2
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-6
pve-firmware: 3.5-1
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 7.0.0-3
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.5-pve1
 
I think the issue was that I was trying to do mountpoints inside the container, which didn't work. I wish the backup process would give some kind of warning; if it can't be done, then no worries, I just have to pay closer attention to this in the future. I was able to restore from older backups taken before I was messing with the mountpoints.

I've done mountpoints in other containers before without issue (see the sketch below); this time I was trying a different way, which didn't work. Not worried about it now, as I've moved my apps over to a regular VM.
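For reference, what has worked for me is letting PVE manage the mount point rather than mounting things from inside the container; a minimal sketch (the CT ID, storage name, size and path are just examples):

Code:
# add a PVE-managed mount point to an existing container: <storage>:<size in GB>
pct set 3507 -mp0 PM3-CNT-STORAGE:10,mp=/mnt/data,backup=1
# verify the new mount point shows up in the container config
pct config 3507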
 
Sorry to resurrect this thread. I thought I had the same problem as the OP, but it might be closer to @Darkk's comment.

I'm trying to restore an LXC via PBS on PVE, both running the latest versions, and I get this error.


Code:
recovering backed-up configuration from 'pbs-001-datastore:backup/ct/902/2025-01-20T14:30:14Z'
Using encryption key from file descriptor.. 
Fingerprint: xx:xx:xx:xx:xx:xx:xx:xx 
restoring 'pbs-001-datastore:backup/ct/902/2025-01-20T14:30:14Z' now..
Using encryption key from file descriptor.. 
Fingerprint: xx:xx:xx:xx:xx:xx:xx:xx   
Warning: "/var/log/journal/b0354fxxxe44f05b89a9xxx90exxx38" - ACL invalid, attempting restore anyway..
Error: error extracting archive - encountered unexpected error during extraction: error at entry "": failed to leave directory: failed to apply directory metadata: failed to apply acls: EINVAL: Invalid argument
TASK ERROR: unable to restore CT 902 - command 'lxc-usernsexec -m u:0:xxxx:6xxx6 -m g:0:xxxx:6xxx6 -- /usr/bin/proxmox-backup-client restore '--crypt-mode=encrypt' '--keyfd=14' ct/902/2025-01-20T14:30:14Z root.pxar /var/lib/lxc/902/rootfs --allow-existing-dirs --repository root@pam@10.x.x.xxx:pbs-001-datastore' failed: exit code 255

I've tried restoring to a different machine, to no avail.

Here is a snippet of the config.


Code:
#<div align='center'><a href='https://Helper-Scripts.com' target='_blank' rel='noopener noreferrer'><img src='https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/images/logo-81x112.png'/></a>
#
#  # Dockge LXC
#
#  <a href='https://ko-fi.com/proxmoxhelperscripts'><img src='https://img.shields.io/badge/&#x2615;-Buy me a coffee-blue' /></a>
#  </div>
# Allow cgroup access
# Pass through device files
arch: amd64
cores: 4
features: nesting=1
hostname: xx-xxx
memory: 8192
net0: name=eth0,bridge=vmbr0,gw=10.x.x.xxx,hwaddr=BC:xx:xx:xx:xx:03,ip=10.x.x.xxx/32,tag=xx,type=veth
onboot: 1
ostype: ubuntu
rootfs: mass:subvol-902-disk-0,size=160G
swap: 4096
tags: dockge
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.cgroup2.devices.allow: c 188:* rwm
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/serial/by-id  dev/serial/by-id  none bind,optional,create=dir
lxc.mount.entry: /dev/ttyUSB0       dev/ttyUSB0       none bind,optional,create=file
lxc.mount.entry: /dev/ttyUSB1       dev/ttyUSB1       none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM0       dev/ttyACM0       none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM1       dev/ttyACM1       none bind,optional,create=file
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 235:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.idmap: u 0 xxxx 6xxx6
lxc.idmap: g 0 xxxx 6xxx6

The restore commands involved (a variant I tried with '--ignore-acls', and the original one from the failing task):

Code:
proxmox-backup-client restore '--crypt-mode=encrypt' '--keyfd=14' '--ignore-acls' ct/902/2025-01-20T14:30:14Z root.pxar /var/lib/lxc/902/rootfs --allow-existing-dirs --repository root@pam@10.x.x.xxx:pbs-001-datastore

lxc-usernsexec -m u:0:xxxx:6xxx6 -m g:0:xxxx:6xxx6 -- /usr/bin/proxmox-backup-client restore '--crypt-mode=encrypt' '--keyfd=14' ct/902/2025-01-20T14:30:14Z root.pxar /var/lib/lxc/902/rootfs --allow-existing-dirs --repository root@pam@10.x.x.xxx:pbs-001-datastore

I am downloading its filesystem now as a last resort, but this thing had about a dozen containers all configured and ready to go. A shame the backups aren't just working out of the gate.

Any assistance would be appreciated.
 
1. Docker in LXC has a lot of problems and is thus not supported.
2. Could you dump the ACLs of the problematic path?
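For example, something like this from inside the running container (or against its mounted rootfs); getfacl is in the 'acl' package, and the path is the one from the warning in your log:

Code:
# dump the ACLs of the directory mentioned in the restore warning
getfacl -p /var/log/journal/b0354fxxxe44f05b89a9xxx90exxx38
# the final error is for entry "", i.e. the archive root, so its ACLs are relevant too
getfacl -p /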
 
Thanks Fabian. In the meantime I had changed my mind, thanks to a mate of mine who basically said the same thing: LXCs should be for single-purpose applications, and what I was doing was a bit beyond that scope. I have opted to completely rebuild the instance as a virtual machine instead of an LXC.

I'll take the recovered Docker filesystem, place it on top of the new one, and cross my fingers.