[SOLVED] Failed to apply acls

I had the same issue and the update fixed it.

I don't know whether it was related, but after the update and subsequent reboot, the network was down.
Since I run Proxmox on Dell hardware with iDRAC, I connected to the console via iDRAC and attempted to restart vmbr0; the response was "eno1 not found".
Taking the classic software-engineering approach, I rebooted once more and the error disappeared.
 
This error is rearing its head for me during a restore of a container.

I ran a backup from the source Proxmox server to PBS running on a secondary destination Proxmox server.

I am trying to restore the backup from the destination Proxmox server from its locally installed PBS.

Due to a config issue where the container volume size was not present in the config, I am having to restore via CLI.

My cli command is:


Code:
pct restore 3507 BACKUPS_PBS3:backup/ct/2507/2021-10-19T19:26:51Z --storage PM3-CNT-STORAGE --rootfs PM3-CNT-STORAGE:30 --mp1 PM3-CNT-STORAGE:subvol-3507-disk-1,mp=/home,backup=1,mountoptions=noatime,size=300G --mp2 PM3-CNT-STORAGE:subvol-3507-disk-2,mp=/var/lib/mysql-zfs_DATA,backup=1,mountoptions=noatime,size=15G --mp3 PM3-CNT-STORAGE:subvol-3507-disk-3,mp=/var/lib/mysql-zfs_LOGS,backup=1,mountoptions=noatime,size=15G

The error message is:

Code:
recovering backed-up configuration from 'BACKUPS_PBS3:backup/ct/2507/2021-10-19T19:26:51Z'
restoring 'BACKUPS_PBS3:backup/ct/2507/2021-10-19T19:26:51Z' now..
Error: error extracting archive - error at entry "": failed to apply directory metadata: failed to apply acls: EOPNOTSUPP: Operation not supported on transport endpoint
unable to restore CT 3507 - command '/usr/bin/proxmox-backup-client restore '--crypt-mode=none' ct/2507/2021-10-19T19:26:51Z root.pxar /var/lib/lxc/3507/rootfs --allow-existing-dirs --repository root@pam@localhost:pbs3.pve.fixd.eu-LOCAL' failed: exit code 255
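For anyone else hitting this, it can help to confirm whether the restore target's filesystem actually accepts POSIX ACLs before retrying. The snippet below is not part of pct or proxmox-backup-client; it's just an illustrative probe (the helper name and the throwaway-file approach are my own) that attempts the same kind of ACL write the restore performs and reports whether the kernel returns EOPNOTSUPP:

```python
import errno
import os
import struct
import tempfile

def supports_posix_acls(dirpath):
    """Probe whether the filesystem holding `dirpath` accepts POSIX ACLs.

    Builds a minimal valid ACL blob (xattr format version 2: user::rw-,
    group::r--, other::r--) and tries to attach it as the
    system.posix_acl_access extended attribute on a throwaway file.
    EOPNOTSUPP here means a restore to this filesystem would fail the
    same way as in the log above.
    """
    acl = struct.pack("<I", 2)                       # header: version 2
    acl += struct.pack("<HHI", 0x01, 6, 0xFFFFFFFF)  # ACL_USER_OBJ, rw-
    acl += struct.pack("<HHI", 0x04, 4, 0xFFFFFFFF)  # ACL_GROUP_OBJ, r--
    acl += struct.pack("<HHI", 0x20, 4, 0xFFFFFFFF)  # ACL_OTHER, r--

    fd, path = tempfile.mkstemp(dir=dirpath)
    try:
        os.setxattr(path, "system.posix_acl_access", acl)
        return True
    except OSError as err:
        # ENOTSUP and EOPNOTSUPP share the same value on Linux (95)
        if err.errno == errno.ENOTSUP:
            return False
        raise
    finally:
        os.close(fd)
        os.unlink(path)

print(supports_posix_acls("/tmp"))
```

Run it with the target directory (e.g. the mounted subvol path) instead of /tmp to test the storage the restore actually writes to.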



Source Machine for Backup Details
pveversion --verbose
proxmox-ve: 6.4-1 (running kernel: 5.4.128-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-6
pve-kernel-helper: 6.4-6
pve-kernel-5.4.140-1-pve: 5.4.140-1
pve-kernel-5.4.128-1-pve: 5.4.128-2
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.98-1-pve: 5.4.98-1
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.4.55-1-pve: 5.4.55-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 8.6-2
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.13-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-1
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.5-pve1~bpo10+1




Destination Machine for Backup Details


pveversion --verbose
proxmox-ve: 6.4-1 (running kernel: 5.4.143-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-helper: 6.4-8
pve-kernel-5.4: 6.4-7
pve-kernel-5.4.143-1-pve: 5.4.143-1
pve-kernel-5.4.140-1-pve: 5.4.140-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve4~bpo10
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.22-pve1~bpo10+1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.13-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.6-pve1~bpo10+1
 
That seems to be a different issue - you have ACLs enabled, but your target does not support them.

also, this:

Code:
pct restore 3507 BACKUPS_PBS3:backup/ct/2507/2021-10-19T19:26:51Z --storage PM3-CNT-STORAGE --rootfs PM3-CNT-STORAGE:30 --mp1 PM3-CNT-STORAGE:subvol-3507-disk-1,mp=/home,backup=1,mountoptions=noatime,size=300G --mp2 PM3-CNT-STORAGE:subvol-3507-disk-2,mp=/var/lib/mysql-zfs_DATA,backup=1,mountoptions=noatime,size=15G --mp3 PM3-CNT-STORAGE:subvol-3507-disk-3,mp=/var/lib/mysql-zfs_LOGS,backup=1,mountoptions=noatime,size=15G

seems wrong (I guess you took the volume IDs from the backup?). You probably want the following:

Code:
pct restore 3507 BACKUPS_PBS3:backup/ct/2507/2021-10-19T19:26:51Z --storage PM3-CNT-STORAGE --rootfs PM3-CNT-STORAGE:30 --mp1 PM3-CNT-STORAGE:300,mp=/home,backup=1,mountoptions=noatime,size=300G --mp2 PM3-CNT-STORAGE:15,mp=/var/lib/mysql-zfs_DATA,backup=1,mountoptions=noatime,size=15G --mp3 PM3-CNT-STORAGE:15,mp=/var/lib/mysql-zfs_LOGS,backup=1,mountoptions=noatime,size=15G

to freshly create all mountpoint volumes (with their respective sizes).
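A side note for the ACL part of the error: when the target storage is a ZFS dataset, "failed to apply acls" typically points at the dataset's `acltype` property being `off`. The commands below are a sketch of how one might check and enable POSIX ACL support; the dataset name `tank/subvol-3507-disk-0` is a placeholder, substitute your own pool/subvol path:

```shell
# Check whether the dataset accepts POSIX ACLs at all
# (placeholder dataset name -- use your own pool/subvol)
zfs get acltype,xattr tank/subvol-3507-disk-0

# 'acltype  off' would explain the EOPNOTSUPP; enable POSIX ACLs.
# xattr=sa stores them in the dnode, which keeps ACL access fast.
zfs set acltype=posixacl xattr=sa tank/subvol-3507-disk-0
```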
 
Hey fabian,

Thanks for the reply. I was able to get it to restore with your given command. I had really struggled to figure out the CLI for the restore command for quite a while before I posted.

The error message regarding ACLs actually resolved too with your command.

From what I could tell, my original command was producing some really weird, undefined behavior: it looked like it was restoring my mp volumes to the correct location, but the rootfs went to my ZFS root partition in the default data folder. Not really sure exactly why.
 
Looks like this happened again when I tried to restore one of my containers. I had to use an older backup to get it to restore.

Below are my current package versions:

proxmox-ve: 7.2-1 (running kernel: 5.15.53-1-pve)
pve-manager: 7.2-11 (running version: 7.2-11/b76d3178)
pve-kernel-helper: 7.2-12
pve-kernel-5.15: 7.2-10
pve-kernel-5.13: 7.1-9
pve-kernel-5.4: 6.4-4
pve-kernel-5.15.53-1-pve: 5.15.53-1
pve-kernel-5.15.39-4-pve: 5.15.39-4
pve-kernel-5.15.39-3-pve: 5.15.39-3
pve-kernel-5.15.39-2-pve: 5.15.39-2
pve-kernel-5.15.39-1-pve: 5.15.39-1
pve-kernel-5.15.35-3-pve: 5.15.35-6
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.4.124-1-pve: 5.4.124-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: not correctly installed
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-3
libpve-storage-perl: 7.2-8
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.6-1
proxmox-backup-file-restore: 2.2.6-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-2
pve-docs: 7.2-2
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-6
pve-firmware: 3.5-1
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 7.0.0-3
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.5-pve1
 

I think the issue was that I was trying to set up mount points inside the container, which didn't work. I wish the backup process would give some kind of warning; if it can't be done, no worries, I just have to pay closer attention to this in the future. I was able to restore from older backups taken before I started messing with the mount points.

I've set up mount points in other containers before without issue. This time I tried a different approach, which didn't work. I'm not worried about it now, as I've moved my apps over to a regular VM.
 
