[SOLVED] Proxmox VE 9.0 BETA LXC Docker not working

quanto11

Hey Guys,

I updated to VE 9.0 Beta, and since then my LXC Docker apps haven't been running. Every container shows the same error message:

Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "mqueue" to rootfs at "/dev/mqueue": change mount propagation through procfd: resolving path inside rootfs failed: lstat /var/lib/docker/overlay2/6c090805f6f9f15458ec448455ba68df8987c3779a68827306e6b168883930de/merged//dev/mqueue: permission denied: unknown

Code:
proxmox-ve: 9.0.0 (running kernel: 6.14.8-1-pve)
pve-manager: 9.0.0~8 (running version: 9.0.0~8/08dc1724dedced56)
proxmox-kernel-helper: 9.0.0
proxmox-kernel-6.14.8-1-pve-signed: 6.14.8-1
proxmox-kernel-6.14: 6.14.8-1
proxmox-kernel-6.14.8-1-bpo12-pve-signed: 6.14.8-1~bpo12+1
proxmox-kernel-6.14.5-1-bpo12-pve-signed: 6.14.5-1~bpo12+1
proxmox-kernel-6.11.11-2-pve-signed: 6.11.11-2
proxmox-kernel-6.11: 6.11.11-2
proxmox-kernel-6.8.12-12-pve-signed: 6.8.12-12
proxmox-kernel-6.8: 6.8.12-12
pve-kernel-6.2.16-3-pve: 6.2.16-3
amd64-microcode: 3.20250311.1
ceph-fuse: 19.2.2-pve2
corosync: 3.1.9-pve2
criu: 4.1-1
ifupdown2: 3.3.0-1+pmx7
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.0
libproxmox-backup-qemu0: 2.0.1
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.2
libpve-apiclient-perl: 3.4.0
libpve-cluster-api-perl: 9.0.2
libpve-cluster-perl: 9.0.2
libpve-common-perl: 9.0.6
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.1
libpve-network-perl: 1.1.0
libpve-rs-perl: 0.10.4
libpve-storage-perl: 9.0.6
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2
lxc-pve: 6.0.4-2
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-3
proxmox-backup-client: 4.0.2-1
proxmox-backup-file-restore: 4.0.2-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.0.0
proxmox-kernel-helper: 9.0.0
proxmox-mail-forward: 1.0.1
proxmox-mini-journalreader: 1.6
proxmox-widget-toolkit: 5.0.2
pve-cluster: 9.0.2
pve-container: 6.0.2
pve-docs: 9.0.4
pve-edk2-firmware: 4.2025.02-4
pve-esxi-import-tools: 1.0.0
pve-firewall: 6.0.2
pve-firmware: 3.16-3
pve-ha-manager: 5.0.1
pve-i18n: 3.5.0
pve-qemu-kvm: 10.0.2-4
pve-xtermjs: 5.5.0-2
qemu-server: 9.0.4
smartmontools: 7.4-pve1
spiceterm: 3.4.0
swtpm: 0.8.0+pve2
vncterm: 1.9.0
zfsutils-linux: 2.3.3-pve1

arch: amd64
cores: 2
features: nesting=1
hostname: test
memory: 512
nameserver: 192.168.1.254
net0: name=eth0,bridge=vmbr0,hwaddr=BC:24:11:4D:81:B8,ip=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: VMs:vm-107-disk-1,size=40G
searchdomain: test.local
swap: 512
unprivileged: 1

[ 5237.476109] audit: type=1400 audit(1752866920.816:2225): apparmor="DENIED" operation="getattr" class="posix_mqueue" profile="/usr/bin/lxc-start" name="/" pid=98924 comm="vgs" requested="getattr" denied="getattr" class="posix_mqueue" fsuid=0 ouid=0
[ 5237.539888] audit: type=1400 audit(1752866920.880:2226): apparmor="DENIED" operation="getattr" class="posix_mqueue" profile="/usr/bin/lxc-start" name="/" pid=98925 comm="lvs" requested="getattr" denied="getattr" class="posix_mqueue" fsuid=0 ouid=0
[ 5237.592420] EXT4-fs (dm-17): mounted filesystem 7d2bcc93-c908-4086-872e-4caa1149b527 r/w with ordered data mode. Quota mode: none.
[ 5237.704927] audit: type=1400 audit(1752866921.045:2227): apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-107_</var/lib/lxc>" pid=98938 comm="apparmor_parser"
[ 5238.157681] vmbr1: port 3(veth107i0) entered blocking state
[ 5238.157686] vmbr1: port 3(veth107i0) entered disabled state
[ 5238.157706] veth107i0: entered allmulticast mode
[ 5238.157744] veth107i0: entered promiscuous mode
[ 5238.196074] eth0: renamed from vethqv1HGv
[ 5238.553197] audit: type=1400 audit(1752866921.893:2228): apparmor="STATUS" operation="profile_load" label="lxc-107_</var/lib/lxc>//&:lxc-107_<-var-lib-lxc>:unconfined" name="balena-etcher" pid=99139 comm="apparmor_parser"
[ 5238.553372] audit: type=1400 audit(1752866921.893:2229): apparmor="STATUS" operation="profile_load" label="lxc-107_</var/lib/lxc>//&:lxc-107_<-var-lib-lxc>:unconfined" name="Discord" pid=99135 comm="apparmor_parser"
[ 5238.553706] audit: type=1400 audit(1752866921.894:2230): apparmor="STATUS" operation="profile_load" label="lxc-107_</var/lib/lxc>//&:lxc-107_<-var-lib-lxc>:unconfined" name="QtWebEngineProcess" pid=99138 comm="apparmor_parser"
[ 5238.553727] audit: type=1400 audit(1752866921.894:2231): apparmor="STATUS" operation="profile_load" label="lxc-107_</var/lib/lxc>//&:lxc-107_<-var-lib-lxc>:unconfined" name=4D6F6E676F444220436F6D70617373 pid=99137 comm="apparmor_parser"
[ 5238.553959] audit: type=1400 audit(1752866921.894:2232): apparmor="STATUS" operation="profile_load" label="lxc-107_</var/lib/lxc>//&:lxc-107_<-var-lib-lxc>:unconfined" name="brave" pid=99140 comm="apparmor_parser"
[ 5238.554125] audit: type=1400 audit(1752866921.894:2233): apparmor="STATUS" operation="profile_load" label="lxc-107_</var/lib/lxc>//&:lxc-107_<-var-lib-lxc>:unconfined" name="1password" pid=99134 comm="apparmor_parser"
[ 5238.555018] audit: type=1400 audit(1752866921.895:2234): apparmor="STATUS" operation="profile_load" label="lxc-107_</var/lib/lxc>//&:lxc-107_<-var-lib-lxc>:unconfined" name="buildah" pid=99141 comm="apparmor_parser"
[ 5238.563019] vmbr1: port 3(veth107i0) entered blocking state
[ 5238.563024] vmbr1: port 3(veth107i0) entered forwarding state
[ 5240.160731] overlayfs: fs on '/var/lib/docker/overlay2/check-overlayfs-support2644685483/lower2' does not support file handles, falling back to xino=off.
[ 5240.179910] overlayfs: fs on '/var/lib/docker/overlay2/metacopy-check2468215850/l1' does not support file handles, falling back to xino=off.
[ 5240.560755] overlayfs: fs on '/var/lib/docker/overlay2/opaque-bug-check2248299780/l2' does not support file handles, falling back to xino=off.

Does anyone have a solution for this?

PS: I'm also not able to restore a backup.

Error: error extracting archive - encountered unexpected error during extraction: error at entry "perl5.38.2": failed to extract hardlink: EINVAL: Invalid argument
Logical volume "vm-107-disk-0" successfully removed.
TASK ERROR: unable to restore CT 107 - command 'lxc-usernsexec -m u:0:100000:65536 -m g:0:100000:65536 -- /usr/bin/proxmox-backup-client restore '--crypt-mode=none' ct/107/2025-07-17T20:34:45Z root.pxar /var/lib/lxc/107/rootfs --allow-existing-dirs --repository pbs@pbs@192.168.1.1:Backup' failed: exit code 255
 
Yes, migrate to a VM instead of an LXC for Docker hosting. Docker inside LXC is known to break after updates from time to time, like in this older thread from the German forum:

The reason is that Docker and LXC rely on the same low-level kernel and system machinery, which can conflict with each other. An update might introduce changes with previously unknown consequences.

There is a reason for the following recommendation in the official Proxmox VE docs:

If you want to run application containers, for example, Docker images, it is recommended that you run them inside a Proxmox QEMU VM. This will give you all the advantages of application containerization, while also providing the benefits that VMs offer, such as strong isolation from the host and the ability to live-migrate, which otherwise isn’t possible with containers.
https://pve.proxmox.com/wiki/Linux_Container

If you use a lightweight distribution like Alpine or Debian and host all your Docker instances from one VM, you won't have much overhead compared to LXC.
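For example, a minimal sketch of creating such a VM from the host shell (VMID 200, the "local-lvm" storage and the ISO filename are assumptions, adjust them to your setup):

Code:
# create a small Debian VM to use as a dedicated Docker host
qm create 200 --name docker-host --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:32 --cdrom local:iso/debian-13-netinst.iso
qm start 200

After installing Debian in the VM, install Docker inside it as usual.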
 
Last edited:
It's related to AppArmor not working as expected for /dev/mqueue.

I had the same problem this morning, and with AppArmor disabled it works fine as a temporary solution, buying some time to figure out a proper fix before enabling AppArmor again.

Even though the problem shows up when starting a Docker container, it also triggers if you just run file /dev/mqueue inside the LXC container, which returns a permission denied error.
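For reference, one way to disable AppArmor host-wide as a stopgap (a sketch, not necessarily the exact steps used here; this removes a security layer for all guests, so treat it as temporary only):

Code:
# on the PVE host: unload all currently loaded AppArmor profiles
aa-teardown
# keep the service from loading profiles again on the next boot
systemctl disable apparmor
# to re-enable later:
systemctl enable --now apparmor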

The /dev/mqueue entry shows up when you do an "ls -l /dev", displayed like this:

Code:
d?????????  ? ?    ?          ?            ? mqueue

It seems related to items being discussed here: https://gitlab.com/apparmor/apparmor/-/issues/362 but I am wondering how much is still relevant for Trixie.
 
It's related to apparmor not working as expected for /dev/mqueue
<snip>
It seems related to items being discussed here: https://gitlab.com/apparmor/apparmor/-/issues/362 but I am wondering how much is still relevant for Trixie.

Thanks, this makes absolute sense, since LXC utilizes AppArmor (combined with other mechanisms) to ensure that a bad actor can't break out of a container and make trouble on the host.

I feel confirmed in my opinion on Docker inside LXCs though: IMHO it's not acceptable to disable an essential security component to make things work.

So I stick with my recommendation to @quanto11 to migrate their Docker workloads to a lightweight VM.
 
This is just a bug that shows up when testing a beta of both Proxmox and Debian. It's been raised here, and hopefully someone can confirm it will be fixed upstream or in Proxmox before 9.0 is released as stable. There are sufficient warnings in the announcement related to the use of the beta.

This LXC + Docker setup worked fine during the 7 -> 8 migration and also during upgrades of both host and guest to the latest release, so I take it it's a matter of time before this works again. Not all Docker containers require /dev/mqueue access, so in my case I had a few running just fine and two failing to start.

But if you're running a beta version this can happen, and, if well prepared, you can apply a few temporary or permanent solutions, each with pros and cons: restoring a backup of the host to Proxmox 8.4, disabling AppArmor, searching online for other solutions, or using your suggestion.
 
The /dev/mqueue shows up when you do an "ls /dev" with it showed like this:
FYI, I can see that in a Debian Bookworm based container, but it seems alright in a Debian Trixie based one; at least an ls /dev/mqueue works there.

It seems related to items being discussed here: https://gitlab.com/apparmor/apparmor/-/issues/362 but I am wondering how much is still relevant for Trixie.
That should have been fixed by https://gitlab.com/apparmor/apparmor/-/merge_requests/1197 which is included in the 4.1.0 version that Trixie / PVE 9 ships.

FWIW, there were some other mqueue related issues, like https://gitlab.com/apparmor/apparmor/-/merge_requests/1277 but that was also already released with version 4.0.2.

It could be that we need to fix something in the generated profile on our end to cope with older distro releases in the CT, so please open an issue over at https://bugzilla.proxmox.com/ including a link to this thread, so that we do not forget this.
 
I upgraded my Bookworm LXC to Trixie, but I still get the error:

Code:
root@airconnect:~# docker start airconnect
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "mqueue" to rootfs at "/dev/mqueue": change mount propagation through procfd: resolving path inside rootfs failed: lstat /var/lib/docker/overlay2/210ac85bcd4cdeda9e883df1e8510a7681979c8a9b621c5193031ceee8f5b452/merged//dev/mqueue: permission denied: unknown
 
I upgraded my Bookworm LXC to Trixie, but I still get the error:

Code:
root@airconnect:~# docker start airconnect
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "mqueue" to rootfs at "/dev/mqueue": change mount propagation through procfd: resolving path inside rootfs failed: lstat /var/lib/docker/overlay2/210ac85bcd4cdeda9e883df1e8510a7681979c8a9b621c5193031ceee8f5b452/merged//dev/mqueue: permission denied: unknown
You can add "lxc.apparmor.profile: unconfined" to the .conf files in /etc/pve/lxc. But be aware that this disables all AppArmor security features for the LXC. Docker will work again.
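For illustration, assuming CT ID 107, the change would look roughly like this (a sketch):

Code:
# append to /etc/pve/lxc/107.conf (107 is just an example CT ID)
# WARNING: this disables AppArmor confinement for this container
lxc.apparmor.profile: unconfined

Then restart the container, e.g. with pct stop 107 && pct start 107, for the setting to take effect.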
 
Hmm, ok, I also changed the features to "features: fuse=1,mknod=1,nesting=1,keyctl=1". With that, it works for me now.
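For reference, the same features can also be set from the host with pct (a sketch; 107 is an example CT ID):

Code:
pct set 107 --features fuse=1,mknod=1,nesting=1,keyctl=1
pct reboot 107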
 
My Docker containers wouldn't start in the LXCs (they are Debian LXCs). I updated the sources in the LXCs to Trixie and upgraded. Then I changed the .confs of the LXCs to include the features line and the AppArmor unconfined setting. After restarting the LXCs, Docker started working again and still is working. Sorry, I have no idea what else to do.
 
Hmm, ok, I also changed the features to "features: fuse=1,mknod=1,nesting=1,keyctl=1". With that, it works for me now.
Did you just turn them all on at once, or enable them one by one so that all of them were really required?

I'd think that nesting was already enabled, as otherwise it would not have worked previously.
The mknod one sounds a bit like the one that was required in addition; fuse might be too. keyctl should not be required for this, but it is quite useful to have nowadays and should not hurt.

In any case, thanks for posting your solution here. It might well be permanently required, and it is definitely safer than running the container with an unconfined AppArmor profile or turning off AppArmor completely.
 
I added keyctl and the unconfined AppArmor profile to the PVE 8 configs. The rest was turned on before. I tried only adding keyctl without the AppArmor setting, but that doesn't work.
 
Maybe they are connected. My Docker also failed with the permission problem for the mqueue. With these settings it does not.
 
My Docker containers wouldn't start in the LXCs (they are Debian LXCs). I updated the sources in the LXCs to Trixie and upgraded. Then I changed the .confs of the LXCs to include the features line and the AppArmor unconfined setting. After restarting the LXCs, Docker started working again and still is working. Sorry, I have no idea what else to do.

Thanks. I also needed to restart Docker via systemctl restart docker on every boot. If I don't, Docker does not work on my side.
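If a manual restart on every boot gets tedious, something like a cron entry inside the LXC could automate it (a sketch; the 30 second delay is an assumption to let the system settle before restarting the daemon):

Code:
# add via crontab -e inside the container
@reboot sleep 30 && systemctl restart docker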