PCT ENTER error - Failed to write AppArmor profile

yswery

Hey everyone

We have a 3-node cluster, and for some reason two of the three nodes refuse to enter any of the LXC containers with the pct enter XXXX command

Code:
root @ pveNode1 ➜  /  pct status 565
status: running

root @ pveNode1 ➜  /  pct config 565
arch: amd64
cores: 1
description: FooBar
features: fuse=1,mknod=1,mount=nfs;cifs,nesting=1
hostname: some.hostname
memory: 512
net0: name=eth0,bridge=vmbr0,gw=10.0.0.1,hwaddr=DA:CC:D8:99:D4:65,ip=10.0.0.123/24,type=veth
onboot: 1
ostype: ubuntu
rootfs: SSD_STORAGE_NFS:565/vm-565-disk-0.raw,size=30065M
swap: 0

root @ pveNode1 ➜  /  pct enter 565
lxc-attach: 565: lsm/apparmor.c: apparmor_process_label_set_at: 1183 Operation not permitted - Failed to write AppArmor profile "lxc-565_</var/lib/lxc>//&:lxc-565_<-var-lib-lxc>:unconfined" to 4
lxc-attach: 565: attach.c: do_attach: 1375 Failed to attach to container

But the weirdest part is that if we migrate the container (it's on shared storage) to a different node, we can enter the CT fine without any errors.
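For reference, we migrate with something like this (pveNode2 being a placeholder for one of the other nodes; a running CT needs a restart migration):

Code:
# restart-migrate the running container to another cluster node
pct migrate 565 pveNode2 --restart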

I went through all the common LXC configs and they are identical on both the working and the non-working PVE node. Does anyone know where I might look to find out why this is happening and, better yet, how to fix it?


EDIT: this error is happening on all containers, even fresh ones, not just one specific container.
 
hi,

two of the three nodes refuse to enter any of the LXC containers with the pct enter XXXX command
please post pveversion -v output from your nodes. ideally they should all match. please post one from the working node and one from the non-working node

But the weirdest part is that if we migrate the container (it's on shared storage) to a different node, we can enter the CT fine without any errors.

the error message:
Code:
lxc-attach: 565: lsm/apparmor.c: apparmor_process_label_set_at: 1183 Operation not permitted - Failed to write AppArmor profile "lxc-565_</var/lib/lxc>//&:lxc-565_<-var-lib-lxc>:unconfined" to 4
looks like the apparmor profile is unconfined? did you perhaps edit the default apparmor configuration on that node?
I went through all the common LXC configs and they are identical on both the working and the non-working PVE node. Does anyone know where I might look to find out why this is happening and, better yet, how to fix it?
you could compare the files in the /etc/apparmor.d/lxc/ directory on both nodes
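for example something like this (a sketch; pveNode2 is a placeholder for the other node's hostname):

Code:
# compare checksums of the lxc apparmor files between the two nodes
diff <(md5sum /etc/apparmor.d/lxc/*) <(ssh root@pveNode2 'md5sum /etc/apparmor.d/lxc/*')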
 
@oguz

From the node that can enter the CT:


Code:
root @ WORKINGNODE ➜  ~  pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.11.22-4-pve)
pve-manager: 7.1-8 (running version: 7.1-8/5b267f33)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.11: 7.0-10
pve-kernel-5.4: 6.4-5
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.13.19-1-pve: 5.13.19-3
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-4-pve: 5.11.22-9
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.4.55-1-pve: 5.4.55-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-storage-perl: 7.0-15
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-1
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-3
pve-container: 4.1-3
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-4
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3


From the node that CAN'T enter the CT:

Code:
root @ NONWORKING-NODE ➜  ~  pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.11.22-2-pve)
pve-manager: 7.1-8 (running version: 7.1-8/5b267f33)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.11: 7.0-10
pve-kernel-5.4: 6.4-5
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.13.19-1-pve: 5.13.19-3
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-2-pve: 5.11.22-4
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-storage-perl: 7.0-15
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-1
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-3
pve-container: 4.1-3
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-4
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3


----

you could compare the files in the /etc/apparmor.d/lxc/ directory on both nodes
I double-checked and the contents of `/etc/apparmor.d/lxc/` are identical on both nodes. You did mention the error means the CT's apparmor profile is unconfined, but the thing that gets me is that the CT boots and operates all fine; just `pct enter XXX` triggers this error.
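Is there also a way to check which AppArmor label the running container actually carries? e.g. would something like this tell us anything (a sketch; 565 from the example above, the profile name is taken from the error message)?

Code:
# read the AppArmor label of the container's init process
cat /proc/$(lxc-info -n 565 -p -H)/attr/current
# check whether the generated per-container profile is loaded at all
aa-status | grep lxc-565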
 
Code:
proxmox-ve: 7.1-1 (running kernel: 5.11.22-2-pve)
pve-manager: 7.1-8 (running version: 7.1-8/5b267f33)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.11: 7.0-10
pve-kernel-5.4: 6.4-5
pve-kernel-5.13.19-2-pve: 5.13.19-4

have you rebooted your nodes after the upgrades? (i ask since you're running an older kernel than the latest one installed; i don't know if that's intentional)

it would also be interesting to see your journalctl output when you're trying to enter the container with pct (you can just run journalctl -f and, in a different terminal, attach to the container)
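i.e. something like:

Code:
# terminal 1: follow the journal on the affected node
journalctl -f
# terminal 2: trigger the failing attach
pct enter 565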
 
have you rebooted your nodes after the upgrades? (i ask since you're running an older kernel than the latest one installed; i don't know if that's intentional)
This is not intentional at all. Even now, after a reboot, the same issue is still happening: we cannot enter any CT.

While looking at journalctl, nothing pops up at all. Some (totally unrelated) SNMP stuff comes up, but that's not timed with me issuing the pct command.

Is there anywhere else we can see more info or logging about the error? Or potentially a way to bypass pct and attach to the LXC container directly, to remove the wrapper and see if we get more verbose errors?
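For example, would calling lxc-attach directly with debug logging give us more detail (a sketch; as far as I understand pct wraps lxc-attach, 565 from the example above)?

Code:
# bypass the pct wrapper and attach with verbose logging
lxc-attach -n 565 --logfile /tmp/lxc-565-attach.log --logpriority DEBUG
# inspect the log afterwards
cat /tmp/lxc-565-attach.log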
 
While looking at journalctl, nothing pops up at all. Some (totally unrelated) SNMP stuff comes up, but that's not timed with me issuing the pct command.
snmp? we don't have that enabled by default on PVE installations... did you set that up yourself?

This is not intentional at all.
then maybe you should change the default kernel your node boots into [0].

for grub you'll need to modify GRUB_DEFAULT to choose the kernel from the menu (there are some posts on the forum explaining the process [1])

[0]: https://pve.proxmox.com/wiki/Host_Bootloader
[1]: https://forum.proxmox.com/threads/revert-to-prior-kernel.100310/#post-434580
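roughly like this (a sketch; the exact menu entry string depends on the kernels installed on your node):

Code:
# /etc/default/grub -- pin the default boot entry (kernel version here is an example)
GRUB_DEFAULT="Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 5.13.19-2-pve"

# regenerate the grub config afterwards
update-grub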
 
then maybe you should change the default kernel your node boots into [0].
Sorry, I should have been clearer: after the reboot the PVE node is running the latest kernel (before, I had simply done a dist-upgrade without rebooting, hence the installed kernel did not match the running one).

snmp? we don't have that enabled by default on PVE installations
Should have clarified this also: yes, we installed snmp on the node ourselves, and what I wanted to say is that nothing related to `pct enter` shows up in journalctl.


I am somewhat stuck on what to try next about not being able to enter any containers (meaning something is wrong). Would opening a subscription ticket with the Proxmox team be the next step we should take?
 
I have a node with 3 containers. Only one of them returned the same error on pct enter:
Code:
root@node:~# pct enter 109
lxc-attach: 109: conf.c: userns_exec_minimal: 5189 Device or resource busy - Running parent function failed
root@pm1:~# lxc-attach: 109: attach.c: do_attach: 1237 No data available - Failed to receive lsm label fd
lxc-attach: 109: attach.c: do_attach: 1375 Failed to attach to container
Rebooting this specific container seems to have solved the issue (without rebooting the node).
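For anyone hitting the same thing, restarting just the container can be done with (a sketch, using the container ID from above):

Code:
# restart only the affected container, not the node
pct reboot 109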

pve-manager/7.1-7/df5740ad (running kernel: 5.13.19-2-pve)

Regards
 
