AppArmor denies LXC startup operations from only certain containers.

Taylor Murphy

New Member
Apr 15, 2018
I have been using the supplied templates (pveam downloads) for all of my containers. Most are built from the Ubuntu 17.10 template, though I have 2 built from the Ubuntu 16.04 template. The containers built from the 16.04 template start just fine and show no AppArmor issues in my dmesg readout. The containers built from the 17.10 template, however, continuously produce these errors on startup:

Code:
[Sun Apr 15 14:12:29 2018] audit: type=1400 audit(1523815950.576:38): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default-cgns" name="/sys/fs/cgroup/unified/" pid=17297 comm="systemd" fstype="cgroup2" srcname="cgroup" flags="rw, nosuid, nodev, noexec"
[Sun Apr 15 14:12:29 2018] audit: type=1400 audit(1523815950.614:39): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default-cgns" name="/" pid=17491 comm="(networkd)" flags="rw, rslave"
[Sun Apr 15 14:12:29 2018] vmbr0: port 8(veth107i0) entered blocking state
[Sun Apr 15 14:12:29 2018] vmbr0: port 8(veth107i0) entered forwarding state
[Sun Apr 15 14:12:29 2018] audit: type=1400 audit(1523815950.652:40): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default-cgns" name="/" pid=17532 comm="(resolved)" flags="rw, rslave"

When setting up my LXC templates for the Ubuntu 17.10 containers, I made one container initially, got some of the redundant initial setup out of the way, and then set it to "template" mode in the GUI. After that, any time I want to make a new container I just do a full clone of that original template container. Note that I do a full clone and not a linked clone, so there shouldn't be any mounting from the clone back to the original (unless I am mistaken about how that works compared to linked clones).
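For reference, a full clone like the one described can also be done from the CLI; this is a sketch assuming the template has VMID 100 and the new container gets VMID 107 (both IDs and the hostname are hypothetical):

Code:
# full clone: copies the rootfs instead of referencing the template's disk
pct clone 100 107 --full --hostname ubuntu1710-clone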

I only noticed that this was happening because of something strange. My setup was feeling sluggish, so I checked htop: CPU was under 10% usage, RAM was about 30%, IO was low, yet my system load was over 20.00 (my system is a single-node, single-socket, 4-core/8-thread Skull Canyon NUC), and I couldn't figure out why. When I rebooted the system, it took about 15 minutes to come back up, and then none of my containers would start, constantly throwing errors about cgroup not having a cpuset, along with a couple of other errors. A few old threads helped me get them running again, but now I'm wondering whether this AppArmor issue is limiting my containers' access to resources, and whether that's why CPU/IO were so low while the load was way over threshold?

I would also like to note that my 2 VMs worked flawlessly throughout this and have had no AppArmor issues or difficulty accessing resources, so I know this issue is confined to the LXC containers (of which I currently have 8).

Here is the readout for pveversion -v:
Code:
proxmox-ve: 5.1-42 (running kernel: 4.13.16-2-pve)
pve-manager: 5.1-49 (running version: 5.1-49/1e427a54)
pve-kernel-4.13: 5.1-44
pve-kernel-4.13.16-2-pve: 4.13.16-47
pve-kernel-4.13.16-1-pve: 4.13.16-46
pve-kernel-4.13.13-2-pve: 4.13.13-33
corosync: 2.4.2-pve3
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-30
libpve-guest-common-perl: 2.0-14
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-18
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-2
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-14
pve-cluster: 5.0-24
pve-container: 2.0-21
pve-docs: 5.1-17
pve-firewall: 3.0-7
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-4
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-2
qemu-server: 5.0-24
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.7-pve1~bpo9

And for pveperf:
Code:
CPU BOGOMIPS:      41472.00
REGEX/SECOND:      3242720
HD SIZE:           156.96 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND:     853.07
DNS EXT:           337.08 ms
DNS INT:           335.74 ms (xxxxxx.com)
 
The log snippet you posted does not show any actual error; those "DENIED" mount lines are expected denials under the default lxc-container-default-cgns profile. Please include the configs of the failing containers, and full logs (or at least logs containing error messages).
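For anyone following along, the requested information can typically be gathered like this (a sketch assuming the failing container has VMID 107; substitute the real ID):

Code:
# dump the container's Proxmox config
pct config 107
# start the container in the foreground with full debug logging
lxc-start -n 107 -F -l DEBUG -o /tmp/lxc-107.log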
 
I apologize for taking so long to reply; a little more googling gave me the answer to silence the messages. I believe they were due to the fact that I created a template of a general Ubuntu 17.10 container and made clones from it, so the file systems were still linked, and that was raising a flag in AppArmor. I plan on replacing the clones with standalone containers when I have more time, but adding this line to the LXC config works as a temporary fix until I find the time to deal with it properly.

Code:
lxc.apparmor.profile = unconfined
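For context, here is a sketch of where that line sits; everything except the last line is a hypothetical, illustrative config for VMID 107, with raw lxc.* keys appended after the Proxmox options:

Code:
# /etc/pve/lxc/107.conf (hypothetical example)
arch: amd64
hostname: ubuntu1710-clone
memory: 1024
ostype: ubuntu
rootfs: local-zfs:subvol-107-disk-1,size=8G
# raw LXC keys go at the end of the file
lxc.apparmor.profile = unconfined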
 
This removes a lot of the security protections in place. It is definitely not something you want to do unless you only run "trusted" code inside your container (keep in mind that a "trusted", network-accessible service can quickly become untrusted!).
 
Too true, and the reason why I have not been running Docker in LXC. @fabian, is it possible to run a narrower AppArmor profile, e.g. https://github.com/lxc/lxc/blob/master/config/apparmor/profiles/lxc-default-with-nesting ?
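In case it helps while waiting for an answer, the usual pattern (a sketch, not verified on Proxmox 5.1) is to install the profile on the host, reload AppArmor, and reference the profile by the name declared inside the file; the linked upstream file declares it as lxc-container-default-with-nesting:

Code:
# on the Proxmox host, with the file from the linked repo in the current directory
cp lxc-default-with-nesting /etc/apparmor.d/lxc/
systemctl reload apparmor
# then in /etc/pve/lxc/<vmid>.conf, instead of "unconfined":
lxc.apparmor.profile = lxc-container-default-with-nesting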
 