New-ish to LXC, and AppArmor is driving me crazy

tycoonbob

I'm having issues with a few apps running within LXC. For example, I have a syslog container running syslog-ng 3.7. For whatever reason, it's not writing the logs it should be catching from other Linux systems (other LXCs, some QEMU/KVM instances, and some physical boxes). Another issue: my LibreNMS instance no longer allows me to log in. I see some audit messages about AppArmor denies involving Nginx and php-fpm... so I can't help but wonder if those are related. I'm 95% sure that I was able to log in before updating from PVE 4.2-2 to 4.2-5, and I know none of the configuration on the LibreNMS box changed. I may be having other issues that I'm just not aware of yet, but I am seeing AppArmor denies in the audit messages on my PVE host, so I'm hoping to get some guidance before I end up ditching LXC completely and going back to QEMU/KVM.

FYI, all my LXCs are CentOS 7 based.

Code:
root@jormungandr:~# pveversion -v
proxmox-ve: 4.2-51 (running kernel: 4.4.8-1-pve)
pve-manager: 4.2-5 (running version: 4.2-5/7cf09667)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.8-1-pve: 4.4.8-51
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-39
qemu-server: 4.0-75
pve-firmware: 1.1-8
libpve-common-perl: 4.0-62
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-50
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-17
pve-container: 1.0-64
pve-firewall: 2.0-27
pve-ha-manager: 1.0-31
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve9~jessie
openvswitch-switch: 2.5.0-1


Code:
root@jormungandr:~# cat /etc/apparmor.d/lxc/lxc-default
# Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers, which
# will source all profiles under /etc/apparmor.d/lxc

profile lxc-container-default flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  # the container may never be allowed to mount devpts.  If it does, it
  # will remount the host's devpts.  We could allow it to do it with
  # the newinstance option (but, right now, we don't).
  deny mount fstype=devpts,

  # allow nfs mount everywhere
  mount fstype=rpc_pipefs,
  mount fstype=nfs,
}
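
(Side note, just from reading the comments in that file: if I ever edit anything under /etc/apparmor.d/lxc, my understanding is that the change only takes effect after reloading the parent file that sources these profiles, something like this:)
Code:
# reload the profile set that sources everything under /etc/apparmor.d/lxc
apparmor_parser -r /etc/apparmor.d/lxc-containers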


Code:
root@jormungandr:~# cat /var/log/messages | grep audit
May 24 17:31:44 jormungandr kernel: [77599.926707] audit: type=1400 audit(1464111104.874:112): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default" name="/sys/fs/cgroup/" pid=13377 comm="systemd" flags="ro, nosuid, nodev, noexec, remount, strictatime"
May 24 17:31:53 jormungandr kernel: [77608.139835] audit: type=1400 audit(1464111113.090:113): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default" name="/sys/fs/cgroup/" pid=16939 comm="systemd" flags="ro, nosuid, nodev, noexec, remount, strictatime"
[...]
May 25 16:40:04 jormungandr kernel: [160902.974619] audit: type=1400 audit(1464194404.838:122): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default" name="/sys/fs/cgroup/" pid=4463 comm="systemd" flags="ro, nosuid, nodev, noexec, remount, strictatime"
May 25 16:40:04 jormungandr kernel: [160903.107811] audit: type=1400 audit(1464194404.970:123): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default" name="/" pid=4723 comm="mount" flags="rw, remount"
May 25 16:40:05 jormungandr kernel: [160903.427819] audit: type=1400 audit(1464194405.290:124): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default" name="/" pid=4948 comm="(php-fpm)" flags="rw, rslave"
May 25 16:40:05 jormungandr kernel: [160903.461120] audit: type=1400 audit(1464194405.322:125): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default" name="/" pid=5004 comm="(e-db-dir)" flags="rw, rslave"
May 25 16:40:05 jormungandr kernel: [160903.507475] audit: type=1400 audit(1464194405.370:126): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default" name="/" pid=5094 comm="(rm)" flags="rw, rslave"
May 25 16:40:05 jormungandr kernel: [160903.603420] audit: type=1400 audit(1464194405.466:127): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default" name="/" pid=5278 comm="(nginx)" flags="rw, rslave"
May 25 16:40:05 jormungandr kernel: [160903.703450] audit: type=1400 audit(1464194405.566:128): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default" name="/" pid=5418 comm="(nginx)" flags="rw, rslave"
May 25 16:40:05 jormungandr kernel: [160903.791321] audit: type=1400 audit(1464194405.654:129): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default" name="/" pid=5483 comm="(qld_safe)" flags="rw, rslave"
May 25 16:40:05 jormungandr kernel: [160903.793268] audit: type=1400 audit(1464194405.654:130): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default" name="/" pid=5492 comm="(it-ready)" flags="rw, rslave"
May 25 21:31:28 jormungandr kernel: [178387.266210] audit: type=1400 audit(1464211888.478:131): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default" name="/sys/fs/cgroup/" pid=8282 comm="systemd" flags="ro, nosuid, nodev, noexec, remount, strictatime"
May 25 21:31:28 jormungandr kernel: [178387.428507] audit: type=1400 audit(1464211888.638:132): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default" name="/" pid=8555 comm="mount" flags="rw, remount"
May 25 21:35:41 jormungandr kernel: [178640.523033] audit: type=1400 audit(1464212141.728:133): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default" name="/sys/fs/cgroup/" pid=22742 comm="systemd" flags="ro, nosuid, nodev, noexec, remount, strictatime"
May 25 21:35:41 jormungandr kernel: [178640.671913] audit: type=1400 audit(1464212141.876:134): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default" name="/" pid=22925 comm="mount" flags="rw, remount"


Thing is, I don't see these AppArmor denies happening when I try to do something that I think is broken (i.e., log into the web UI on my LibreNMS instance). Why am I getting these denies? How can I stop them? What are some best practices for running CentOS 7 LXCs on a Debian 8 (PVE 4.2) host? I'm seeing denies for systemd, mount, php-fpm, httpd, kill, rm, e-db-dir, nginx, it-ready, and others. What do all these entries in the logs actually mean?
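
For reference, here's how I've been watching for these on the host. As far as I understand it, aa-status lists the loaded profiles and their modes, and dmesg -w follows new kernel messages as they arrive:
Code:
# list loaded AppArmor profiles and whether they're in enforce or complain mode
aa-status
# follow the kernel log live to catch new DENIED entries
dmesg -w | grep -i apparmor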

Inside one of my LXC containers, specifically the LibreNMS instance, /var/log/messages gets flooded with messages like this:
Code:
May 26 13:35:01 nms systemd: Started Session c1317 of user root.
May 26 13:35:01 nms systemd: Starting Session c1317 of user root.
May 26 13:35:01 nms systemd: Started Session c1318 of user root.
May 26 13:35:01 nms systemd: Starting Session c1318 of user root.
May 26 13:35:01 nms systemd: Started Session c1319 of user root.
May 26 13:35:01 nms systemd: Starting Session c1319 of user root.
May 26 13:36:01 nms systemd: Started Session c1320 of user root.
May 26 13:36:01 nms systemd: Starting Session c1320 of user root.
May 26 13:37:01 nms systemd: Started Session c1321 of user root.
May 26 13:37:01 nms systemd: Starting Session c1321 of user root.
May 26 13:38:01 nms systemd: Started Session c1322 of user root.
May 26 13:38:01 nms systemd: Starting Session c1322 of user root.
May 26 13:39:01 nms systemd: Started Session c1323 of user root.
May 26 13:39:01 nms systemd: Starting Session c1323 of user root.

But I don't get those messages on my QEMU/KVM instances running on the same PVE host.
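
The only workaround I've seen mentioned for that session spam (it apparently comes from crond opening a PAM session for every scheduled job) is filtering it out in rsyslog inside the container. Untested on my end, but the suggested drop-in looks like this:
Code:
# /etc/rsyslog.d/ignore-session-spam.conf (hypothetical file name)
# drop the per-cron-job session start/stop chatter before it reaches /var/log/messages
:msg, contains, "Started Session" stop
:msg, contains, "Starting Session" stop
followed by a restart of rsyslog in the container (systemctl restart rsyslog).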

I know I'm not the only person running into weird LXC issues with CentOS containers, and I know my understanding of AppArmor is lacking (I keep wanting to compare it to SELinux), but I'd really like to get my LXC setup working properly. I really like the resource control LXC gives me over QEMU/KVM, and how well the underlying storage works with ZFS.

Any guidance is greatly appreciated!!!
 
@bodysplit, thanks for the feedback. I'm glad I'm not alone, but the lack of solutions is really starting to force me back to QEMU/KVM, which I don't want to do.

Another anecdotal "hiccup": I have a small container running Deluge (under CentOS 7) that I went to check on this morning. There were no connections at all, which is very odd. In an attempt to reboot the instance to see if that would help, I SSHed in and issued a reboot command, and now it's hanging. "/var/log/lxc/111.log" shows that the container failed to start, yet it's still running. I can't start it in the foreground to get any messages, because I can't stop it. It was working just fine last night, and now it's not. There's nothing in /var/log/messages either... so right now I have no idea what happened, and I can't stop the container to find out.


As for your link: I'm seriously considering disabling AppArmor to see if that helps. I'm running a lab environment, nothing prod, so at this point I'm not too concerned. Am I correct in understanding that I can disable AA on a per-container basis by just adding `lxc.aa_profile: unconfined` to the `/etc/pve/nodes/serverA/lxc/111.conf` file?
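
In other words, something like this appended to the container's config file. This is just my reading of the wiki, untested so far:
Code:
# /etc/pve/nodes/serverA/lxc/111.conf (proposed addition, not yet tested)
lxc.aa_profile: unconfined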
 
Almost the same problem here, and it's driving me crazy too.

I'm just a super newb; I didn't set up our current configuration, and I have no idea how to actually go about disabling AppArmor. I'll take a look at bodysplit's link to see if that helps me a bit...
 
I don't know if you've found the solution by now, but for anyone landing here, you can disable AppArmor as described here:
https://pve.proxmox.com/wiki/Linux_Container

by editing /etc/pve/lxc/CTID.conf

and setting

lxc.aa_profile = unconfined

EDIT: the container wouldn't start with this option; I had to use:

lxc.apparmor.profile = unconfined
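
For completeness, a minimal sketch of the whole change (CTID 111 is just an example):
Code:
# /etc/pve/lxc/111.conf: append this line to the container's config
lxc.apparmor.profile = unconfined
The container then needs to be restarted to pick up the new profile:
Code:
pct stop 111
pct start 111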
 
Using the option:
lxc.apparmor.profile = unconfined

did not work for NFS. Any other suggestions?
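
EDIT: one lead for anyone else hitting this, untested on my side: newer PVE versions expose a per-container features option that is supposed to allow NFS mounts without unconfining AppArmor entirely, e.g.:
Code:
# untested: allow NFS mounts for CT 111 via the container features option
pct set 111 --features mount=nfs
pct stop 111
pct start 111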
 
