lxc apparmor="DENIED" operation="mount" error=-13

Xilmen

Hello,

I was looking at my logs and noticed the following. I have a problem on one of my LXC containers (it hosts OpenVPN):

Code:
audit: type=1400 audit(1502146004.740:77): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default-cgns" name="/" pid=15472 comm="(openvpn)" flags="rw, rslave"

Can you enlighten me about this error?

Thank you! :)

LXC config:

Code:
arch: amd64
cores: 1
hostname: CT-Openvpn
memory: 512
net0: name=eth0,bridge=vmbr0,gw=192.168.X.X,hwaddr=XX:XX:XX:XX:XX:XX,ip=192.168.XX.XX/24,type=veth
onboot: 1
ostype: centos
rootfs: RAIDX:212/vm-212-disk-2.raw,size=10G
swap: 512
lxc.cgroup.devices.allow: c 10:200 rwm
lxc.hook.autodev: sh -c "modprobe tun; cd ${LXC_ROOTFS_MOUNT}/dev; mkdir net; mknod net/tun c 10 200; chmod 0666 net/tun"
 
The error tells you that AppArmor denied a mount operation inside the container. I suspect you based your setup on heider.io. However, containers share the host's kernel and aren't allowed to load kernel modules. The autodev hook runs after the container has started and its /dev has been populated, and it also runs inside the container, so it won't be able to load the module. You can manually load the module on the host before starting the container. If that works as expected, loading the module from lxc.hook-pre-start should do the trick.
 
Thanks, it works with lxc.hook.pre-start (not lxc.hook-pre-start)!
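For anyone else hitting this, the relevant part of the container config now looks roughly like this (a sketch of my setup; the module load moved to the pre-start hook, which runs on the host):

Code:
lxc.cgroup.devices.allow: c 10:200 rwm
# runs on the host before the container starts, so modprobe is allowed here
lxc.hook.pre-start: sh -c "modprobe tun"
# runs once /dev is populated; now it only creates the device node
lxc.hook.autodev: sh -c "cd ${LXC_ROOTFS_MOUNT}/dev; mkdir -p net; mknod net/tun c 10 200; chmod 0666 net/tun"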

Have a good day! :)
 
Hi, we seem to have something very similar here on a Debian Jessie LXC guest with ISPConfig:

Code:
apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default-cgns" name="/" pid=7435 comm="mount" flags="rw, remount, relatime"

We already added this line to the PVE host's /etc/apparmor.d/lxc/lxc-default-cgns:
Code:
  mount options=(rw, nosuid, noexec, remount, relatime, ro, bind),
followed by an AppArmor restart,
but the error still persists, it seems...
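For completeness, the sequence we ran on the host was roughly this (a sketch assuming a stock PVE 5 layout; as far as we know the container also has to be restarted before a changed profile takes effect):

Code:
# edit the profile on the PVE host
nano /etc/apparmor.d/lxc/lxc-default-cgns
# reload the LXC profiles (pulled in via /etc/apparmor.d/lxc-containers)
apparmor_parser -r /etc/apparmor.d/lxc-containers
# restart the container so it runs under the updated profile
pct stop <CTID> && pct start <CTID>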
The problem is that we suspect this to be a reason for our LXC container freezing from time to time.
This is a container we migrated from Proxmox 3.2 (OpenVZ) to our new Proxmox 5 host.
We never had any problems with this on OpenVZ.
Any help is highly appreciated

thanks
Sascha
 
Hi,
as to where and how to set this hook:
in the container config file '/etc/pve/lxc/CTID.conf', like any other lxc.* option.

However,
this shouldn't be needed anymore to get OpenVPN running. At least in my tests with the default privileged/unprivileged Debian 9 containers, it worked out of the box right after `apt install openvpn` and `systemctl start openvpn`.

What kind of problem are you having?
 
Have been researching this dmesg:
Code:
[ 601.114607] audit: type=1400 audit(1554903179.494:31): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-105_</var/lib/lxc>" name="/sys/fs/pstore/" pid=32002 comm="mount" flags="rw, remount"
I think it is related to my container 100 having a mountpoint set to a ZFS file system on the server.
 

This may or may not be related to the problem in this thread; AppArmor gives similar errors for most denied mounts. If you want, you can open a new topic with more information about your setup (CT config, full debug logs, PVE version information, etc.) and we can help you.
 
Dear,

this must be a LONG-TIME BUG in Proxmox! I have seen it many times IN CONTAINERS (though not in all of them), over many years and versions, and again today on the newest Proxmox VE, version 6.0.5 - Linux <my host> 5.0.18-1-pve #1 SMP PVE 5.0.18-1 (Wed, 24 Jul 2019 08:13:30 +0200) x86_64 GNU/Linux!

Host and container are now running Debian Buster (newest version) / ZFS.

When I see these log entries in the syslog inside a container, they do NOT exist in the syslog on the host! So it looks like a container gets private information from other containers (kernel panics and the like) that the host does not get! This can't be a feature!

For example, these errors were produced in other containers and logged NOT on the host, but in one of the containers (359)!
BIG QUESTION: Why does container 359 get kernel messages from other containers (here 2239, 2234, 434)?


Code:
Aug 26 03:39:01 perco1 kernel: [1158651.262731] audit: type=1400 audit(1566787141.886:8167): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-2239_</var/lib/lxc>" name="/bin/" pid=18494 comm="(ionclean)" flags="ro, remount, noatime, bind"
Aug 26 08:39:01 perco1 kernel: [1176650.485852] audit: type=1400 audit(1566805141.375:8255): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-2234_</var/lib/lxc>" name="/bin/" pid=8121 comm="(ionclean)" flags="ro, remount, noatime, bind"
Aug 26 13:09:10 perco1 kernel: [1192858.845621] audit: type=1400 audit(1566821349.985:8333): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-434_</var/lib/lxc>" name="/" pid=15564 comm="(ionclean)" flags="rw, rslave"



Regards

Detlef
 
hi,

So it looks like a container gets private information from other containers (kernel panics and the like) that the host does not get! This can't be a feature!

This is actually normal: LXC containers share the host kernel, therefore logs from the kernel are also shared.
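A quick way to see this for yourself (a sketch; run `pct enter` on the PVE host):

Code:
# on the PVE host
dmesg | tail -n 1
# enter any container and read the same ring buffer
pct enter <CTID>
dmesg | tail -n 1
# both print the same entry - there is only one kernel, hence one kernel log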
 
Only for information:
I created an LXC container from a Debian Bullseye "rootfs.tar.xz" file (OT: because of BackupPC version 4).
At first, as a privileged container (Unprivileged container = no),
I got the same dmesg error:
Code:
apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-101_</var/lib/lxc>"
I made a backup and restored this LXC container as an unprivileged container (Unprivileged container = yes)
and got the same error.
Then I set the feature:
Code:
 nesting=1
and there were no further dmesg errors.
It works very well.
I have reproduced this on two different Proxmox machines.
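For reference, the same feature can also be set from the PVE host's shell (CT ID 101 is just the example from the log above; the container needs a restart to pick it up):

Code:
pct set 101 --features nesting=1
# verify that the option landed in the config
pct config 101 | grep features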

regards,
maxprox
 
+1 for the nesting... I successfully upgraded from 6.4 to Proxmox 7 (on a native Buster install) about a week ago. Next I upgraded a container to Bullseye, then saw the AppArmor issues in syslog. I immediately changed "Options --> Features" of the LXC to include "Nesting", and there were no more AppArmor issues in syslog.
 
Yeah... I too had those messages on PVE 7, on CTs that I had just updated to Debian 11.
Everything was fine before upgrading the CTs to Debian 11.
After the upgrade, those strange messages appeared on ALL my CTs.
I activated nesting on all of them and the messages are gone, but... hmm... I don't really like it. The containers were not mounting anything special.

Does anybody have an explanation for this that I can understand?
Thanks
 
OK, thank you for this.
I think I understand :D
So it seems that with Debian 11 (and maybe others) the nesting option is mandatory (apparently newer systemd performs remounts for service hardening that the default AppArmor profile only permits with nesting enabled), but it will not harm the host or make it insecure.

Thanks!
 
