apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-104_</var/lib/lxc>

glennbtn

Hi All

We have a number of Proxmox boxes with a number of LXCs built on Ubuntu, all exactly the same. Two random LXCs fall over approximately every 7 days, and they are on different boxes. The VM becomes unresponsive and we have to force a stop, as you can't SSH in or use the console to get access, then restart the LXC. After this it will be OK for approximately another 7 days.

Looking at the Proxmox stats, the CPU usage shoots right up from approx 10% to 100%. I then see this in the logs:
Jun 9 11:39:01 server89 kernel: [3767073.156067] audit: type=1400 audit(1591699141.474:9793): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-104_</var/lib/lxc>" name="/" pid=6688 comm="(ionclean)" flags="rw, rslave"

After this event there are other errors in the logs a few minutes later, which I'm guessing are related:
Jun 9 11:46:26 server89 systemd[1]: Failed to reset devices.list on /system.slice/systemd-journal-flush.service: Operation not permitted
Jun 9 11:46:26 server89 systemd[1]: Starting Flush Journal to Persistent Storage...
Jun 9 11:46:26 server89 systemd[1]: Started Nameserver information manager.
Jun 9 11:46:26 server89 systemd[1]: Reached target Network (Pre).
Jun 9 11:46:26 server89 resolvconf[51]: /etc/resolvconf/update.d/libc: Warning: /etc/resolv.conf is not a symbolic link to /run/resolvconf/resolv.conf
Jun 9 11:46:26 server89 rsyslogd-2222: command 'KLogPermitNonKernelFacility' is currently not permitted - did you already set it via a RainerScript command (v6+ config)? [v8.16.0 try http://www.rsyslog.com/e/2222 ]
Jun 9 11:46:26 server89 rsyslogd: rsyslogd's groupid changed to 109
Jun 9 11:46:26 server89 rsyslogd: rsyslogd's userid changed to 104
Jun 9 11:46:26 server89 systemd[1]: Started Load/Save Random Seed.


Can anyone shed any light on this please? Thanks.
 
hi,

Jun 9 11:39:01 server89 kernel: [3767073.156067] audit: type=1400 audit(1591699141.474:9793): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-104_</var/lib/lxc>" name="/" pid=6688 comm="(ionclean)" flags="rw, rslave"
i don't think this is the cause of the crash but just a side-effect.
check the discussion here[0]; it seems like apparmor doesn't like phpsessionclean's mount
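
as a quick sanity check (assuming php is installed in that container): ubuntu's phpsessionclean.timer fires at 09 and 39 minutes past each hour, which lines up with your 11:39:01 denial, and comm="(ionclean)" looks like the tail end of "phpsessionclean". you can confirm inside the container with:

systemctl list-timers phpsessionclean.timer
systemctl status phpsessionclean.service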

if you post:
* pveversion -v
* pct config CTID

then we can figure out why they're crashing

[0]: https://discuss.linuxcontainers.org/t/apparmor-denied-operation-mount/2424/6
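
if the log spam bothers you, the workaround from that thread is to run the container under an apparmor profile that allows the remount. a rough sketch (the profile name here is made up, adjust as needed) - on the host, create /etc/apparmor.d/lxc/lxc-default-with-remount:

profile lxc-default-with-remount flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>
  # allow the rw,rslave remount of / that phpsessionclean attempts
  mount options=(rw, rslave) -> /,
}

reload apparmor on the host (systemctl reload apparmor), then add this line to /etc/pve/lxc/CTID.conf and restart the container:

lxc.apparmor.profile: lxc-default-with-remount

but again, i'd only expect this to quiet the DENIED messages, not to fix the crashes.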
 
Ok thanks oguz, bit of a bum steer then. Here is the info requested; I have edited the domain and IP.

pveversion -v
proxmox-ve: 5.4-1 (running kernel: 4.15.18-12-pve)
pve-manager: 5.4-3 (running version: 5.4-3/0a6eaa62)
pve-kernel-4.15: 5.3-3
pve-kernel-4.15.18-12-pve: 4.15.18-36
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-10
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-52
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-13
libpve-storage-perl: 5.0-43
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-37
pve-container: 2.0-39
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-6
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-2
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-52
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2

pct config 119
arch: amd64
cores: 4
description: 92.119.252.89%0A%0AWas 185.160.166.22 > server89.mydomain.co.uk%0A%0ANew Build%0A%0A%0AAht3Ahsu%0A
hostname: server89.mydomain.co.uk
memory: 8192
net0: name=eth0,bridge=vmbr0,firewall=1,gw=95.199.249.3,hwaddr=FA:CC:BD:A7:31:84,ip=95.199.249.89/32,type=veth
ostype: ubuntu
rootfs: local-lvm:vm-119-disk-0,size=500G
swap: 0
 
container config looks pretty standard,

however you're running PVE 5.4-1 which will be EOL soon (2020-07)

i suggest you update your system (which could end up solving the problem)

if you can't upgrade to 6.x, you can still update to the latest 5.x without breaking anything (apt update && apt full-upgrade followed by a reboot of the server)
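
in practice that's just (assuming the standard pve 5.x repositories are configured on the host):

apt update
apt full-upgrade
reboot

afterwards pveversion -v should show the latest 5.4-x packages.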
 
Ok, will give that a try, as both of the affected Proxmox servers that have the issue are on the same version.

Many thanks
 
hi,

were you able to update the servers? did it fix your issue with the containers?
 
Hi Oguz

I have run the update on one of the servers, bringing it up to 5.4-15. It's only been 4 days and the VM usually screws up between 7-10 days, so I will report back when I know the outcome. Fingers crossed though.
 
