[SOLVED] Proxmox 6 - Privileged LXC container all processes are inactive

mdub88

Mar 19, 2020
Hi,

My host is Proxmox 6.1-8 and I've set up a privileged LXC container (Debian 10) in which I want to run an NFS server. The container's features are:

Code:
features: fuse=1,mount=nfs;nfs;cifs;nfs;cifs;nfs;cifs,nesting=1

On first launch I installed nfs-kernel-server and it ran fine. However, once I restarted the container, all processes inside it are in an inactive state. Here's nfs-server as an example:

Code:
● nfs-server.service - NFS server and services
   Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
   Active: inactive (dead)

When I start the container with debugging I find the following error, which seems to indicate it's AppArmor related:

Code:
lxc-start 109 20200319183131.848 ERROR    conf - conf.c:lxc_setup_boot_id:3527 - Permission denied - Failed to mount /dev/.lxc-boot-id to /proc/sys/kernel/random/boot_id
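
For reference, the debug output above comes from starting the container in the foreground with something like the usual invocation (the log path is arbitrary):

Code:
lxc-start -n 109 -F -l DEBUG -o /tmp/lxc-109.log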

Furthermore, if I run dmesg -T inside the container I find more AppArmor DENIED errors:

Code:
[Thu Mar 19 18:29:15 2020] audit: type=1400 audit(1584642542.625:155): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="/usr/bin/lxc-start" name="/proc/sys/kernel/random/boot_id" pid=10954 comm="lxc-start" srcname="/dev/.lxc-boot-id" flags="rw, bind"
[Thu Mar 19 18:29:16 2020] audit: type=1400 audit(1584642542.645:156): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default-with-nfsd" name="/sys/fs/cgroup/unified/" pid=10954 comm="systemd" fstype="cgroup2" srcname="cgroup2" flags="rw, nosuid, nodev, noexec"
[Thu Mar 19 18:29:16 2020] audit: type=1400 audit(1584642542.645:157): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default-with-nfsd" name="/sys/fs/cgroup/unified/" pid=10954 comm="systemd" fstype="cgroup2" srcname="cgroup2" flags="rw, nosuid, nodev, noexec"
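
On the host, aa-status should show which of these profiles are actually loaded, for anyone trying to reproduce this:

Code:
aa-status | grep -i lxc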

I have, however, tried to add

Code:
lxc.apparmor.profile: unconfined

and even

Code:
lxc.apparmor.profile: unchanged

to /etc/pve/lxc/myid.conf to no avail.
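Note that profile changes in the config only take effect on the next container start, so the container has to be cycled after each edit (109 being the container ID from the debug log above):

Code:
pct stop 109
pct start 109
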

Can anyone help me?
 
Just realized there may have been an error with the following line in /etc/pve/lxc/myid.conf:

Code:
features: fuse=1,mount=nfs;nfs;cifs;nfs;cifs;nfs;cifs,nesting=1

which I changed to:

Code:
features: fuse=1,mount=nfs;cifs,nesting=1

However, nothing changed.
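
For what it's worth, the same features line should also be settable from the host with pct (quoted so the shell doesn't interpret the semicolon):

Code:
pct set 109 --features 'fuse=1,mount=nfs;cifs,nesting=1'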
 
I found the issue!

I was misled by dmesg: AppArmor wasn't the cause of the problem. Rather, it came from the LXC template I was using.

The 'debian-10.0-standard_10.0-1_amd64.tar.gz' template just doesn't work past a reboot of the container.

I'm running a container from the ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz template and everything works even after a reboot.

I don't know what's wrong with the Debian template, but using the Ubuntu one is good enough for me. Problem solved, then.
 
Hello guys,
I have a serious issue with my Proxmox 6.1 install. The installation went fine on the 120 GB /dev/sda partition, leaving a partition of about 520 GB on each of the four modular servers. But funnily enough, after clustering, when trying to create my VM there was no node for me to create it on: it says no unused storage. When I checked, the 120 GB partition shows 14 GB remaining, while the 520 GB one says 97% used with 16 GB remaining. Please kindly assist me here, guys.
 

Hi,
I have tried with the same Ubuntu 18.04 template, but the NFS server doesn't work.
Can I have your config file (/etc/pve/lxc/id.conf)?
 
Here you go:

Code:
arch: amd64
cores: 1
features: fuse=1,mount=nfs;cifs;nfs;cifs,nesting=1
hostname: myvm0
memory: 512
mp0: /srv/vm/storage,mp=/media
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=xx:xx:xx:xx:xx:xx,ip=dhcp,ip6=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: local:100/vm-100-disk-0.raw,size=20G
startup: order=1
swap: 512
unused0: storage:100/vm-100-disk-0.raw

I don't know why there are multiple "nfs" and "cifs" entries in this line. I haven't touched this container's conf since the previous issue was solved. I think it's the GUI which adds them, but hey! It works.

Code:
features: fuse=1,mount=nfs;cifs;nfs;cifs,nesting=1
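
You can also dump the running config from the host to compare, using the vmid from the rootfs line (100 here):

Code:
pct config 100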
 

Thank you.
So you don't use /etc/apparmor.d/lxc/lxc-default-with-nfsd?
I don't see "unprivileged"; did you delete this line?

I tried with the same config file as you, except for "mp0: /srv/vm/storage,mp=/media", and I always get:

Code:
systemctl restart nfs-kernel-server.service
A dependency job for nfs-server.service failed. See 'journalctl -xe' for details.
-> Failed to mount RPC Pipe File System.

Is "mp0: /srv/vm/storage,mp=/media" used for nfs-server ?
 
So you don't use /etc/apparmor.d/lxc/lxc-default-with-nfsd?

No, it uses the default AppArmor profile for any LXC container.
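
For reference, the custom profile named in my earlier dmesg output is usually defined along these lines (a sketch based on common community examples, not something I'm using now):

Code:
# /etc/apparmor.d/lxc/lxc-default-with-nfsd
profile lxc-container-default-with-nfsd flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  deny mount fstype=devpts,
  mount fstype=nfsd,
  mount fstype=rpc_pipefs,
  mount fstype=cgroup -> /sys/fs/cgroup/**,
}

After creating it you'd reload AppArmor (apparmor_parser -r on the file) and point the container at it with lxc.apparmor.profile: lxc-container-default-with-nfsd.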

I don't see "unprivileged"; did you delete this line?

I don't remember seeing this line in my config file before. To be clear, my container is privileged; I couldn't set up an NFS server in an unprivileged container.

Failed to mount RPC Pipe File System.

Are any other services working in your container? If so, this is a different issue from what I experienced.
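
If it's only the RPC pipe filesystem failing, you could check the mount unit inside the container (run-rpc_pipefs.mount is, if I remember correctly, what nfs-common ships on Debian) or try the mount by hand:

Code:
journalctl -u run-rpc_pipefs.mount
mount -t rpc_pipefs sunrpc /run/rpc_pipefs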
 
