NFS share problem

When starting or restarting containers I see some new things:

/var/lib/lxc/100/config contains config entries I've never seen before (added automatically on container restart):
Code:
lxc.apparmor.profile = generated
lxc.apparmor.raw = deny mount -> /proc/,
lxc.apparmor.raw = deny mount -> /sys/,

/var/cache/lxc/apparmor contains files for each container, like:
Code:
lxc-100_<-var-lib-lxc>

apparmor_status before restarting a container
Code:
apparmor module is loaded.
5 profiles are loaded.
5 profiles are in enforce mode.
   /usr/bin/lxc-start
   lxc-container-default
   lxc-container-default-cgns
   lxc-container-default-with-mounting
   lxc-container-default-with-nesting
0 profiles are in complain mode.
100 processes have profiles defined.
100 processes are in enforce mode.
   /usr/bin/lxc-start (4162)
   lxc-container-default-cgns (485)
   ... many more of the same for each process in the container

apparmor_status after restarting the container
Code:
apparmor module is loaded.
6 profiles are loaded.
6 profiles are in enforce mode.
   /usr/bin/lxc-start
   lxc-100_</var/lib/lxc>
   lxc-container-default
   lxc-container-default-cgns
   lxc-container-default-with-mounting
   lxc-container-default-with-nesting
0 profiles are in complain mode.
100 processes have profiles defined.
100 processes are in enforce mode.
   /usr/bin/lxc-start (14769)
   lxc-100_</var/lib/lxc>//&:lxc-100_<-var-lib-lxc>:unconfined (14876)
   ... many more of the same for each process in the container

I'm not sure why it's no longer applying lxc-container-default-cgns.
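A quick way to check which profile a process is actually confined under (using a PID reported by apparmor_status above):

Code:
# print the AppArmor label the kernel applies to the container's init process
cat /proc/14876/attr/current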
 
So the new way to do this is to use the features option in the PVE container config.

I had to make a small patch to get rpc_pipefs support working, since the option's validation pattern didn't allow underscores:

Code:
--- a/PVE/LXC/Config.pm       2018-10-22 18:37:14.141835351 +0000
+++ b/PVE/LXC/Config.pm   2018-10-22 18:37:19.117868146 +0000
@@ -283,7 +283,7 @@
            ." permission of the devices cgroup, mounting an NFS file system can"
            ." block the host's I/O completely and prevent it from rebooting, etc.",
        format_description => 'fstype;fstype;...',
-       pattern => qr/[a-zA-Z0-9; ]+/,
+       pattern => qr/[a-zA-Z0-9_; ]+/,
     },
     nesting => {
        optional => 1,

Patch that, then restart pvedaemon, pveproxy, and pvestatd.
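For example, assuming the standard systemd unit names for those daemons:

Code:
systemctl restart pvedaemon pveproxy pvestatd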

Then replace any lxc.apparmor.profile option in the pct config with:

Code:
features: mount=nfs4;nfs3;rpc_pipefs
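The same option can also be set from the shell with pct (a sketch; quote the value so the shell doesn't interpret the semicolons):

Code:
pct set 100 -features 'mount=nfs4;nfs3;rpc_pipefs'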
 
I have the same problem, but your solution is not working.

Oct 22 23:46:53 XXX kernel: [ 9971.012507] audit: type=1400 audit(1540244813.025:282): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-101_</var/lib/lxc>" name="/data/" pid=27395 comm="mount.nfs" fstype="nfs" srcname="10.10.42.210:/data/ownCloud"

Container 101 config file:

Code:
arch: amd64
cores: 2
hostname: XXX
memory: 2048
nameserver: 9.9.9.9
net0: name=eth0,bridge=vmbr0,gw=10.10.42.1,hwaddr=82:4D:EF:7A:37:D0,ip=10.10.42.40/24,type=veth
onboot: 1
ostype: debian
rootfs: NAS:101/vm-101-disk-1.raw,size=11G
searchdomain: XXX
swap: 2048
unused0: local:101/vm-101-disk-1.raw
features: mount=nfs4;nfs3;rpc_pipefs
 
@h0tw1r3 Thank you. That worked for me!

@Lt.Cmdr.Data Look at the error message: fstype="nfs". You also need to add nfs to the feature list.

Like:

Code:
features: mount=nfs;nfs4;nfs3;rpc_pipefs
 
It's still not working.

Container:
Code:
root@XXX:~# mount -v /data/
mount.nfs: timeout set for Tue Oct 23 09:29:06 2018
mount.nfs: trying text-based options 'vers=4.2,addr=10.10.42.210,clientaddr=10.10.42.40'
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting 10.10.42.210:/data/ownCloud

But the NFS permissions are fine, because it worked without problems until I updated Proxmox.

/var/log/messages (in container):
Oct 23 09:33:28 XXX kernel: [ 1780.702174] audit: type=1400 audit(1540280008.034:39): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-101_</var/lib/lxc>" name="/data/" pid=15468 comm="mount.nfs" fstype="nfs" srcname="10.10.42.210:/data/ownCloud"

/var/log/messages (on proxmox host):
Oct 23 09:32:34 utopia-planitia kernel: [ 1727.205795] audit: type=1400 audit(1540279954.541:38): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-101_</var/lib/lxc>" name="/data/" pid=15175 comm="mount.nfs" fstype="nfs" srcname="10.10.42.210:/data/ownCloud"

I even rebooted the whole server after the changes...
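For reference, these denials can be followed live on the host while retrying the mount:

Code:
# follow the kernel log and filter for AppArmor denials
dmesg -w | grep DENIED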
 
Please post your configuration (/etc/pve/lxc/101.conf) and the generated profile (/var/lib/lxc/101/config)

Just as an example:

One of my configs:

Code:
arch: amd64
cores: 4
hostname: data
memory: 8196
mp0: data:subvol-104-disk-1,mp=/mnt/data,size=3500G
net0: name=eth0,bridge=vmbr0,gw=192.168.178.1,hwaddr=96:0D:50:76:FC:AE,ip=192.168.178.104/24,type=veth
onboot: 1
ostype: debian
rootfs: local-lvm:vm-104-disk-1,size=8G
swap: 1024
features: mount=nfs4;nfs3;nfsd;nfs;rpc_pipefs

Generated profile (note how each fstype listed in features becomes an lxc.apparmor.raw mount rule):

Code:
lxc.arch = amd64
lxc.include = /usr/share/lxc/config/debian.common.conf
lxc.apparmor.profile = generated
lxc.apparmor.raw = deny mount -> /proc/,
lxc.apparmor.raw = deny mount -> /sys/,
lxc.apparmor.raw = mount fstype=nfs4,
lxc.apparmor.raw = mount fstype=nfs3,
lxc.apparmor.raw = mount fstype=nfsd,
lxc.apparmor.raw = mount fstype=nfs,
lxc.apparmor.raw = mount fstype=rpc_pipefs,
lxc.monitor.unshare = 1
lxc.tty.max = 2
lxc.environment = TERM=linux
lxc.uts.name = data
lxc.cgroup.memory.limit_in_bytes = 8594128896
lxc.cgroup.memory.memsw.limit_in_bytes = 9667870720
lxc.cgroup.cpu.shares = 1024
lxc.rootfs.path = /var/lib/lxc/104/rootfs
lxc.net.0.type = veth
lxc.net.0.veth.pair = veth104i0
lxc.net.0.hwaddr = 96:0D:50:76:FC:AE
lxc.net.0.name = eth0
lxc.cgroup.cpuset.cpus = 0-3
 
Code:
root@utopia-planitia:~# cat /etc/pve/lxc/101.conf
arch: amd64
cores: 2
hostname: MVLyra-Cloud
memory: 2048
nameserver: 9.9.9.9
net0: name=eth0,bridge=vmbr0,gw=10.10.42.1,hwaddr=82:4D:EF:7A:37:D0,ip=10.10.42.40/24,type=veth
onboot: 1
ostype: debian
rootfs: NAS-MVLyra:101/vm-101-disk-1.raw,size=11G
searchdomain: mv-lyra.de
swap: 2048
unused0: local:101/vm-101-disk-1.raw
features: mount=nfs;nfs4;nfs3;rpc_pipefs


Code:
root@utopia-planitia:~# cat /var/lib/lxc/101/config
lxc.arch = amd64
lxc.include = /usr/share/lxc/config/debian.common.conf
lxc.apparmor.profile = generated
lxc.apparmor.raw = deny mount -> /proc/,
lxc.apparmor.raw = deny mount -> /sys/,
lxc.apparmor.raw = mount fstype=nfs4,
lxc.apparmor.raw = mount fstype=nfs3,
lxc.apparmor.raw = mount fstype=rpc_pipefs,
lxc.monitor.unshare = 1
lxc.tty.max = 2
lxc.environment = TERM=linux
lxc.uts.name = MVLyra-Cloud
lxc.cgroup.memory.limit_in_bytes = 2147483648
lxc.cgroup.memory.memsw.limit_in_bytes = 4294967296
lxc.cgroup.cpu.shares = 1024
lxc.rootfs.path = /var/lib/lxc/101/rootfs
lxc.net.0.type = veth
lxc.net.0.veth.pair = veth101i0
lxc.net.0.hwaddr = 82:4D:EF:7A:37:D0
lxc.net.0.name = eth0
lxc.cgroup.cpuset.cpus = 0,2


OK, thanks for the hint, there is no "nfs" in the generated profile. Can I force it to be regenerated?
 
How did you restart your container after the changes? I think you have to use the pct command to generate the new profile:

Code:
pct stop 101
pct start 101
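Afterwards, check that the regenerated profile picked up the rules:

Code:
# should print one mount rule per fstype listed in the features line
grep 'fstype=' /var/lib/lxc/101/config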
 
I shut down the guest from its own console (halt -p) and started it from the web GUI, and/or with

Code:
lxc-start 101

I will try with the pct command.
 

@Salzi Do I understand it correctly that this only works for privileged containers?

Because the unprivileged directive is missing from your config, and I was not able to reproduce this on an unprivileged container (I get a mount: Operation not permitted error).
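For reference, an unprivileged container is marked by this line in its /etc/pve/lxc/<id>.conf:

Code:
unprivileged: 1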
 
