Is it possible to run an NFS server within an LXC container?

luckyspiff

Renowned Member
Oct 3, 2015
My attempt to run an NFS server within an LXC Linux container failed. I used the Debian-based TurnKey Linux fileserver template and tried to activate the NFS kernel server within its LXC container (/etc/exports was already configured):

Code:
# /etc/init.d/nfs-kernel-server start
mount: nfsd is write-protected, mounting read-only
mount: cannot mount nfsd read-only
Exporting directories for NFS kernel daemon....
Starting NFS kernel daemon: nfsdrpc.nfsd: Unable to access /proc/fs/nfsd errno 2 (No such file or directory).
Please try, as root, 'mount -t nfsd nfsd /proc/fs/nfsd' and then restart rpc.nfsd to correct the problem
 failed!

# mount -t nfsd nfsd /proc/fs/nfsd
mount: nfsd is write-protected, mounting read-only
mount: cannot mount nfsd read-only

Is there some trick to enable this inside an LXC container, or is it generally impossible because the nfs-kernel-server cannot be shared with containers?

I would prefer to set up the NFS server within an LXC container and not enable it on the Proxmox host itself.

Thank you!
 
So did you have to install nfs-kernel-server on the host as well?

No, on the Proxmox host itself I didn't install anything NFS-specific; the nfs-kernel-server is installed and running only in the LXC container:

On the Proxmox node (these packages were already installed with Proxmox):
Code:
root@proxmox:~# dpkg -l | grep nfs
ii  libnfsidmap2:amd64             0.25-5                         amd64        NFS idmapping library
ii  nfs-common                     1:1.2.8-9                      amd64        NFS support files common to client and server

In the container:
Code:
root@nas:~# dpkg -l | grep nfs
ii  libnfsidmap2:amd64             0.25-5                    amd64        NFS idmapping library
ii  nfs-common                     1:1.2.8-9                 amd64        NFS support files common to client and server
ii  nfs-kernel-server              1:1.2.8-9                 amd64        support for NFS kernel server

Here's my container config file /etc/pve/lxc/*.conf (slightly anonymized):
Code:
arch: amd64
cpulimit: 2
cpuunits: 1024
hostname: nas
memory: 512
net0: bridge=vmbr0,gw=192.168.12.3,hwaddr=xx:xx:xx:xx:xx:xx,ip=192.168.12.8/24,name=eth0,type=veth
onboot: 1
ostype: debian
rootfs: pve-container:subvol-108-disk-1,size=8G
swap: 2048
lxc.mount.entry: /tank/data srv/data none bind,create=dir,optional 0 0
lxc.aa_profile: unconfined
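
For completeness, here is what the exports file inside the container could look like; the original poster didn't show theirs, so the path and subnet below are only an illustration matching the bind-mounted /srv/data directory above:

Code:
# /etc/exports inside the container (illustrative example, not from the thread)
# export the bind-mounted directory read/write to the local subnet
/srv/data 192.168.12.0/24(rw,sync,no_subtree_check)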
 
Hi,
I'm trying to do the same as you. If I'm not wrong, you created a mount from the container to Proxmox?
Do you know whether a mount from LXC to LXC is possible and what it should look like?
 
Hi,
I'm trying to do the same as you. If I'm not wrong, you created a mount from the container to Proxmox?
Do you know whether a mount from LXC to LXC is possible and what it should look like?

Or vice versa: the line "lxc.mount.entry: /tank/data srv/data none bind,create=dir,optional 0 0" mounts a directory from the node filesystem (/tank/data) into a directory of the container (/srv/data in the container filesystem).

I guess a mount from container to container is not possible (without NFS or SMB/CIFS) if the directory is not part of the node filesystem. That's the reason why I didn't use FreeNAS inside a VM and instead set up the ZFS pool on the Proxmox node.
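
As a side note, the same kind of bind mount can also be added as a Proxmox mount point instead of a raw lxc.mount.entry line; this is only a sketch of that alternative, using the paths and container ID from my setup above:

Code:
# sketch: bind-mount the node directory /tank/data
# into /srv/data of container 108 as a Proxmox mount point
pct set 108 -mp0 /tank/data,mp=/srv/data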
 
So

I simply have to create a directory on the node and add the two lines to the container config to mount a directory from the node into the container?
The containers share a private network that does not include the Proxmox node, so a "classic NFS share" is not possible.
 
So

I simply have to create a directory on the node and add the two lines to the container config to mount a directory from the node into the container?
The containers share a private network that does not include the Proxmox node, so a "classic NFS share" is not possible.

It's a bit confusing when you mix the topics of "filesystem mounts between containers and/or the node" and "network access to a directory".

For filesystem mounts, AFAIK it is *not* possible to mount a directory from one container into another. To do so, the directory has to be part of the node filesystem; then you can mount it into both containers by adding a "lxc.mount.entry" line to each config. But when your filesystem exists only inside a container/VM, the storage can only be exported via a network protocol like NFS/SMB or as an iSCSI block device.
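
As a sketch of that "shared node directory" approach (the directory and container IDs below are made up for illustration), the same bind mount line would be added to both container configs:

Code:
# hypothetical example: the node directory /tank/shared mounted
# into /srv/shared of two different containers
# in /etc/pve/lxc/101.conf:
lxc.mount.entry: /tank/shared srv/shared none bind,create=dir,optional 0 0
# in /etc/pve/lxc/102.conf:
lxc.mount.entry: /tank/shared srv/shared none bind,create=dir,optional 0 0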

Networking of containers can be done in several ways (described on this wiki page). The default configuration of Proxmox is that the primary network interface is set up in bridge mode. This is like a virtual switch in the node kernel that every container and/or VM (and the node itself) is connected to by its virtual network device. Usually that means that a container has its own IP address in the same network.

A "classic NFS share" is therefore possible the same way like on
machines with separate hardware: setup NFS server in one container and export a path, then you can (NFS-)mount that export in another container. Because a NFS mount is not a filesystem mount, this also works on completely separate filesystems in the containers and there's no need for the "lxc.mount.entry" line. But because Linux-NFS server is implemented in the kernel that is shared between all containers, to run the nfs-kernel-server you have add "lxc.aa_profile: unconfined" to tell AppArmor this is allowed (AppArmor is a security layer that forbids containers some potentially dangerous things).
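
On the client side that would look roughly like this; the server IP and paths are illustrative and assume the server container exports /srv/data as in the example config above:

Code:
# on the NFS server container: re-read /etc/exports and show what is exported
exportfs -ra
exportfs -v

# on the client container: mount the export over the network (illustrative IP/path)
mount -t nfs 192.168.12.8:/srv/data /mnt/data

# or persistently via /etc/fstab on the client
# 192.168.12.8:/srv/data  /mnt/data  nfs  defaults  0  0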

 
+1 - this worked for me.

Would be nifty to have a few items like this added to the GUI options

That would be a grave security issue, and it will never happen.

You should only set this option if you are aware of the consequences/implications - you pretty much remove most of the restrictions imposed on the container, which means that root from within can take over your host.
 
This particular option (especially when setting it to `unconfined`) would have to be restricted to root@pam anyway, which is why we're somewhat reluctant to add it to the GUI.
 
If you don't wish to compromise security, you can create a new AppArmor profile and apply it to the container you want to host NFS shares on.

Create a new file "/etc/apparmor.d/lxc/lxc-default-with-nfsd" and paste in the following:

Code:
# Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers, which
# will source all profiles under /etc/apparmor.d/lxc

profile lxc-container-default-with-nfsd flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  # the container may never be allowed to mount devpts.  If it does, it
  # will remount the host's devpts.  We could allow it to do it with
  # the newinstance option (but, right now, we don't).
  deny mount fstype=devpts,
  mount fstype=nfsd,
  mount fstype=rpc_pipefs,
  mount fstype=cgroup -> /sys/fs/cgroup/**,
}

Then run this command to reload the profiles:
Code:
apparmor_parser -r /etc/apparmor.d/lxc-containers

Finally, add this line to your /etc/pve/lxc/CTID.conf:
Code:
lxc.aa_profile: lxc-container-default-with-nfsd
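
The container then needs to be restarted so it picks up the new profile; assuming your container ID is CTID, something like this on the node should do it:

Code:
# restart the container so the new AppArmor profile is applied
pct stop CTID
pct start CTID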
 
I tried this extra aa_profile on a freshly installed 5.0 node without success. "lxc.aa_profile: unconfined" in the CT config does not work either, although it works on my other 4.4 node.

What is the correct solution for an NFS server inside an LXC container on a 5.0 Proxmox node?
 
I tried this extra aa_profile on a freshly installed 5.0 node without success. "lxc.aa_profile: unconfined" in the CT config does not work either, although it works on my other 4.4 node.

What is the correct solution for an NFS server inside an LXC container on a 5.0 Proxmox node?

Hi!

Code:
# cat /etc/pve/lxc/100.conf
arch: amd64
cores: 2
hostname: barc
memory: 2048
mp0: local-zfs:subvol-100-disk-2,mp=/var/barc,replicate=0,size=5000G
nameserver: 192.168.0.201 8.8.8.8
net0: name=eth0,bridge=vmbr0,gw=192.168.0.254,hwaddr=EA:7A:87:D7:3D:D4,ip=192.168.0.222/24,type=veth
net1: name=eth1,bridge=vmbr2,hwaddr=EA:C0:34:25:B3:AA,ip=192.168.110.61/27,type=veth
onboot: 1
ostype: debian
rootfs: local-zfs:subvol-100-disk-1,size=8G
searchdomain: mfczgo.ru
swap: 512
lxc.aa_profile: unconfined
# pveversion
pve-manager/5.0-29/6f01516 (running kernel: 4.10.17-1-pve)

It works for me. :)

On the CT:

Code:
# systemctl status nfs-kernel-server.service
* nfs-server.service - NFS server and services
   Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
   Active: active (exited) since Wed 2017-08-09 10:09:49 UTC; 7min ago
  Process: 10679 ExecStopPost=/usr/sbin/exportfs -f (code=exited, status=0/SUCCESS)
  Process: 10678 ExecStopPost=/usr/sbin/exportfs -au (code=exited, status=0/SUCCESS)
  Process: 10677 ExecStop=/usr/sbin/rpc.nfsd 0 (code=exited, status=0/SUCCESS)
  Process: 10687 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
  Process: 10686 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
 Main PID: 10687 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4915)
   CGroup: /system.slice/nfs-server.service

Aug 09 10:09:49 barc systemd[1]: Starting NFS server and services...
Aug 09 10:09:49 barc systemd[1]: Started NFS server and services.

# showmount
Hosts on barc:
192.168.110.17
192.168.110.18
192.168.110.19
192.168.110.41
192.168.110.42
192.168.110.43
192.168.110.44

Best regards,
Gosha
 
Thank you for your quick reply!

I added the "lxc.aa_profile: unconfined" line and upgraded Debian inside the CT from Jessie to Stretch.
I shut down the CT, started it again, and I still get the error:


Code:
Linux nas003 4.10.17-1-pve #1 SMP PVE 4.10.17-18 (Fri, 28 Jul 2017 14:09:00 +0200) x86_64

root@nas003:~# service nfs-kernel-server restart
[ ok ] Stopping NFS kernel daemon: mountd nfsd.
[ ok ] Unexporting directories for NFS kernel daemon....
[warn] Not starting NFS kernel daemon: no support in current kernel. ... (warning).

Code:
# cat /etc/pve/lxc/116.conf
#Debian with NFS share
arch: amd64
cores: 2
cpulimit: 2
hostname: nas003
memory: 256
net0: name=eth0,bridge=vmbr0,gw=xxxxxxxx,hwaddr=xxxxxxx:6E,ip=xxxxxxxx,type=veth
onboot: 1
ostype: debian
rootfs: local-lvm:vm-116-disk-1,size=300G
swap: 256
lxc.aa_profile: unconfined

Code:
# pveversion
pve-manager/5.0-30/5ab26bc (running kernel: 4.10.17-1-pve)
 
I tried again on the 5.0 node:

I created a new CT with a Debian Jessie turnkey template.
I added the "lxc.aa_profile: unconfined".
I started the CT.
I installed "nfs-kernel-server" and created /etc/exports, resulting in the same error:

Code:
# service nfs-kernel-server start
[warn] Not starting NFS kernel daemon: no support in current kernel. ... (warning).

Thank you for any advice!
Falco
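
One possible cause worth checking (this is an assumption on my part, not confirmed anywhere in the thread) is that the nfsd kernel module is not loaded on the 5.0 host; since the container shares the host kernel, the module has to be present there. On the Proxmox node that could be checked and fixed roughly like this:

Code:
# on the Proxmox node, not inside the container:
# check whether the nfsd module is loaded
lsmod | grep nfsd

# load it now
modprobe nfsd

# load it automatically on every boot
echo nfsd >> /etc/modules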
 
Thank you for your quick reply!

I added the "lxc.aa_profile: unconfined" line and upgraded Debian inside the CT from Jessie to Stretch.
I shut down the CT, started it again, and I still get the error:

I installed the CT from this template:

[attached screenshot: pic_2.png]

Best regards,
Gosha
 
