Is it possible to run an NFS server within an LXC container?

The "profile lxc-container-default-with-nfsd" solution also works now with the Debian 9.02 template.

For the record:
Upgrading a Debian 8 LXC container to Debian 9 did not help, as far as I could tell.

I think this will cause big problems for everybody upgrading from Proxmox 4.4 to 5.0 while using NFS shares served from "old" Debian 8 LXC containers: they will have to recreate their NFS containers from scratch.
 
This thread was instrumental in getting a FreeIPA install up and running. However, autofs wasn't working, so the AppArmor config needed a tweak.

I also had to add the following rules to the AppArmor profile:

Code:
  ...
  mount fstype=autofs,
  mount options=(rw, bind, ro),

Thanks Proxmox!
 
Just a note about the changes needed to my NFS container after upgrading to Proxmox 5:

After the upgrade of Proxmox, the NFS server inside the container didn't run anymore. I needed to upgrade the container to Debian 9 and start nfs-kernel-server via systemctl; that solved the start problems.
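For reference, assuming the standard Debian package and service names, that amounts to something like:
Code:
apt install nfs-kernel-server
systemctl enable --now nfs-kernel-server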

Another problem that appeared after the upgrade, on the client side, was that documents stored on the server were locked, i.e. not changeable by LibreOffice and other office packages. To solve this, I had to set the NFS mount option "local_lock=all" on the clients. With autofs, this option can simply be appended to the line starting with "/net" in the /etc/autofs/auto.master file.
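For illustration, if your master map uses the common "/net -hosts" entry, appending the mount option as described might look like this (the exact option syntax can vary between autofs versions and distributions):
Code:
/net  -hosts  -local_lock=all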

(I'm not sure if the locking problem was really caused by the Proxmox upgrade; it could also be a coincidence caused by other changes.)
 
If you don't wish to compromise security, you can make a new AppArmor profile and apply it to the container you want to host NFS shares from.

Create a new file "/etc/apparmor.d/lxc/lxc-default-with-nfsd" and paste in the following:

Code:
# Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers, which
# will source all profiles under /etc/apparmor.d/lxc

profile lxc-container-default-with-nfsd flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  # the container may never be allowed to mount devpts.  If it does, it
  # will remount the host's devpts.  We could allow it to do it with
  # the newinstance option (but, right now, we don't).
  deny mount fstype=devpts,
  mount fstype=nfsd,
  mount fstype=rpc_pipefs,
  mount fstype=cgroup -> /sys/fs/cgroup/**,
}

Then run this command to reload the profiles:
Code:
apparmor_parser -r /etc/apparmor.d/lxc-containers

Finally, add this line to your /etc/pve/lxc/CTID.conf:
Code:
lxc.aa_profile = lxc-container-default-with-nfsd
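To double-check that a restarted container actually picked up the profile, the current AppArmor label can be read from inside the container; this is a generic AppArmor check, not anything Proxmox-specific:
Code:
cat /proc/self/attr/current
# expected output: lxc-container-default-with-nfsd (enforce)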

Hello,

While this is an old thread, it retains a high ranking with search engines, and this post is still applicable with Proxmox 5.3-1. One caveat: lxc.aa_profile is deprecated; use `lxc.apparmor.profile` instead in your container configuration.
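In other words, the line from the post above becomes the following in /etc/pve/lxc/CTID.conf:
Code:
# deprecated spelling:
# lxc.aa_profile = lxc-container-default-with-nfsd
# current spelling:
lxc.apparmor.profile = lxc-container-default-with-nfsd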

Regards
 
Adding lxc.apparmor.profile: unconfined enabled NFS for me on 5.3.
 
Adding lxc.apparmor.profile: unconfined enabled NFS for me on 5.3.

That also disables AppArmor for your container entirely. It is a better option to use the "NFS" or "Nesting" features.
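For reference, those GUI checkboxes map onto the container's features option; setting both from the CLI would look something like this (the CT ID 103 is just an example):
Code:
pct set 103 --features nesting=1,mount=nfs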
 
Should the "NFS" and/or "Nesting" features be all that is required to change in order to run nfs-kernel-server in an LXC container? I have enabled all of these options in 5.3 and also toggled "unprivileged" and no combination of settings results in the NFS server starting successfully. Each time it fails with a dependency error and journalctl shows it is related to rpc_pipefs.
 
Should the "NFS" and/or "Nesting" features be all that is required to change in order to run nfs-kernel-server in an LXC container? I have enabled all of these options in 5.3 and also toggled "unprivileged" and no combination of settings results in the NFS server starting successfully

The NFS option only works for privileged containers. You can try using a userspace NFS server (like NFS-Ganesha) or use a privileged container.
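For anyone trying the userspace route, a minimal NFS-Ganesha export block looks roughly like this; the path, pseudo path, and export ID are placeholders, not values from this thread:
Code:
# /etc/ganesha/ganesha.conf -- hypothetical minimal export
EXPORT {
    Export_Id = 1;                 # any unique ID
    Path = /srv/share;             # directory to export (placeholder)
    Pseudo = /share;               # NFSv4 pseudo-filesystem path
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL { Name = VFS; }           # plain local filesystem backend
}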
 
Right... what I was trying to say is that nfs-kernel-server will not start in a privileged container for me on 5.3 (unless I have a bad assumption about how to enable that). Is this possible only when the container is created, or is it controlled via the unprivileged option in the config? Would I also need to completely disable AppArmor? I thought I had read that if nothing is specified, privileged is still the default? Thanks for your help.
 
The NFS server is kernel-side; unprivileged containers won't have any more control over it than privileged containers. IMO it's generally not all that useful to move something which runs in the kernel anyway into a container. There's no option we provide which would "just enable" it. Better to use a VM or a userspace NFS server implementation.
 
Hello, I'm using Proxmox 6.0:
pve-manager/6.0-4/2a719255 (running kernel: 5.0.15-1-pve)

I've tried to add a profile named /etc/apparmor.d/lxc/lxc-default-with-nfsd
with this content:
Code:
profile lxc-container-default-with-nfsd flags=(attach_disconnected,mediate_deleted) {
  deny mount fstype=devpts,
  mount fstype=rpc_pipefs,
  mount fstype=nfs,
  mount fstype=autofs,
  mount options=(rw, bind, ro),
  mount fstype=nfsd,
  mount fstype=cgroup -> /sys/fs/cgroup/**,
}
Then reloaded the profiles with this command:
Code:
apparmor_parser -r /etc/apparmor.d/lxc-containers

Then I added this line at the end of my CT config /etc/pve/lxc/103.conf:
Code:
lxc.apparmor.profile = lxc-container-default-with-nfsd

Now it looks like this:
Code:
arch: amd64
cores: 2
hostname: FOGSERVER
memory: 2048
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=FE:F4:D6:14:A1:5A,ip=dhcp,type=veth
ostype: ubuntu
rootfs: local-zfs:subvol-103-disk-0,size=500G
swap: 2048
unprivileged: 1
lxc.apparmor.profile = lxc-container-default-with-nfsd

The CT is Ubuntu 16.04. I've restarted it after applying all the changes in the Proxmox shell, and after all that I'm still getting this error when trying to install nfs-kernel-server:
Code:
A dependency job for nfs-server.service failed. See 'journalctl -xe' for details.
nfs-server.service couldn't start.
A dependency job for nfs-server.service failed. See 'journalctl -xe' for details.
invoke-rc.d: initscript nfs-kernel-server, action "start" failed.
● nfs-server.service - NFS server and services
   Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
   Active: inactive (dead)

Aug 30 12:20:33 FOGSERVER systemd[1]: Dependency failed for NFS server and services.
Aug 30 12:20:33 FOGSERVER systemd[1]: nfs-server.service: Job nfs-server.service/start failed with result 'dependency'.
Aug 30 12:20:33 FOGSERVER systemd[1]: Dependency failed for NFS server and services.
Aug 30 12:20:33 FOGSERVER systemd[1]: nfs-server.service: Job nfs-server.service/start failed with result 'dependency'.
Aug 30 12:27:58 FOGSERVER systemd[1]: Dependency failed for NFS server and services.
Aug 30 12:27:58 FOGSERVER systemd[1]: nfs-server.service: Job nfs-server.service/start failed with result 'dependency'.
Aug 30 12:27:58 FOGSERVER systemd[1]: Dependency failed for NFS server and services.
Aug 30 12:27:58 FOGSERVER systemd[1]: nfs-server.service: Job nfs-server.service/start failed with result 'dependency'.

It seems it's not doing its job:
Code:
systemctl list-dependencies nfs-kernel-server
● ├─proc-fs-nfsd.mount


Code:
Aug 30 12:44:01 FOGSERVER systemd[1]: run-rpc_pipefs.mount: Mount process exited, code=exited status=32
Aug 30 12:44:01 FOGSERVER mount[1381]: mount: permission denied
Aug 30 12:44:01 FOGSERVER systemd[1]: Failed to mount RPC Pipe File System.
-- Subject: Unit run-rpc_pipefs.mount has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit run-rpc_pipefs.mount has failed.
-- 
-- The result is failed.
Aug 30 12:44:01 FOGSERVER systemd[1]: Dependency failed for RPC security service for NFS server.
-- Subject: Unit rpc-svcgssd.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit rpc-svcgssd.service has failed.
-- 
-- The result is dependency.
Aug 30 12:44:01 FOGSERVER systemd[1]: rpc-svcgssd.service: Job rpc-svcgssd.service/start failed with result 'dependency'.
Aug 30 12:44:01 FOGSERVER systemd[1]: run-rpc_pipefs.mount: Unit entered failed state.
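For debugging, the mounts attempted by the failing units can be tried by hand inside the CT; these roughly mirror what run-rpc_pipefs.mount and proc-fs-nfsd.mount do:
Code:
mount -t rpc_pipefs sunrpc /run/rpc_pipefs
mount -t nfsd nfsd /proc/fs/nfsd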
 
Hi,

I'm using Proxmox 6 and it does not work for me, same as for kas1m.

I'm going to move my NFS server to the main system.


After adding all kinds of new rules to my AppArmor profile, I can still see this ERROR in my log:


Apr 13 00:06:35 server kernel: [ 2346.365434] audit: type=1400 audit(1586725595.138:55): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="/usr/bin/lxc-start" name="/proc/sys/kernel/random/boot_id" pid=18507 comm="lxc-start" srcname="/dev/.lxc-boot-id" flags="rw, bind"

I think it's not related, but it doesn't look clean.
 
I'm a bit surprised this is not described in the Proxmox Wiki. Running an NFS server in an LXC container is such a fundamental piece of functionality.
 
Everything there is about running an NFS client, nothing about an NFS server.

I ended up running NFS and Samba directly on my Proxmox server; not as clean as I would like, but it works...
 
Yes, it seems it's not possible, so in the end I decided to run the NFS server directly on the host. I then mount the NFS share in a VM and share that with Samba and WebDAV. Not perfect, but the closest I could get.
 
How is this still not resolved? It seems quite ridiculous!

I can't even mount NFS on the host. Can AppArmor be disabled permanently without breaking other things?
 
Mounting an NFS share is a different matter: you can do that by checking the NFS option in the Proxmox GUI. (I haven't tried it, but it should be easy.)
 
I have nfs-kernel-server running in a Debian 10 LXC container on PVE 6:
  1. Create a privileged container by unchecking "Unprivileged" during creation. It may be possible to convert an existing container from unprivileged to privileged by backing up and restoring.
  2. In the container's Options -> Features, enable Nesting. (The NFS feature doesn't seem necessary for running an NFS server. It may be required for an NFS client; I haven't checked.)
These two steps partially compromise host-container isolation. In security terms it's probably not much different from running the NFS server directly on the PVE host. But for me the container is preferred, as it gives me HA, backup, and restore capabilities.
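For reference, the same setup from the command line might look roughly like this; the CT ID, template file name, and hostname are placeholders, not exact values:
Code:
# create a privileged Debian 10 CT with nesting enabled (all values are examples)
pct create 105 local:vztmpl/debian-10-standard_10.7-1_amd64.tar.gz \
    --unprivileged 0 --features nesting=1 --hostname nfs-ct
pct start 105
pct exec 105 -- apt install -y nfs-kernel-server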
 
