[SOLVED] NFS server in LXC

generalproxuser

I have been researching and attempting this for a while now. Maybe it's impossible but I haven't given up completely.

What I want to accomplish is to create an LXC that does two things: NFS and TFTP server. I have a ZFS pool and a directory that successfully gets mounted inside my container with rwx permissions.

I have only been testing on the debian11 ct template, but I can go back to 10 or 9 if needed (I'd really like to stay current).

I want to do this with an unprivileged container and an apparmor profile (found a "default with nfsd" profile but it was back in proxmox5).

The closest I have gotten so far is that the container has access to my ZFS pool directory and can r/w files to it. I cannot get the nfs-kernel-server to start. It keeps failing with an "A dependency job for nfs-server.service failed" error.

Any ideas welcome. I feel I am close but not sure where to go next. Thanks.
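To see which dependency actually failed, the generic systemd tools are enough (nothing Proxmox-specific; in an unprivileged ct it is often proc-fs-nfsd.mount or rpcbind that is the unit failing):
Code:
# inside the container
systemctl status nfs-server.service
systemctl --failed
journalctl -xe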
 
I got it to work.

It requires a privileged container with an apparmor profile.

Also, the nfs container I got working doubles as my tftp server for my raspberry pi network boots and as my smb share server. Really the only thing I use nfs for is the root filesystems for the network boot devices.

Compared to my old setup (omv on an odroid hc2), the container is a lot easier and faster for me to configure (from the command line).
 
@moxmox I have to dig up my notes. It's been working and not giving me issues so I haven't revisited it in some time other than to update whatever needs it.

I will say off the top of my head that I had to create the apparmor profile needed for a privileged container to be able to use nfs services. I was able to get the information from the depths of this forum and other web searches.
 
@generalproxuser

Yes, I have tried some apparmor settings but haven't managed to find the right combination. If you can find your notes it would be much appreciated, thanks!
 
This will be a bit long and is mainly about how to get the container started. User and group permissions will depend on your environment, which I cannot anticipate. This might not even be entirely correct or secure, so use at your own risk. This is how I got it to work for me.

Creating the apparmor profile from the proxmox host terminal (ssh or console shell):
Code:
sudo touch /etc/apparmor.d/lxc/lxc-default-with-nfsd

sudo nano /etc/apparmor.d/lxc/lxc-default-with-nfsd

Add the following contents and save the file:
Code:
# Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers, which
# will source all profiles under /etc/apparmor.d/lxc

profile lxc-container-default-with-nfsd flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  # the container may never be allowed to mount devpts.  If it does, it
  # will remount the host's devpts.  We could allow it to do it with
  # the newinstance option (but, right now, we don't).
  deny mount fstype=devpts,
  mount fstype=nfsd,
  mount fstype=rpc_pipefs,
  mount fstype=cgroup -> /sys/fs/cgroup/**,
}

Next file to edit:
Code:
sudo nano /etc/apparmor.d/lxc/lxc-default
add this line inside the profile block (before the closing brace) and save:
Code:
mount fstype=nfs*,

Next file to edit:
Code:
sudo nano /etc/apparmor.d/lxc/lxc-default-with-mounting
add this line inside the profile block (before the closing brace) and save:
Code:
mount fstype=nfs*,

Next copy one file to another:
Code:
sudo cp -i /etc/apparmor.d/lxc/lxc-default-cgns /etc/apparmor.d/lxc/lxc-default-with-nfs

Edit the file:
Code:
sudo nano /etc/apparmor.d/lxc/lxc-default-with-nfs
Change the profile name in the file from "default-cgns" to "default-with-nfs",
then add these lines inside the profile block (before the closing brace) and save:
Code:
mount fstype=nfs*,
mount fstype=rpc_pipefs,

This reloads the AppArmor profiles so the new and edited ones take effect (/etc/apparmor.d/lxc-containers sources all the profiles under /etc/apparmor.d/lxc):
Code:
sudo apparmor_parser -r /etc/apparmor.d/lxc-containers
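To double-check that the new profile actually loaded, something like this should list it (aa-status ships with the apparmor package; the grep pattern is just an example):
Code:
sudo aa-status | grep nfsd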
Create a debian container as privileged > don't start after creation

Edit the container files:
Code:
sudo nano /etc/pve/nodes/nodeName/lxc/vmid.conf
- nodeName = the name of your proxmox node
- vmid = the numeric ID of your ct (the config file is named after it)

add to end of file and save:
Code:
lxc.apparmor.profile = lxc-container-default-with-nfsd

Another file to edit:
Code:
sudo nano /var/lib/lxc/vmid/config
add to end of file and save:
Code:
lxc.apparmor.profile = lxc-container-default-with-nfsd

Another file to edit:
Code:
sudo nano /etc/pve/nodes/nodeName/lxc/vmid.conf
add your nfs directory mount point to the ct and save the file:
Code:
mp0: /directory/path/in/host,mp=/directory/mount/in/ct

Start the container, connect to its shell, and update the ct:
Code:
apt update
apt upgrade -y
apt install nfs-kernel-server -y
--------------------------------------------------------------------
From there you can setup your nfs exports and user/group permissions per your environment.
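As a hedged example only (path, subnet and options are placeholders for whatever fits your environment), an entry in /etc/exports inside the ct could look like:
Code:
# /etc/exports -- example entry, adjust path/subnet/options to your setup
/srv/nfs  192.168.1.0/24(rw,sync,no_subtree_check)
followed by:
Code:
exportfs -ra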
 
thanks very much for writing this all out - much appreciated. will try it out tomorrow.
 
I forgot to add:

Inside the file:
Code:
/etc/pve/nodes/nodeName/lxc/vmid.conf

You also want to make sure you have the line:
Code:
features: nesting=1

So you would be adding three items total to that file.
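Put together, the three additions to that vmid.conf end up looking something like this (the mp0 paths are examples only, use your own):
Code:
features: nesting=1
mp0: /tank/shares,mp=/srv/shares
lxc.apparmor.profile = lxc-container-default-with-nfsd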
 
How would a container run a kernel module (aka nfs-kernel-server)?

You may have better luck running a user space NFS server like Ganesha-NFS in the container.
I like this idea but am having a difficult time getting it working. I keep getting "clnt_create: RPC: Program not registered". Do you have any tips on how to get this working in an Ubuntu container?
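For what it's worth, a minimal NFS-Ganesha setup is sketched below; package names and paths are assumptions, so check them against your distro and the Ganesha docs. Serving NFSv4 only and mounting with -o vers=4 sidesteps the rpcbind/MOUNT registration that usually produces that "Program not registered" message:
Code:
# inside the ct (Debian/Ubuntu package names)
apt install nfs-ganesha nfs-ganesha-vfs
Example export in /etc/ganesha/ganesha.conf:
Code:
EXPORT {
    Export_Id = 1;
    Path = /srv/share;     # directory inside the container
    Pseudo = /share;       # NFSv4 pseudo path that clients mount
    Protocols = 4;         # NFSv4 only, no rpcbind needed
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL { Name = VFS; }
}
Clients would then mount it with something like mount -t nfs4 <ct-ip>:/share /mnt.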
 
Thanks from me too, had found a blog beginning after the container creation, I missed the nesting option
 
@Grunchy permissions will always be the biggest headache when you are starting out.

I'm assuming you have the users/groups existing inside the container since your smb setup is specifying groups to access the share. The users don't need a home directory or a login shell. They just need a samba password. NFS shouldn't give you too much trouble since it doesn't enforce user credentials by default. If you can't access the NFS share to write to it then its permissions need to be reviewed.
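For example (the username here is made up), a share-only samba user inside the ct can be created with:
Code:
# no home directory, no login shell -- just a samba password
useradd --no-create-home --shell /usr/sbin/nologin mediauser
smbpasswd -a mediauser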

I usually set the root directory on the proxmox host to nobody:nogroup and then chmod 0774 (no -R) the root directory. From there I start managing the folder permissions inside the container (ie creating new directories from inside the container).
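As a concrete example of that (the dataset path is a placeholder for your own pool), on the proxmox host:
Code:
# on the proxmox host
chown nobody:nogroup /tank/shares
chmod 0774 /tank/shares    # deliberately not recursive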
 
After reading this thread, I wonder if it's worth the hassle to configure an LXC container to share ZFS datasets from the host via Samba and NFS rather than sharing from the host directly?
 
+1
I just went through the LXC permission journey and ended up with the same question
 
I am still using an lxc to share folders on the host via nfs/tftp/smb. I am sharing the host storage with other devices on my network, not necessarily exclusively with vms/cts. I bind mount host directories into cts.

Since my server has all the ssds for my storage, I didn't want to create multiple users on the host and clutter up the pve host environment. Creating the users inside another vm/ct running ldap/idm/nfs/tftp/smb services lets those users write to the pve host storage (zfs pool) without existing directly on the host.

I also use virtiofsd to share host directories directly to vms.
 
Thanks very much for this detailed guide. I'm getting a lot of errors in my journal similar to

AVC apparmor="DENIED" operation="mount" class="mount" info="failed flags match" error=-13 profile="lxc-container-default-with-nfsd" name="/" pid=3439 comm="(stunnel4)" flags="rw, rslave"

Any idea what the cause could be?
 
The steps here to get it to work should be considered obsolete; you can just use a debian 12 ct and enable the nesting and nfs features of the ct then install nfs-kernel-server as usual as well as your other services. I had re-done my ct a while back with that method (debian 12 ct, nesting, nfs on proxmox 8) and I did not have to do any fiddling with other files.
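For reference, as far as I know that corresponds to the following in the ct config (or the NFS and Nesting checkboxes under Options > Features in the GUI); double-check the syntax against the pct.conf man page:
Code:
features: mount=nfs,nesting=1
or from the host shell:
Code:
pct set <vmid> --features mount=nfs,nesting=1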
 
I'm just now struggling with this as well.
Saw your post, but if you want to activate the NFS or SMB feature, your ct has to be privileged. There is no other way to get one of those options activated, unfortunately, so I guess I'll be looking for a nice tutorial to set up a Samba/NFS VM
 