NFS inside OpenVZ container?

dlasher

Renowned Member
Mar 23, 2011
Not having much luck getting an NFS server running inside an OpenVZ container (Ubuntu 11.10, 64-bit). I've googled a bit, and there's discussion about problems with running an NFSv4 client/server inside a container.

Installing/starting nfs-common hangs... indefinitely.

Any secrets to success?
 
First, one important piece of information: NFSv4 is not supported in OpenVZ (mount.nfs4: Protocol not supported). More info here: http://forum.openvz.org/index.php?t=msg&goto=46174&. NFSv3 is notorious for leaving hanging file locks, and in my opinion NFSv3 should not be used in file-intensive setups.

To activate the NFS server feature for a container, issue the command below on the host:
Code:
vzctl set $CTID --feature nfsd:on --save
Remember to set the statd option to yes inside the container, in /etc/default/nfs-common:
Code:
# Do you want to start the statd daemon? It is not needed for NFSv4.
NEED_STATD=yes
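
Once the feature is on and nfs-kernel-server is installed in the container, you still need an export. A minimal sketch, run inside the container, using a hypothetical /exports/distro directory and the 192.168.2.0/24 subnet that appears in the mount output below:
Code:
# create a hypothetical export; adjust the path and subnet to your setup
mkdir -p /exports/distro
echo '/exports/distro 192.168.2.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra   # re-read /etc/exports without restarting the daemons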


On the PVE host you need to ensure the nfs and nfsd modules are loaded:
Code:
modprobe nfsd
modprobe nfs
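
These modprobe calls do not survive a host reboot; on Debian-based hosts such as PVE the usual convention is listing the modules in /etc/modules, e.g.:
Code:
# load the NFS modules at every boot (standard Debian mechanism)
echo nfsd >> /etc/modules
echo nfs >> /etc/modules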

After all this, the OpenVZ container must be restarted.
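
For reference, using the same $CTID as above:
Code:
vzctl restart $CTID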

Code:
sudo mount -vt nfs nfs1:/exports/distro /opt/tmp
mount.nfs: timeout set for Fri May 4 00:28:56 2012
mount.nfs: trying text-based options 'vers=4,addr=192.168.2.202,clientaddr=192.168.2.79'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'addr=192.168.2.202'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.2.202 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.2.202 prog 100005 vers 3 prot UDP port 41964

As can be seen, NFSv4 is tried first but fails, so NFSv3 is used instead.
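
If you want to skip the failing NFSv4 attempt entirely, you can pin the protocol version on the client side; a sketch with the same hypothetical server and mount point as above:
Code:
sudo mount -vt nfs -o vers=3 nfs1:/exports/distro /opt/tmp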

If NFS is important I would use a VM instead, since a VM is capable of delivering NFSv4, which has proper support for file locks, and wait until OpenVZ supports NFSv4.
 
Thank you, that's a start. As background: I'm using VMs and trying to migrate as many as I can to CTs. In one case the machine I'm using doesn't support VMs (an older Opteron), so CTs are the only option.


These are the two cases where I'm looking at using NFS with OpenVZ:

1. NFS server for PXE-booted hosts
2. NFS server on the host, NFS client in containers

In case (1), is it better to run the NFS server on the host rather than in a container?
In case (2), am I better off mounting/mapping directories from the host rather than using NFS? (syntax/examples needed, if that's even possible)
 
btw, installing/starting nfs-common (Ubuntu 11.10) still hangs forever, even after following the directions:

Code:
.....
Setting up nfs-common (1:1.2.2-4ubuntu5.1) ...
<hangs here forever>
 
1. NFS server for PXE-booted hosts
2. NFS server on the host, NFS client in containers

In case (1), is it better to run the NFS server on the host rather than in a container?
A container here is perfectly fine since PXE only involves reading. I have a PXE boot server running in a container myself.
In case (2), am I better off mounting/mapping directories from the host rather than using NFS? (syntax/examples needed, if that's even possible)
Again, if it is more or less only reading, go for a container. Mounting between host and container is a bad thing since it pins your container to the specific host - no migration.
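
One caveat if you make the containers NFS clients: OpenVZ gates client mounts behind a separate feature flag (nfs, as opposed to nfsd for the server side), with the nfs module loaded on the host. A sketch, using the same $CTID placeholder as earlier:
Code:
vzctl set $CTID --feature nfs:on --save
vzctl restart $CTID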
 
btw, installing/starting nfs-common (Ubuntu 11.10) still hangs forever, even after following the directions:

Code:
.....
Setting up nfs-common (1:1.2.2-4ubuntu5.1) ...
<hangs here forever>
The debian-6.0-standard template available for download works perfectly, so try that one instead.
 
A container here is perfectly fine since PXE only involves reading. I have a PXE boot server running in a container myself.
Again, if it is more or less only reading, go for a container. Mounting between host and container is a bad thing since it pins your container to the specific host - no migration.

Sorry, I should have provided more information: scenario #2 is heavy on disk I/O, and since their setup is a single box, pinning to the specific host isn't a bad thing. Assuming I wanted to go down that road, what does the config look like? (They have a RAID array mounted as "/raid" on the host, and in the existing setup the VM mounts the NFS export from the underlying host.)
 
Sorry, I should have provided more information: scenario #2 is heavy on disk I/O, and since their setup is a single box, pinning to the specific host isn't a bad thing. Assuming I wanted to go down that road, what does the config look like? (They have a RAID array mounted as "/raid" on the host, and in the existing setup the VM mounts the NFS export from the underlying host.)
You need to be more specific. The best way would be for you to describe how you would make the configuration, and then we can use that as a starting point.
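
For what it's worth, the usual OpenVZ way to hand a host directory to a container without NFS is a per-container mount action script. A minimal sketch, assuming the /raid array you describe and a hypothetical container ID of 101 (the script lives on the host as /etc/vz/conf/101.mount):
Code:
#!/bin/bash
# runs on the host each time CT 101 is mounted/started
. /etc/vz/vz.conf     # global OpenVZ defaults
. ${VE_CONFFILE}      # this container's config; together these set ${VE_ROOT}
mkdir -p ${VE_ROOT}/raid
mount -n --bind /raid ${VE_ROOT}/raid
Make it executable and restart the container; note that this is exactly the host-pinning setup discussed above.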
 
btw, installing/starting nfs-common (Ubuntu 11.10) still hangs forever, even after following the directions:

Code:
.....
Setting up nfs-common (1:1.2.2-4ubuntu5.1) ...
<hangs here forever>

Probably statd is hanging. Try restarting it with /etc/init.d/statd stop (and start). You'll see that nfs-common then installs fine.
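
In concrete terms, something like the following; the dpkg step is my assumption about how to finish the interrupted install, not something from the original report:
Code:
/etc/init.d/statd stop
/etc/init.d/statd start
dpkg --configure -a   # finish the configure step that hung (assumption)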
 
