NFS inside OpenVZ container?

Discussion in 'Proxmox VE: Installation and configuration' started by dlasher, May 4, 2012.

  1. dlasher

    dlasher Member

    Joined:
    Mar 23, 2011
    Messages:
    107
    Likes Received:
    5
    Not having much luck getting an NFS server running inside an OpenVZ container (ubuntu 11.10 64-bit). I've googled a bit, and there's discussion about problems with NFSv4 client/server inside a container.

    Installing/starting nfs-common hangs indefinitely.

    Any secrets to success?
     
  2. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,481
    Likes Received:
    96
    First, one important piece of information: NFSv4 is not supported in OpenVZ (mount.nfs4: Protocol not supported). More info here: http://forum.openvz.org/index.php?t=msg&goto=46174&. NFSv3 is notorious for leaving hanging file locks, and in my opinion it should not be used in file-intensive setups.

    To activate NFS inside a container issue the command below:
    Code:
    vzctl set $CTID --feature nfsd:on --save
    Remember to answer yes in this file inside the container, /etc/default/nfs-common:
    # Do you want to start the statd daemon? It is not needed for NFSv4.
    NEED_STATD=yes


    On the PVE host you need to ensure the modules nfs and nfsd are loaded:
    Code:
    modprobe nfsd
    modprobe nfs

    After all this, the OpenVZ container must be restarted.
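    Putting the steps above together, the host-side sequence can be sketched like this (a sketch only; the CTID 101 is an assumed example, substitute your own container ID):

    Code:
    # On the PVE host: load the NFS server modules
    modprobe nfs
    modprobe nfsd

    # Enable the nfsd feature for the container and persist it in its config
    vzctl set 101 --feature nfsd:on --save

    # Restart the container so the feature takes effect
    vzctl restart 101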

    Code:
    sudo mount -vt nfs nfs1:/exports/distro /opt/tmp
    mount.nfs: timeout set for Fri May 4 00:28:56 2012
    mount.nfs: trying text-based options 'vers=4,addr=192.168.2.202,clientaddr=192.168.2.79'
    mount.nfs: mount(2): Protocol not supported
    mount.nfs: trying text-based options 'addr=192.168.2.202'
    mount.nfs: prog 100003, trying vers=3, prot=6
    mount.nfs: trying 192.168.2.202 prog 100003 vers 3 prot TCP port 2049
    mount.nfs: prog 100005, trying vers=3, prot=17
    mount.nfs: trying 192.168.2.202 prog 100005 vers 3 prot UDP port 41964

    As can be seen, NFSv4 is tried first but fails, so NFSv3 is used instead.

    If NFS is important, I would use a VM instead, which can deliver NFSv4 with its proper file-lock support, and wait until OpenVZ supports NFSv4.
     
    #2 mir, May 4, 2012
    Last edited: May 4, 2012
  3. dlasher

    dlasher Member

    Joined:
    Mar 23, 2011
    Messages:
    107
    Likes Received:
    5
    Thank you, that's a start. As background: I'm using VMs and trying to migrate as many as I can to CTs. In one case the machine I'm using doesn't support VMs (older Opteron), so CTs are the only option.


    In the two cases I'm looking at using NFS with openVZ:

    1. NFS server for PXE booted hosts
    2. NFS server on host, NFS client on Containers

    In case (1), is it better to run the NFS server on the host rather than in a container?
    In case (2), am I better off mounting/mapping directories from the host rather than using NFS? (syntax/examples needed if that's even possible)
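    For case (2), if NFS is used from inside a container, note that the client side also needs its own OpenVZ feature flag (nfs, as opposed to nfsd for the server). A sketch, assuming an example CTID of 101 and the server/path names from earlier in the thread:

    Code:
    # On the PVE host: allow NFS client mounts inside the container
    vzctl set 101 --feature nfs:on --save
    vzctl restart 101

    # Inside the container: mount the export (server name and paths assumed)
    mount -t nfs nfs1:/exports/distro /opt/tmp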
     
  4. dlasher

    dlasher Member

    Joined:
    Mar 23, 2011
    Messages:
    107
    Likes Received:
    5
    By the way, installing/starting nfs-common (ubuntu 11.10) still hangs forever after following the directions:

    .....
    Setting up nfs-common (1:1.2.2-4ubuntu5.1) ...
    <hangs here forever>
     
  5. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,481
    Likes Received:
    96
    A container here is perfectly fine since PXE only involves reading. I have a PXE boot server running in a container myself.
    Again, if it is more or less only reading, go for a container. Mounting between host and container is a bad thing since it pins your container to that specific host - no migration.
     
  6. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,481
    Likes Received:
    96
    The debian-6.0-standard template available for download works perfectly, so try this one instead.
     
  7. dlasher

    dlasher Member

    Joined:
    Mar 23, 2011
    Messages:
    107
    Likes Received:
    5
    Sorry, I should have provided more information. Scenario #2 is heavy on disk I/O, and since their setup is a single box, pinning to the specific host isn't a bad thing. Assuming I wanted to go down that road, what does the config look like? (They have a RAID array mounted as "/raid" on the host, and in the existing VM config, the VM mounts the NFS export from the underlying host.)
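    For pinning a host directory into a container, OpenVZ supports per-container mount scripts that run on the host at every container start. A sketch of what such a script could look like, assuming the array is at /raid on the host, an example CTID of 101, and that it should appear as /raid inside the container:

    Code:
    #!/bin/bash
    # /etc/vz/conf/101.mount - executed on the host each time CT 101 starts
    . /etc/vz/vz.conf       # global OpenVZ settings
    . ${VE_CONFFILE}        # per-container config; provides ${VE_ROOT}

    SRC=/raid               # directory on the host (assumed)
    DST=/raid               # where it appears inside the container (assumed)

    mkdir -p ${VE_ROOT}${DST}
    mount -n --bind ${SRC} ${VE_ROOT}${DST}

    The script must be executable (chmod +x), and the bind mount disappears automatically when the container stops. This avoids the NFS layer entirely, at the cost of tying the container to this host.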
     
  8. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,481
    Likes Received:
    96
    You need to be more specific. The best way would be if you describe how you would make the configuration and then we can use that as a starting point.
     
  9. edewolf

    edewolf New Member

    Joined:
    May 25, 2012
    Messages:
    1
    Likes Received:
    0
    Probably statd is hanging. Try restarting it with /etc/init.d/statd stop (and start). You'll see that nfs-common installs fine.
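    In shell form, the workaround above might look like this, run inside the container (the dpkg step is an assumption, to re-run the interrupted package configuration):

    Code:
    # Inside the container: stop the hanging statd so the install can finish
    /etc/init.d/statd stop
    dpkg --configure -a     # assumed: completes the interrupted nfs-common setup
    /etc/init.d/statd start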
     