Q: cleanest way to get NFS Client / NFS Mount access from within an OpenVZ Container

fortechitsolutions

Hi all,

I've been digging into this a bit and I'm curious what the recommended 'best practice' is (if any?) for giving an OpenVZ container-based VM access to data on an NFS server, i.e. mounting / accessing an NFS share from within the container.

I have manually followed the steps hinted at in http://kb.parallels.com/en/873 , which are written for Parallels Virtuozzo (the commercial version of OpenVZ), but the scripts that automate the process during start and stop of a VM don't appear to be present / the same in the expected location on a ProxVE OpenVZ host.

I'm curious if anyone has suggestions on the easiest / cleanest way to get NFS client access within an OpenVZ container.

ie,

- the approach from the link above: mount the NFS share directly on the ProxVE host, then use the mount command with the --bind option to make it accessible within a given OpenVZ container, and cook up some scripts to automate this process?

- some other approach (I've already tried tweaking the container's vz parameters with the command shown below, but this didn't work):

vzctl set 102 --features "nfs:on" --save

- any other suggestions or thoughts ?

Ideally, I'd like a solution that doesn't require significant manual intervention to make NFS mounts accessible each time a new VM is created. But I'm not sure whether that is possible...

Thanks!


Tim
 
Re: Q: cleanest way to get NFS Client / NFS Mount access from within an OpenVZ Container

OK. I've figured out how to do this, so I'm posting my notes here for 'the benefit of others'.

This is based on the Help Doc visible at the URL, http://kb.parallels.com/en/873
but with some adjustments for the ProxVE environment.


---------------------
CONFIGURE an NFS mount via the Proxmox web GUI: pretend it is an ISO repo.

In my tests here I'm using an "OpenFiler" box as the NFS server. The NFS
mount in question is mounted at "/mnt/pve/netflow-data". (Note that my
other NFS mount is from another OpenFiler box, used for storing backups of VMs
from this ProxVE host.)

Then over on an SSH / Command line session on the Proxmox VE host,
for reference, we see:

Code:
proxve:/etc/vz/conf# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/pve/root         99083868    829252  93221452   1% /
tmpfs                  2019724         0   2019724   0% /lib/init/rw
udev                     10240      2740      7500  27% /dev
tmpfs                  2019724         4   2019720   1% /dev/shm
/dev/sda1               516040     31828    458000   7% /boot
/dev/mapper/pve-data 2777130208   1470796 2775659412   1% /var/lib/vz
10.255.255.234:/mnt/dataraid/backup/proxve/
                     1913556480    528896 1815824384   1% /mnt/pve/openfiler-backups
10.255.255.233:/mnt/dataraid/storage/flowdata
                      99083868    829252  93221452   1% /mnt/pve/netflow-data



THEN, set up the mount/umount scripts in the config dir:

Code:
proxve:/etc/vz/conf# pwd
/etc/vz/conf 

proxve:/etc/vz/conf# ls -la
total 44
drwxr-xr-x 2 root root 4096 May  9 17:30 .
drwxr-xr-x 6 root root 4096 May  5 09:49 ..
-rw-r--r-- 1 root root  252 Jan 28 06:36 0.conf
-rw-r--r-- 1 root root 1396 May  6 11:40 101.conf
-rw-r--r-- 1 root root 1471 May  6 11:41 102.conf
-rwxr-xr-x 1 root root  669 May  9 17:30 102.mount
-rwxr-xr-x 1 root root  614 May  9 17:27 102.umount
-rw-r--r-- 1 root root 1550 Jan 28 06:36 ve-basic.conf-sample
-rw-r--r-- 1 root root 1585 Jan 28 06:36 ve-light.conf-sample
-rw-r--r-- 1 root root 1221 Jan 20 05:17 ve-pve.auto.conf-sample
-rw-r--r-- 1 root root 1606 Jan 28 06:36 ve-unlimited.conf-sample
proxve:/etc/vz/conf#

SPECIFICALLY: I create a 102.mount and 102.umount file, with content as shown below:

(**NOTE**: you must chmod +x each of these files to make them executable;
otherwise things don't work. This little detail isn't mentioned in the Virtuozzo
support article, but it was discussed in an OpenVZ forums thread...)
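
For the record, that's just:

Code:
chmod +x /etc/vz/conf/102.mount /etc/vz/conf/102.umount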


Code:
proxve:/etc/vz/conf# more 102.mount 
#!/bin/sh
#########################################
#
# Setup as per URL,
#
# http://kb.parallels.com/en/873
#
# but with modifications as per ProxVE OpenVZ environment....
# in order to facilitate mounting a local (NFS mounted) filesystem within an OpenVZ VM
#
# TDC May-2010
#########################################

# Check if global OpenVZ configuration and container configuration files exist
[ -f /etc/vz/vz.conf ] || exit 1
[ -f $VE_CONFFILE ] || exit 1

# Source both files. Note the order, it is important
source /etc/vz/vz.conf
source $VE_CONFFILE

mkdir -p $VE_ROOT/mnt/netflow-data && mount --bind /mnt/pve/netflow-data/102 $VE_ROOT/mnt/netflow-data
exit 0
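
(One thing the 102.mount script does NOT do: mount --bind needs its source directory to exist, so the per-container sub-dir has to be created on the NFS share beforehand. In my setup that just means something like:)

Code:
mkdir -p /mnt/pve/netflow-data/102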


---------------------------------------and the other file ..... ----------------------



proxve:/etc/vz/conf# more 102.umount 
#!/bin/sh
#########################################
#
# Setup as per URL,
#
# http://kb.parallels.com/en/873
#
# but with modifications as per ProxVE OpenVZ environment....
# in order to facilitate UN_mounting a local (NFS mounted) filesystem within an OpenVZ VM
#
# TDC May-2010
#########################################

# Check if global OpenVZ configuration and container configuration files exist
[ -f /etc/vz/vz.conf ] || exit 1
[ -f $VE_CONFFILE ] || exit 1

# Source both files. Note the order, it is important
source /etc/vz/vz.conf
source $VE_CONFFILE

umount $VE_ROOT/mnt/netflow-data/

exit 0
proxve:/etc/vz/conf#
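
(Side note: if you want to exercise these scripts by hand, outside of vzctl -- as far as I can tell vzctl passes VEID and VE_CONFFILE in the environment when it calls them, so something along these lines should work as a dry-run. Untested here, just a sketch:)

Code:
VEID=102 VE_CONFFILE=/etc/vz/conf/102.conf sh /etc/vz/conf/102.mount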


SOME NOTES:

--> I am specifically mounting a sub-dir of the NFS mount because I don't want to expose the full NFS
mount to the specified OpenVZ VM (VM=102). Rather, I want to mount only a sub-dir called "102".

--> This does work. HOWEVER: When stopping the VM, I get an error message, as follows:

Code:
proxve:/etc/vz/conf# vzctl stop 102
Stopping container ...
Container was stopped
umount: /mnt/pve/netflow-data/102: not mounted
umount: /mnt/pve/netflow-data/102: not mounted
Container is unmounted
proxve:/etc/vz/conf#


HOWEVER: this error doesn't appear to be a 'problem'. And since I would rather have things work 'as I want' with an apparently harmless error than 'not as I want' without an error, I made my pick; your mileage may vary. (See the possible refinement sketched after these notes for one way the message might be silenced.) Note that the error isn't visible, and doesn't appear to have any impact, when starting/stopping the OpenVZ VM via the ProxVE web admin GUI, which I suspect is where you would normally be doing that sort of thing anyway.

--> I did specifically test the alternate version, i.e. tweaking both the .mount and .umount scripts so that they refer explicitly to the 'real NFS mount path' rather than a sub-dir of my choice. In this
adjusted case, no error message is returned when the VM is stopped.
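
--> (Possible refinement, which I have NOT tested: have 102.umount check /proc/mounts and only call umount when the bind mount is actually present. That should keep the per-CT sub-dir setup from above but drop the stray 'not mounted' messages. Something like:)

Code:
# only unmount if the path actually shows up as a mount point
if grep -qs "$VE_ROOT/mnt/netflow-data" /proc/mounts; then
    umount $VE_ROOT/mnt/netflow-data
fi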


Just for reference:

-- Once I start the VM from the command line, this is what I see:

Code:
proxve:/etc/vz/conf# vzctl start 102
Starting container ...
Container is mounted
Adding IP address(es): 10.255.255.199
Setting CPU units: 1000
Setting CPUs: 1
Configure meminfo: 262144
Set hostname: data1.domainname.com
File resolv.conf was modified
Setting quota ugidlimit: 0
Container start in progress...

proxve:/etc/vz/conf# vzctl enter 102
entered into CT 102

[root@data1 /]# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/simfs             8388608    544168   7844440   7% /
/dev/mapper/pve-root  99083868    829264  93221440   1% /mnt/netflow-data
none                    524288         4    524284   1% /dev
[root@data1 /]# 

[root@data1 /]# cd /mnt/netflow-data/

[root@data1 netflow-data]# touch testfile

[root@data1 netflow-data]# ls -la
total 8
drwxr-xr-x 2 root root 4096 May  9 17:37 .
drwxr-xr-x 4 root root 4096 May  9 17:18 ..
-rw-r--r-- 1 root root    0 May  9 17:37 testfile
[root@data1 netflow-data]#


and when I exit the VM, this is what shows up on the 'real' NFS mount back on the host:

Code:
[root@data1 netflow-data]# exit
logout

exited from CT 102

proxve:/etc/vz/conf#  df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/pve/root         99083868    829264  93221440   1% /
tmpfs                  2019724         0   2019724   0% /lib/init/rw
udev                     10240      2740      7500  27% /dev
tmpfs                  2019724         4   2019720   1% /dev/shm
/dev/sda1               516040     31828    458000   7% /boot
/dev/mapper/pve-data 2777130208   1470836 2775659372   1% /var/lib/vz
10.255.255.234:/mnt/dataraid/backup/proxve/
                     1913556480    528896 1815824384   1% /mnt/pve/openfiler-backups
10.255.255.233:/mnt/dataraid/storage/flowdata
                      99083868    829264  93221440   1% /mnt/pve/netflow-data


proxve:/etc/vz/conf# cd /mnt/pve/netflow-data/

proxve:/mnt/pve/netflow-data# ls -la 102/
total 8
drwxr-xr-x 2 root root 4096 May  9 17:37 .
drwxr-xr-x 3 root root 4096 May  9 17:28 ..
-rw-r--r-- 1 root root    0 May  9 17:37 testfile
proxve:/mnt/pve/netflow-data#


Finally, to mention: I think this will work with any kind of storage; it doesn't have to
be NFS specifically. So if you had an iSCSI LUN mounted onto your ProxVE physical host,
you could then mount storage from it into various VMs with exactly the same process
as used here.
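
For example (hypothetical paths -- assume an iSCSI-backed filesystem is already mounted on the ProxVE host at /mnt/iscsi-lun), only the bind-mount line in the .mount script would change:

Code:
mkdir -p $VE_ROOT/mnt/iscsi-data && mount --bind /mnt/iscsi-lun/102 $VE_ROOT/mnt/iscsi-data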


Anyhow. That's about it. I hope this info is of use to someone else. If anyone figures out how to do a better version or improve on this, please post on this thread! :)
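
P.S. One further idea along those lines, which I have NOT tried yet: if I recall correctly, vzctl also runs a global /etc/vz/conf/vps.mount / vps.umount pair for every container, in addition to the per-CT <CTID>.mount files. A single global script keyed on $VEID might get closer to the 'no manual steps for each new VM' goal I mentioned in the first post. A rough, untested sketch:

Code:
#!/bin/sh
# hypothetical global mount script: /etc/vz/conf/vps.mount (untested sketch)
[ -f /etc/vz/vz.conf ] || exit 1
[ -f $VE_CONFFILE ] || exit 1

source /etc/vz/vz.conf
source $VE_CONFFILE

# only bind-mount if a matching per-container sub-dir exists on the NFS share
if [ -d /mnt/pve/netflow-data/$VEID ]; then
    mkdir -p $VE_ROOT/mnt/netflow-data
    mount --bind /mnt/pve/netflow-data/$VEID $VE_ROOT/mnt/netflow-data
fi
exit 0

(It would presumably need a matching vps.umount, and chmod +x on both, same as above.)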


---Tim Chipman
Fortech I.T. Solutions
http://FortechITSolutions.ca
 
