NFS: How to use version 3?

gorbad
Sep 30, 2011
Hello,

I've got a cluster set up that I'd like to have use a GlusterFS cluster via NFS. However, Gluster only supports NFS version 3, and it looks like the kernel on PVE 2.0 defaults to NFSv4. Is there a way to get PVE to use NFS version 3? I can manually mount the volume as version 3, but I don't see a way to pass mount options to PVE...

Thanks!
 
Ok, found the solution:
Add the line:
options vers=3

to /etc/pve/storage.cfg beneath my NFS entry.
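
For reference, the complete NFS entry in /etc/pve/storage.cfg then looks something like this (the storage ID and content type are just examples; adjust for your setup):
Code:
nfs: gluster-vm
        path /mnt/pve/gluster-vm
        server localhost
        export /VOLNAME
        content images
        options vers=3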

It would be nice to have NFS options available via the GUI though :p
 
Can you share how the GlusterFS cluster is set up?

I'm thinking of storage alternatives and could use some links/suggestions. So far this looks like the basis of something to try: http://www.bauer-power.net/2011/08/roll-your-own-fail-over-san-cluster.html

which I'd modify to use with prox 2.0 data storage.

I do not like separate hosts for prox and storage. prox 2.0 DRBD for KVM, then Gluster for data, may be better than our current setup, which uses primary/secondary DRBD for data.

My goal is to have highly available data for OpenVZ containers.
 
Ok, here's what I've done. Warning though, I use Gluster for KVM image storage, and not for openvz (yet?).

First, get the most recent stable Gluster .deb file from http://www.gluster.org/download/. I'm using 3.2 right now. 3.3 is supposed to have improved performance as VM storage, but it's still beta. It'll also be interesting to see what's next now that Red Hat has acquired Gluster. (It's either really, really good, or really, really bad...)

Install the .deb on all PVE hosts in your cluster:
Code:
# dpkg -i glusterfs_3.2.4-1_amd64.deb

Create a directory on each host to store your gluster data (this is the raw stuff you don't access directly). It's better to put this on its own partition, but it doesn't have to be. In fact, in this example it's going to live on the same partition as PVE's default "local" storage:
Code:
# mkdir -p /var/lib/vz/gluster/vm-storage
(Replace /var/lib/vz with whatever the path is to your preferred storage location)

Start the glusterd service on all hosts:
Code:
# /etc/init.d/glusterd start

On ONE host, add the other hosts to the Gluster peer group:
Code:
# gluster peer probe 10.10.0.2
# gluster peer probe 10.10.0.3
# ....

Check your peer status:
Code:
# gluster peer status
They should be in State: Peer in Cluster (Connected)
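On a two-node setup the output should look roughly like this (UUID elided):
Code:
Number of Peers: 1

Hostname: 10.10.0.2
Uuid: ...
State: Peer in Cluster (Connected)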

Create your volume. Now, there are different types of volumes, and what you do depends on the number of hosts in your cluster and which you value most: performance, capacity, or reliability. See http://www.gluster.com/community/do...ster_3.2:_Setting_Up_GlusterFS_Server_Volumes for details and examples.
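
As a quick sketch of the difference: leaving out the replica option gives you a plain distributed volume, which spreads files across the bricks for more capacity but keeps no copies (so losing one host loses data). With the same hosts I use below, that would be:
Code:
# gluster volume create VOLNAME transport tcp 10.10.0.10:/var/lib/vz/gluster/vm-storage 10.10.0.11:/var/lib/vz/gluster/vm-storage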

For my setup so far, I only have two machines in my cluster, so I'm going for a Replicated volume (basically just mirrored data). Create the volume:
Code:
# gluster volume create VOLNAME replica 2 transport tcp 10.10.0.10:/var/lib/vz/gluster/vm-storage 10.10.0.11:/var/lib/vz/gluster/vm-storage
VOLNAME is whatever you want to name your volume. This will also be your NFS share name.
(Here, my two hosts are 10.10.0.10 and 10.10.0.11. You can use hostnames if you'd like. I'm using IPs because I've dedicated specific NICs for storage, and put them on a private VLAN.)

Add some basic security, if desired:
Code:
# gluster volume set VOLNAME auth.allow 10.10.0.*

Start the volume:
Code:
# gluster volume start VOLNAME

Check it out:
Code:
# gluster volume info VOLNAME

You can test that it's working by manually mounting it via NFS. Note that with PVE 2.0, NFS defaults to version 4, but Gluster only does version 3.
Code:
# mount -t nfs -o vers=3 localhost:/VOLNAME /mnt
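
Once you've verified it works, unmount the test mount again before adding the storage to PVE:
Code:
# umount /mnt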


Once it works, you can add it as NFS storage to PVE. The server name will be "localhost" and the export "VOLNAME". Make sure it's available on all nodes, and that it's shared. Note that after you add it but before you use it, you have to add the vers=3 option (see my second post, above).
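
After PVE mounts the storage (it ends up under /mnt/pve/<storage ID>), you can double-check that NFSv3 is actually in use; with the example storage ID from my second post, the output should look roughly like:
Code:
# mount | grep VOLNAME
localhost:/VOLNAME on /mnt/pve/gluster-vm type nfs (rw,vers=3,addr=127.0.0.1)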

The last thing is making sure glusterd starts automatically. Do this on all hosts:
Code:
# update-rc.d glusterd defaults

Note that there might be an issue with the above in that glusterd might start after PVE. If this is an issue, you'll have to adjust the rc.d priority numbers to make sure gluster starts before pve and stops after it. I haven't got to that yet, but I think I've seen another thread or two about that on here...
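
If it does turn out to be a problem, something along these lines should work with Debian's sysvinit (the sequence numbers here are just an illustration; check what the pve init scripts actually use on your system):
Code:
# update-rc.d -f glusterd remove
# update-rc.d glusterd defaults 15 85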
 
Now that was a great response!

And I think it is good that Red Hat purchased the company; they have a good track record of getting things like Gluster into general open use. If Oracle had bought it, I would not have asked anything about Gluster.
In my humble opinion, Ubuntu has been a help to Debian bug squashing, and Red Hat to server technologies.
 
Thanks for the info on setting up GlusterFS; this works great for storing ISO images. Running a VM image on Gluster, though, the IO was not so great.

Only had a problem with two of the commands:

The volume create command needs "replica 2" to set the number of replicas:
Code:
# gluster volume create VOLNAME replica 2 transport tcp 10.10.0.10:/var/lib/vz/gluster/vm-storage 10.10.0.11:/var/lib/vz/gluster/vm-storage

The test mount should use -t, not -f:
Code:
# mount -t nfs -o vers=3 localhost:/VOLNAME /mnt

Thanks for the simple instructions!
 
Thanks for finding the typos! I corrected the post above.

I've found IO speed to be acceptable, but I haven't put it under serious load. GlusterFS 3.3 has greatly improved performance as a VM storage backend, but it's still in beta. Hopefully the Red Hat acquisition will get finished up soon, and a final, stable version will be out...
 
