OK, here's what I've done. One warning up front: I use Gluster for KVM image storage, not for OpenVZ (yet?).
First, get the most recent stable Gluster .deb file from
http://www.gluster.org/download/. I'm using 3.2 right now. 3.3 is supposed to have improved performance for VM storage, but it's still beta. It'll also be interesting to see what comes next now that Red Hat has acquired Gluster. (It's either really, really good, or really, really bad...)
Install the .deb on all PVE hosts in your cluster:
Code:
# dpkg -i glusterfs_3.2.4-1_amd64.deb
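To confirm the install worked, you can check the version:
Code:
# glusterfs --version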
Create a directory on each host to store your gluster data (this is the raw brick data you don't access directly). It's better to put this on its own partition, but it doesn't have to be on one. In fact, in this example it's going to live on the same partition as PVE's default "local" storage:
Code:
# mkdir -p /var/lib/vz/gluster/vm-storage
(Replace /var/lib/vz with whatever the path is to your preferred storage location)
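If you do have a spare disk or partition for this, mount it at that directory first. Just a rough sketch; /dev/sdb1 here is a hypothetical device, substitute your own:
Code:
# mkfs.ext4 /dev/sdb1
# mount /dev/sdb1 /var/lib/vz/gluster
# echo '/dev/sdb1 /var/lib/vz/gluster ext4 defaults 0 2' >> /etc/fstab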
Start the glusterd service on all hosts:
Code:
# /etc/init.d/glusterd start
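Quick sanity check that the daemon actually came up:
Code:
# pgrep -l glusterd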
On ONE host, add the other hosts to the Gluster peer group:
Code:
# gluster peer probe 10.10.0.2
# gluster peer probe 10.10.0.3
# ....
Check your peer status:
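Code:
# gluster peer status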
Each peer should show State: Peer in Cluster (Connected).
Create your volume. Now, there are different types of volumes, and what you do depends on the number of hosts in your cluster and which you value most: performance, capacity, or reliability. See
http://www.gluster.com/community/do...ster_3.2:_Setting_Up_GlusterFS_Server_Volumes for details and examples.
For my setup so far, I only have two machines in my cluster, so I'm going for a Replicated volume (basically just mirrored data). Create the volume:
Code:
# gluster volume create VOLNAME replica 2 transport tcp 10.10.0.10:/var/lib/vz/gluster/vm-storage 10.10.0.11:/var/lib/vz/gluster/vm-storage
VOLNAME is whatever you want to name your volume. This will also be your NFS share name.
(Here, my two hosts are 10.10.0.10 and 10.10.0.11. You can use hostnames if you'd like. I'm using IPs because I've dedicated specific NICs for storage, and put them on a private VLAN.)
Add some basic security, if desired:
Code:
# gluster volume set VOLNAME auth.allow 10.10.0.*
Start the volume:
Code:
# gluster volume start VOLNAME
Check it out:
Code:
# gluster volume info VOLNAME
You can test that it's working by manually mounting it via NFS. Note that with PVE 2.0, NFS defaults to version 4, but Gluster only does version 3.
Code:
# mount -t nfs -o vers=3 localhost:/VOLNAME /mnt
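If the mount succeeds you should see the volume's size and be able to write to it; unmount when you're done:
Code:
# df -h /mnt
# touch /mnt/testfile && rm /mnt/testfile
# umount /mnt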
Once that works, you can add it to PVE as NFS storage. The server name will be "localhost" and the export "VOLNAME". Make sure it's enabled on all nodes, and that it's shared. Note that after you add it but before you use it, you have to add the vers=3 option (see my second post, above).
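For reference, the resulting entry in /etc/pve/storage.cfg ends up looking roughly like this (a sketch, not verbatim from my system; "gluster-vm" is just an example storage ID, and the options line is the part you add by hand):
Code:
nfs: gluster-vm
        path /mnt/pve/gluster-vm
        server localhost
        export /VOLNAME
        options vers=3
        content images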
The last thing is making sure glusterd starts automatically. Do this on all hosts:
Code:
# update-rc.d glusterd defaults
One caveat with the above: glusterd might end up starting after PVE. If that happens, you'll have to adjust the rc.d priority numbers to make sure gluster starts before pve and stops after it. I haven't gotten to that yet, but I think I've seen another thread or two about it on here...
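For what it's worth, the old-style update-rc.d syntax to pin the ordering would be something like this (an untested sketch; the 15/85 priorities are just example numbers meant to land ahead of the pve init scripts):
Code:
# update-rc.d -f glusterd remove
# update-rc.d glusterd start 15 2 3 4 5 . stop 85 0 1 6 .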