3 node cluster (2 of them use glusterfs) supported?

copymaster

Member
Hi

As my tests progressed and I visited CeBIT yesterday, I think my current setup is the best I can get with the hardware I have.

Dear Proxmox team, please answer the questions below:

1) I have a Proxmox cluster with 3 nodes. Two of them are quite big (2x Xeon v3 12-core, 256 GB RAM, one 500 GB RAID 1 for the OS, one 16 TB RAID 5 for Gluster), and the third node has just a Xeon v2 8-core, 96 GB RAM and 500 GB for the OS.

I want to run about 30 VMs on these 3 nodes. Nodes 1+2 run a GlusterFS replica 2 volume which houses all the VM disks.
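For reference, the replica 2 volume was set up roughly along these lines (volume name, hostnames and brick paths are just placeholders, not our real values):

Code:
# on both gluster nodes: create a brick directory on the RAID 5 array
mkdir -p /data/gluster/brick1
# from either node: create and start a 2-way replicated volume
gluster volume create gvol0 replica 2 node1:/data/gluster/brick1 node2:/data/gluster/brick1
gluster volume start gvol0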

If one of the Gluster nodes fails, its VMs are gone but the GlusterFS keeps working. I can then move the saved configs to another node and restart the VMs there, if I am right.
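If I understand the manual failover correctly, it would look something like this (VMID and node names are just examples):

Code:
# on a surviving node (the cluster still has quorum with 2 of 3 nodes up)
mv /etc/pve/nodes/node1/qemu-server/100.conf /etc/pve/nodes/node3/qemu-server/
qm start 100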

Is such a setup supported, i.e. would we get support in case of a failure (which hopefully never occurs)?

What subscription would you recommend?

The point of this setup is to use as few servers as possible and to have the storage failsafe. I know that when a node fails, its VMs are gone, but we can manually move the configs to the remaining node and start the VMs again; this downtime is accepted from "above".

Is there another (better, recommended) setup possible with redundant storage hosted on two of the nodes?

Thank you
 
I'm not a Proxmox developer but I see no reason why this wouldn't work.

Proxmox and glusterfs are two different things. Glusterfs provides an underlying filesystem (which can be replicated, spread over multiple storage partitions, etc) and Proxmox merely makes use of it for storing stuff.
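On the Proxmox side, hooking up such a volume is just one storage definition, roughly like this (storage ID and volume name are placeholders; double-check the options against your PVE version):

Code:
pvesm add glusterfs gfs-store --server node1 --server2 node2 --volume gvol0 --content images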

Just make sure that if your node (1) goes down, it doesn't come back up and start a second instance of a VM that is now already running on another node (2). This could lead to unpredictable behavior, as two VMs would be writing to the same disk image that gets synced via glusterfs. Best to use fencing for this sort of thing.
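Proper fencing is the real answer, but as a quick manual sanity check before starting a recovered VM elsewhere, something like this on each node doesn't hurt (VMID 100 is just an example):

Code:
# make sure the VM is not already running on another node
qm list | grep -w 100
qm status 100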


Speaking of your glusterfs setup: can you share some quick benchmark results of its performance? Running something as simple as this would be good enough:

dd if=/dev/zero of=/glusterfs/mount/point/test.file bs=1M oflag=dsync count=200
 
You will not be totally happy with glusterfs. Main issue: a reboot will lead to extremely high load while your VM virtual disks are brought back in sync. Glusterfs is not optimized for such a setup.
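You can watch the self-heal backlog after such a reboot with something like this (volume name is a placeholder):

Code:
gluster volume heal gvol0 info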

My personal favorites:
- for single hosts: ZFS with an SSD cache (a rough example is sketched below)
- for small clusters that need replicated storage: Ceph with SSD only, or DRBD9 (only in Proxmox VE 4.0, beta expected in Q2/2015).
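As a rough illustration of the ZFS-with-SSD-cache idea (pool name and device paths are placeholders):

Code:
# attach one SSD as read cache (L2ARC) and one as log device (SLOG) to an existing pool
zpool add tank cache /dev/disk/by-id/ata-SSD-CACHE
zpool add tank log /dev/disk/by-id/ata-SSD-LOG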

If you can't change the hardware above, start with just local RAID storage and think about moving to 4.0 with DRBD9 as soon as it is available. Let's hope that DRBD9 will be stable soon, including the new Proxmox VE integration. If you can't wait and you need replicated storage now, go with Ceph (but then you need more boxes/hardware).
 
dd if=/dev/zero of=/glusterfs/mount/point/test.file bs=1M oflag=dsync count=200


Hello, I was searching glusterfs threads... Using 4 SSDs on ZFS RAIDZ1, no cache/log drive for ZFS, GlusterFS on top of ZFS:
Code:
# dd if=/dev/zero of=/mnt/pve/gfs/test.file bs=1M oflag=dsync count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 2.63107 s, 79.7 MB/s
 
1] 16 TB RAID 5 - a BIG BIG no; if there is any other way, use RAID 6, RAID 10, etc.
2] 2x glusterfs nodes - without an arbiter or dummy node, if one node drops, the second node goes into a read-only state; that is a feature to protect against split-brain scenarios (see the sketch below). Read the documentation!
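A rough sketch of a volume with an arbiter brick (the third brick stores only metadata, so a small node is enough; names and paths are placeholders, and your gluster version needs arbiter support):

Code:
gluster volume create gvol0 replica 3 arbiter 1 node1:/data/brick node2:/data/brick node3:/data/arbiter
gluster volume start gvol0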
 
