Cluster with GlusterFS doesn't work properly

Romain DARBON

New Member
Jan 23, 2019
Hi,

I would like to configure a 2-node cluster with GlusterFS storage. I know it isn't a good idea to have only 2 nodes, but I don't want HA; I only want to be able to manually start a VM on the second node if the first one fails.

So I configured my cluster with 2 nodes and created a GlusterFS storage shared between both nodes. I can see it on both nodes and I can configure a VM on this storage.
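
For reference, the resulting storage definition in /etc/pve/storage.cfg looks roughly like this (the IP addresses here are placeholders, the volume name is the one from my setup):
Code:
glusterfs: HDD-GLUSTER
        server 10.0.0.1
        server2 10.0.0.2
        volume HDD-GLUSTER
        content images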

But I run into a problem when one of the nodes fails. The storage switches to "Active: No" in the Proxmox interface, whereas it is active in the shell (I check with the command "gluster volume status HDD-GLUSTER").
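
The same "active" information can also be checked from the Proxmox side in the shell with the storage manager (pvesm ships with Proxmox VE and reports the status of each configured storage):
Code:
pvesm status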

At first I thought it was because I had lost quorum, so I set the quorum votes to 2 on the first node. That way, when I stop the second node, the first one still has quorum. But the Gluster volume is still not active in Proxmox.
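
What I changed corresponds roughly to this in the nodelist section of /etc/pve/corosync.conf (node names and addresses are placeholders):
Code:
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 2
    ring0_addr: 10.0.0.1
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.0.0.2
  }
}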

Thanks a lot for your help,
Sorry for my bad English, I'm French.
Best regards,
Romain.
 

Such a setup is not recommended at all; if you want to keep working when one node fails, install at least a 3-node cluster. Add a third node to your cluster; it can be a low-end machine (you only need it for quorum).

However, if you really want to try it with a 2-node cluster:

You have to set the quorum (expected votes) to 1 in such a case:
Code:
pvecm e 1

And check whether quorum is (and remains) OK with:
Code:
pvecm status
 
Hi Richard,

I think it's more reasonable to work with a third node. I tried "pvecm e 1" but nothing changed in "pvecm status".

When I shut down one node, the remaining one changes its state to "Quorate: No".

It's very strange!

Thanks for your help,
Romain.
 

Code:
pvecm e 1

has only a temporary effect; i.e. as soon as something changes in the cluster state, it is set back to the default (which is 2 in the case of a 2- or 3-node cluster).
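
If you really insist on a persistent two-node setup despite the warning above, corosync itself has a "two_node" mode (a plain corosync feature, not something Proxmox recommends); the quorum section in /etc/pve/corosync.conf would look roughly like this, and setting two_node also implies wait_for_all:
Code:
quorum {
  provider: corosync_votequorum
  two_node: 1
}

Remember to increase the config_version in the totem section whenever you edit that file.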
 
