PVE quorum on cluster

dqq

Hi,

Basically, what I want to achieve is a 2-node cluster with offsite backups for redundancy. In case node 1 fails, I will restore the backups on node 2 and carry on using it.

The problem is quorum on node 2 when node 1 is dead: node 2 does not have quorum and therefore cannot start any VM.

I know there are 2 solutions:
1) Running pvecm expected 1 manually on node 2, so that its own vote is enough for quorum (see the sketch below)
2) Setting up a QDevice as a third vote, so that whichever node survives keeps quorum.
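
For reference, option 1 boils down to a single command on the surviving node (a sketch assuming the default setup where each node has one vote; only run it while node 1 is actually down):

  # on node 2: lower the expected vote count so its single vote is enough for quorum
  pvecm expected 1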


And now I have 3 questions:
1) Which solution is better in the long run?
2) How can I set up scenario number 2? (I have only read about it)
3) What to do about pvecm expected 1 when node 1 comes back online? Do I need to undo it somehow?
 
Here I can see that adding a third node such as an RPi is not suggested, and that I "should simply use a QDevice": https://pve.proxmox.com/wiki/Raspberry_Pi_as_third_node

That wiki page talks about running the regular Corosync services on an RPi. The QDevice is an additional service that has two components.

On the PVE nodes, you have the corosync-qdevice service, which talks to the arbitrator host to bring a third vote into the cluster. This link does not need the same low latency as the Corosync network between the actual PVE hosts.

The corosync-qnetd service must run outside the cluster, on the arbitrator device. This can be anything, as long as you can install that service and SSH into it; it could be an RPi. I personally run a 2-node cluster with a QDevice. The arbitrator node is an LXC container running on another server that is not part of the Proxmox VE cluster.
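
To answer question 2, the setup comes down to a few commands (a sketch assuming a Debian-based arbitrator host; 192.0.2.10 is a placeholder for its IP, and pvecm needs root SSH access to it):

  # on the arbitrator host (outside the cluster):
  apt install corosync-qnetd

  # on all PVE cluster nodes:
  apt install corosync-qdevice

  # on one PVE node: register the arbitrator (placeholder IP)
  pvecm qdevice setup 192.0.2.10

  # afterwards, the output should list the additional QDevice vote
  pvecm status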

Regarding a failing arbitrator node, you have to consider your setup. How likely is it that the arbitrator node will fail with one of your PVE nodes?
This comes down to how power is delivered to all the nodes, as well as to the switches connecting them.

It is always a trade-off between effort and infrastructure on one side and the likelihood of failures on the other. In a regular setup where you might want to take down one of the PVE nodes for a while (for hardware maintenance, for example), it is definitely easier to handle the cluster with a real third vote than to mess around with expected votes and such.
 
Hi,

How well does a QDevice work with a 6-node cluster? Will it allow 3 nodes to be down? We have 6 nodes:
3 in one rack of a datacentre and 3 in another rack in another datacentre.

The 2 racks are connected via fibre.

So will a QDevice allow 1 datacentre to go offline and the cluster still survive?
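
(Rough vote math, assuming the QDevice is set up with corosync's default ffsplit algorithm, where it contributes exactly one vote: 6 nodes + 1 QDevice vote = 7 expected votes, so quorum = 7/2 + 1 = 4 votes. The 3 surviving nodes plus the QDevice vote make exactly 4, so the remaining datacentre would stay quorate as long as it can still reach the qnetd host.)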
 
Please don't post the same question across multiple similar old threads - it's a waste of time and resources.
 
