[SOLVED] corosync questions - edit voting values

ilia987

I want to add a few more servers.

How can I change the number of votes of each server?
How can I change the minimum quorum value?
 
In general by editing the /etc/pve/corosync.conf file. But why would you like to do that? Playing with these parameters can very easily lead to unexpected behavior!
 
I looked at this file, but I can only change the number of votes for each node there;
I am looking to change the minimum total votes needed to achieve quorum.

We added a few more servers, and I want to change the voting weight on some of them.
 
Code:
man corosync.conf

But as I said, the defaults of one vote per node and needing more than 50% of the votes for quorum are good and should not be tampered with. Experience shows that this will sooner or later cause problems down the line when the cluster does not behave as expected.
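For reference, the per-node vote count is the quorum_votes entry in the nodelist section of /etc/pve/corosync.conf. A minimal sketch (node names and addresses below are just placeholders):

Code:
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.10.1
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.10.2
  }
}

quorum {
  provider: corosync_votequorum
}

If you do edit the file, also increase config_version in the totem section so the change gets propagated to all nodes.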
 
As I remember, on an older version before we upgraded (I think it was v4 or v5), this was hard-coded.

Can I assume it is now 50%?
 
It needs to be > 50% as exactly 50% is not the majority.
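For example, with the default of one vote per node the threshold works out to floor(total_votes / 2) + 1: a 4-node cluster has 4 votes, 2 votes is exactly 50% and not quorate, so 3 votes are required.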
 
Would you be so kind as to elaborate on what kind of problems might occur?

The problems if a quorum of <= 50% of the votes were allowed are obvious (e.g. split brain) - I would not touch that. But what can go wrong when the votes per node are changed?

I have a 4-node cluster, which by default needs 3 votes for quorum, so only one server may fail. Therefore I assigned 2 votes to the newest/best server. Now there are 5 total votes, so any two of the older servers may fail, or the big server alone may fail. This is better for me, for example because not all of the old servers have hot-plug drives and redundant PSUs, so the chances are rather high that one fails while another is down for maintenance.

In general, I was planning to expand the cluster in such a way that the newer/better servers have slightly higher voting power. So with 5 servers (2 new, 3 old) I would assign 2-2-1-1-1 votes; with 6 servers (3 new, 3 old) it would be 2-2-2-1-1-1.
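Spelled out, assuming the default threshold of floor(total_votes / 2) + 1 and no special votequorum options, those layouts work out as follows:

Code:
layout                  total votes  quorum  votes that may be lost
4 nodes, 1-1-1-1             4          3          1
4 nodes, 2-1-1-1             5          3          2  (big server alone, or any two old ones)
5 nodes, 2-2-1-1-1           7          4          3
6 nodes, 2-2-2-1-1-1         9          5          4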

Is there any flaw in my thinking?
 
I am also having trouble with this situation and would like to understand it better. I have a home / small office scenario where I run 4 Proxmox servers. 2 of the 4 servers are usually off, and 1 of the ones that is on needs to be really flexible.

What I'm looking to achieve with a cluster is primarily twofold:
1) single login/interface management
2) migrating smaller experimental VMs from one machine to another.

I'm not really looking for high availability, since my primary uses of these systems are hardware-locked.

I found an article that seems to provide a way to make this work, though perhaps not as intended:

Achieving Quorum in a Two Node Proxmox Cluster


But is this really the way to do it?

I too am wondering "what kind of problems might occur", since that seems to be the mantra, but some of us see no other way...
 
Have you looked at the QDevice mechanism to give the cluster a 3rd vote? https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_corosync_external_vote_support

The external machine you install corosync-qnetd on could be a Raspberry Pi or another machine you have around. All it does is give another vote to the cluster and, should both nodes still be up but unable to talk to each other, roll the dice on which node gets the vote.
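Following the linked guide, the setup is roughly this (the IP is a placeholder for your external machine):

Code:
# on the external machine (e.g. the Raspberry Pi)
apt install corosync-qnetd

# on all cluster nodes
apt install corosync-qdevice

# on one cluster node, pointing at the external machine
pvecm qdevice setup <QDEVICE-IP>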
 
Do I understand you correctly that you have
- 2 servers which are mostly turned off
- 1 big/important server
- 1 normal server

In this situation you could add a QDevice as aaron suggested. This is probably the best solution.

You could also change the voting power. For example, you could give the big server a voting power of 4 and the other three servers 1 vote each, for a total of 7 votes. This means only the big server has to be working: if all others fail, the big server continues to work and has a voting majority. If the big server fails, however, the others are useless, meaning they do not have a majority and cannot start, stop or change VMs, etc. So recovering from a failure of the big server is complicated.
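Spelled out, assuming the default quorum of floor(total_votes / 2) + 1:

Code:
votes: 4 + 1 + 1 + 1 = 7   ->  quorum = 4
big server alone:             4 votes  ->  quorate
the three others combined:    3 votes  ->  not quorate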
 
