TWO Node Cluster with expected_votes:1

jasminj
Member
Sep 27, 2014
Vienna 19
jasmin.anw.at
Hi!

I have a TWO node cluster (NO HA!) with DRBD8-backed virtual disks.
Each virtual disk has its own DRBD8 instance, and DRBD8 runs in single-primary mode.
I wrote a Proxmox storage plugin that activates the DRBD8 volume and switches it to primary only when the VM is started.

Today I was testing fail-over scenarios, and everything worked fine until the point where only one node boots while the other is off. My test VM didn't start, because the default expected_votes in corosync is the number of nodes in the nodelist, in my case 2. Since the other node wasn't seen, the cluster never reached the expected vote count and didn't start my VM.

After some googling I found a solution: I added "two_node: 1" and "expected_votes: 1" to the "quorum" section of /etc/pve/corosync.conf. With this configuration the single node was able to auto-start my VM.
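For reference, the resulting quorum section looks roughly like this (a sketch of my /etc/pve/corosync.conf; the rest of the file is unchanged):

```
quorum {
  provider: corosync_votequorum
  two_node: 1
  expected_votes: 1
}
```

If I read the votequorum(5) man page correctly, "two_node: 1" also implicitly enables wait_for_all unless that is explicitly overridden, so after a shutdown of both nodes the first node to come up waits once for the second before becoming quorate.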

Is there any impact on my cluster that I currently don't see and that might lead to a non-functional cluster or other problems?

I could also omit the "expected_votes: 1" and use "pvecm expected 1" in an emergency, but then manual intervention would always be needed to get the VMs running when one of the nodes can't boot.
And yes, this is an unlikely situation in a data centre, but there could be a hardware fault (my machines are already several years old).

BR,
Jasmin
 

dietmar
Proxmox Staff Member
Apr 28, 2005
Austria
www.proxmox.com
> After some googling I found a solution: I added "two_node: 1" and "expected_votes: 1" to the "quorum" section of /etc/pve/corosync.conf. With this configuration the single node was able to auto-start my VM.
This is extremely dangerous because it can lead to split brain, so I would never do that.
We always suggest at least 3 nodes if you want HA. If you do not need HA, you can set expected votes manually (after making sure the other node is offline).
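The manual variant would then be something along these lines (a sketch; run on the surviving node, and only after verifying the peer is really powered off):

```
# Tell the cluster that a single vote is enough (in effect until
# the next quorum change / restart):
pvecm expected 1

# Check the quorum state afterwards:
pvecm status
```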
 

jasminj
> This is extremely dangerous because it can lead to split brain, so I would never do that.
Do you mean split brain in DRBD8 or in Proxmox?
If it is the latter, which parts are affected by that?
If you mean DRBD8, then that can't happen in single-primary mode, as long as the DRBD8 sync connection between the servers is working.
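To illustrate what I mean by single-primary: in my DRBD8 resource configs, dual-primary is simply never enabled, so DRBD itself refuses to promote a second Primary while the peer is Primary. A sketch (resource name, hostnames, devices and addresses are made-up examples):

```
resource vm-100-disk-1 {
  protocol C;
  net {
    # "allow-two-primaries" is deliberately NOT set here,
    # so DRBD rejects a promotion to Primary while the
    # peer is already Primary (single-primary mode).
  }
  on nodeA {
    device    /dev/drbd0;
    disk      /dev/vg0/vm-100-disk-1;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on nodeB {
    device    /dev/drbd0;
    disk      /dev/vg0/vm-100-disk-1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```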

> We always suggest at least 3 nodes if you want HA.
It is on purpose that my setup has no HA, because HA would require one more node, plus switches and four NICs ... . More components that can fail.

> If you do not need HA, you can set expected votes manually (after making sure the other node is offline).
Which is what I did to recover from this problem. But as I wrote, this requires manual intervention to get the VMs on the good machine started.
 
