Two Node high availability cluster on Proxmox 4.x

Merijn

New Member
Oct 8, 2015
Hello,

Since Proxmox 4.x no longer uses cluster.conf, I'd like to know the steps to create a two-node high availability cluster on Proxmox 4.x. The following documentation is no longer applicable:
pve.proxmox.com/wiki/Two-Node_High_Availability_Cluster

In particular, I just want to know how to set the expected votes to 1.

Thanks in advance,
 
Hello Tom,

I understand that this is the preferred way of creating an HA cluster. But in Proxmox 3.x we were also able to create a two-node cluster. Is this not supported anymore in Proxmox 4.x?

Kind regards
 

qdisk support was dropped in corosync 2, I guess because it created more problems than it solved.
 
It would be *really* cool if you could use a tiny box like a Raspberry Pi as the third node for the quorum vote. Both for energy efficiency and space considerations. I personally don't know what packages are necessary to make that happen, or if they would even build on a Pi.
 
Actually, that was a great feature.
With Proxmox VE 3 I am making very nice setups:

- 2x hardware Dell servers (2x $5,000)
- 1x cheap ($500) NAS as backup space and for quorum (instead of a third node)
Each server has RAID 10 using 4 SSD drives, and the SSD space is mirrored on both servers via DRBD (rough config sketch below).
Live migration and High Availability work very well, and it actually saved me once when one of my Dells burned :D

DRBD saves something like $15,000+.
The NAS as quorum saves something like $5,000 in (third) server cost.
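For anyone curious, a minimal DRBD resource definition for that kind of two-node mirror would look roughly like this (hostnames, IPs and disk devices here are placeholders, not my real values):

Code:
resource r0 {
    protocol C;                 # synchronous replication, needed for safe live migration
    on pve-node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;    # the SSD-backed volume to mirror
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on pve-node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}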

I thought I would build that kind of beautiful setup on Proxmox VE 4 .... but no :p
So I am waiting for a solution; in the meantime I am installing Proxmox 3 on new setups.

Another thing that is missing in Proxmox is "micro checkpointing" - if this gets implemented, there will be no need to use vSphere with Fault Tolerance anymore (that would save us a lot of money :)

Keep working on disk quorum :) (qdisk)
 
I have the same problem: I have two servers and cannot have HA. Have you managed to do this with 2 servers? Could you use a much smaller server as the third server? Is there really no way of doing HA with 2 servers? I find it absurd to have to buy 3 expensive servers for this function.
 
I think you can use 2 powerful nodes and a third only for quorum, and then set up DRBD on two of them, or Ceph on two with the third as a monitor for Ceph (I don't know if that will work).

Anyway, that could work I suppose, but I would rather have a NAS as qdisk, as it would be more stable than a cheap "little PC" :p
 

Hmm, I wouldn't say more stable; there's a reason this feature doesn't exist in corosync 2 anymore.

As long as the "cheap little" PC can provide a vote, which is really easy, it's a very stable solution, IMHO. You can exclude it from the HA groups so no machine will ever be migrated to it (see the example below).
And you could also host the odd VM there (e.g. in emergency cases); I see more practical use in this method than in the qdisk.
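A restricted HA group that only contains the two "real" nodes would, I believe, look roughly like this in /etc/pve/ha/groups.cfg (group and node names are just examples):

Code:
group: real-nodes
        nodes node1,node2
        restricted 1

With "restricted 1", HA-managed resources in that group may only run on the listed nodes, so the quorum-only box is never a migration target.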
 
It looks like corosync 2 does have support for two-node clusters?

totem {
    version: 2
    secauth: off
    cluster_name: mycluster
    transport: udpu
}

nodelist {
    node {
        ring0_addr: pcmk-1
        nodeid: 1
    }
    node {
        ring0_addr: pcmk-2
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

logging {
    to_syslog: yes
}


** I may be reading old docs; the above example config seems to apply to corosync 2.0, but I have not tested it yet. **
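If I read the votequorum documentation correctly, setting two_node: 1 also implicitly enables wait_for_all, so after a full cluster restart both nodes must be seen at least once before the cluster becomes quorate. The resulting quorum state can then be checked with something like:

Code:
corosync-quorumtool -s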
 
- A guide to the new corosync version here (with the "two_node" option and many more things):
http://people.redhat.com/ccaulfie/docs/Votequorum_Intro.pdf

- The example from notfixingit is from here (Pacemaker 1.1):
http://clusterlabs.org/doc/en-US/Pa...m_Scratch/_sample_corosync_configuration.html

- More examples (Corosync2 with two and three nodes):
http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/_enabling_pacemaker.html

Also, I am interested in knowing whether the "two_node" option works in PVE 4.x, so please, if anybody has tested it, post the relevant configuration lines here.
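For anyone who wants to try it: in PVE 4.x the corosync configuration lives in /etc/pve/corosync.conf, so I would expect the relevant change to be just the quorum section, roughly like this (untested on PVE, so treat it as a guess):

Code:
quorum {
    provider: corosync_votequorum
    two_node: 1
}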
 
Can anyone confirm that this is an acceptable workaround (giving one node 2 votes)?

Code:
# on node 1
pvecm create cluster -votes 2   # create the cluster with this server registering 2 votes
pvecm status                    # this should show the local node having 2 votes
# on node 2
pvecm add <ip of first node>
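A possibly less drastic alternative, if you don't want a permanent 2-vote node, might be to temporarily lower the expected votes on the surviving node while the other one is down (I haven't verified this on 4.x, so treat it as a sketch):

Code:
# on the remaining node, while the other node is offline
pvecm expected 1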
 
I'd love to see a solution based on a Raspberry Pi, ALIX/APU or any other Atom or similar small carbon-footprint machine...

Is there a chance that we get something like that?

I thought about installing only pve-ha-lrm & pve-ha-crm on an APU or an Atom box I have in the lab, but "pve-ha-manager" has way too many dependencies...
 

Don't install the pve-ha-manager package; install the pve-cluster one. That one doesn't have many dependencies (corosync - widely available, although you should use the PVE build or you will surely break something - plus libpve-common; everything else is normal stuff available from Debian itself) and that is enough. You only need a corosync/pmxcfs installation.

The ha-manager would only be needed if you wanted to manage VMs/CTs on this node, but you only use it as a tie breaker/voter, so there is no need for that.

If you use an amd64 machine you can just include the PVE repo and install pve-cluster from there. For ARM (Raspberry Pi) or x86 I may try a cross build on a lazy weekend, although that will be as unofficial as it can get and for testing purposes only. :)

Not really officially supported (or better said, tested), so you did not hear it from me ;)
I do not take any responsibility; or better said, Proxmox supports amd64/x86_64 only.
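On an amd64 box that should boil down to something like the following (the repository line is for the Debian Jessie / PVE 4.x no-subscription repo; adjust it to your release, and as said above this is untested/unsupported):

Code:
# add the PVE no-subscription repository and its key (Jessie / PVE 4.x)
echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" > /etc/apt/sources.list.d/pve.list
wget -O- http://download.proxmox.com/debian/key.asc | apt-key add -
apt-get update
# install only the cluster filesystem and corosync, not the full PVE stack
apt-get install pve-cluster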
 
I thought about doing it on an APU board, but ended up ruling that out and going with a bare-bones C2758 (atom based) system and a small SSD. I installed a full pve node but do not run any data or VMs on it. I tried running a VM on it for a while, and it was actually pretty acceptably fast.
 
I've got some Atom DIN-rail mountable systems spare, so I'll try with those...

But how do I configure it? Is it only:

pvecm add master-ip

or are there more steps needed?
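If I understand the earlier posts correctly, the rough sequence would be something like this (the IP is a placeholder, and I haven't verified it on a pve-cluster-only install):

Code:
# on the quorum-only box, after installing pve-cluster as described above
pvecm add <ip-of-an-existing-cluster-node>
# then verify that the extra vote is counted
pvecm status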
 
