Two Node high availability cluster on Proxmox 4.x

That node will join the cluster. If you have HA configured to auto-migrate VMs, you will want to set up HA groups so that this node is excluded from being used for those VMs.
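Something along these lines should do it (group name, node names and VMID below are just examples, and the exact option spellings may differ by version, so check ha-manager's help): create a restricted HA group containing only the two storage-bearing nodes and put the HA-managed VMs into it.

Code:
# example only - create an HA group limited to the two "full" nodes
ha-manager groupadd ha-nodes --nodes pve1,pve2 --restricted 1

# put an HA-managed VM into that group so it is never started
# on the quorum-only node
ha-manager add vm:100 --group ha-nodes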

I also use that third node as quorum for gluster, so I manually added it to the gluster configuration as well.
 
I can't live migrate on gluster... Would you mind sharing your gluster config?

And how did you add a machine to gluster without using it as a gluster-storage node?

EDIT: I did my setup right after the 4.0 release, and at that time my gluster live-migration test didn't succeed. I just tested it again, and this time it worked... But I still can't imagine how to add a third device without storage!
 
I have three nodes: pve0, pve1, pve2. pve0 is my "small" data-less server for quorum. The "pve0-cluster" hostname pattern is for the LAN dedicated to disk traffic.
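For illustration, those names resolve to addresses on the dedicated storage LAN, roughly like this (the addresses here are placeholders, not the real ones):

Code:
# /etc/hosts - example addresses for the dedicated storage LAN
10.10.10.10  pve0-cluster   # quorum-only node
10.10.10.11  pve1-cluster   # gluster brick host 1
10.10.10.12  pve2-cluster   # gluster brick host 2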

Code:
# on pve2:
gluster peer probe pve1-cluster
# on pve1:
gluster peer probe pve2-cluster

# see the status of the cluster peers
gluster peer status

# on pve1:
gluster peer probe pve0-cluster


# on pve1/2:

zfs create tank/gluster
mkdir /tank/gluster/brick

# on pve1:
gluster volume create datastore replica 2 transport tcp pve1-cluster:/tank/gluster/brick pve2-cluster:/tank/gluster/brick
gluster volume start datastore
gluster volume info
# create /var/lib/glusterd/groups/virt as per the Virt-store-usecase page on the Gluster wiki
gluster volume set datastore group virt
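On the PVE side the volume then gets added as a GlusterFS storage. A minimal sketch of the /etc/pve/storage.cfg entry (the storage ID and content list are just examples) would be:

Code:
# /etc/pve/storage.cfg - example entry
glusterfs: gluster-datastore
        server pve1-cluster
        server2 pve2-cluster
        volume datastore
        content images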
 
Testing with "two_node=1" corosync

# pvecm status
Quorum information
------------------
Date: Wed May 18 14:13:27 2016
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000001
Ring ID: 1572
Quorate: Yes

Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 2
Quorum: 1
Flags: 2Node Quorate WaitForAll

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 x.x.x.x (local)
0x00000002 1 x.x.x.x

# cat /etc/pve/corosync.conf
....
quorum {
  provider: corosync_votequorum
  two_node: 1
}
....

# cat /var/log/syslog |grep corosync
May 17 14:50:06 corosync[1896]: [MAIN ] Corosync Cluster Engine ('2.3.5.15-e2b6b'): started and ready to provide service.
May 17 14:50:06 corosync[1896]: [MAIN ] Corosync built-in features: augeas systemd pie relro bindnow
May 17 14:50:06 corosync[1897]: [TOTEM ] Initializing transport (UDP/IP Multicast).
May 17 14:50:06 corosync[1897]: [TOTEM ] Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
May 17 14:50:06 corosync[1897]: [TOTEM ] The network interface [x.x.x.x] is now up.
May 17 14:50:06 corosync[1897]: [SERV ] Service engine loaded: corosync configuration map access [0]
May 17 14:50:06 corosync[1897]: [QB ] server name: cmap
May 17 14:50:06 corosync[1897]: [SERV ] Service engine loaded: corosync configuration service [1]
May 17 14:50:06 corosync[1897]: [QB ] server name: cfg
May 17 14:50:06 corosync[1897]: [SERV ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
May 17 14:50:06 corosync[1897]: [QB ] server name: cpg
May 17 14:50:06 corosync[1897]: [SERV ] Service engine loaded: corosync profile loading service [4]
May 17 14:50:06 corosync[1897]: [QUORUM] Using quorum provider corosync_votequorum
May 17 14:50:06 corosync[1897]: [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2
May 17 14:50:06 corosync[1897]: [SERV ] Service engine loaded: corosync vote quorum service v1.0 [5]
May 17 14:50:06 corosync[1897]: [QB ] server name: votequorum
May 17 14:50:06 corosync[1897]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1 [3]
May 17 14:50:06 corosync[1897]: [QB ] server name: quorum
May 17 14:50:06 corosync[1897]: [TOTEM ] A new membership (x.x.x.x:1568) was formed. Members joined: 1
May 17 14:50:06 corosync[1897]: [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2
May 17 14:50:06 corosync[1897]: [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2
May 17 14:50:06 corosync[1897]: [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2
May 17 14:50:06 corosync[1897]: [QUORUM] Members[1]: 1
May 17 14:50:06 corosync[1897]: [MAIN ] Completed service synchronization, ready to provide service.
May 17 14:50:06 corosync[1874]: Starting Corosync Cluster Engine (corosync): [ OK ]
May 17 14:50:56 corosync[1897]: [TOTEM ] A new membership (x.x.x.x:1572) was formed. Members joined: 2
May 17 14:50:56 corosync[1897]: [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2
May 17 14:50:56 corosync[1897]: [QUORUM] This node is within the primary component and will provide service.
May 17 14:50:56 corosync[1897]: [QUORUM] Members[2]: 1 2
May 17 14:50:56 corosync[1897]: [MAIN ] Completed service synchronization, ready to provide service.

It seems to work...
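For what it's worth, the WaitForAll flag in the pvecm output comes from the two_node setting: with corosync votequorum, two_node: 1 implicitly enables wait_for_all, so after a cold start the cluster only becomes quorate once both nodes have been seen. Written out explicitly, the quorum section is equivalent to roughly:

Code:
quorum {
  provider: corosync_votequorum
  two_node: 1
  # implied by two_node: 1, shown only for clarity
  wait_for_all: 1
}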
 
Not exactly sure what you're trying to show with that two_node=1 output. If you unplug the cable between these two boxes, do they both think they're in charge? That is nonsense. There is no way to do HA without an odd number of nodes voting.
 
You also have to look at the next issue.
At times, when new hosts were added, the others had a habit of restarting. And when sending a VM to a new host it loses its connection to the storage, so the VM has to be powered off and on again to get it working.
 
A two-node HA cluster without effective fencing is nonsense and downright dangerous. So how have you implemented fencing and STONITH?

With pacemaker (from jessie-backports) and fence_ipmilan (still in progress):

# crm_mon -1
Last updated: Wed May 18 17:08:30 2016 Last change: Wed May 18 16:50:16 2016 by hacluster via crmd on backup02
Stack: corosync
Current DC: backup02 (version 1.1.14-70404b0) - partition with quorum
2 nodes and 0 resources configured

Online: [ backup01 backup02 ]
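Only as a rough sketch of where the fence_ipmilan part could end up (IPMI addresses, credentials and resource names below are placeholders, not the real ones), the stonith primitives under pacemaker might look something like:

Code:
# example only - one fence device per node, adjust IPMI details
crm configure primitive fence-backup01 stonith:fence_ipmilan \
    params pcmk_host_list="backup01" ipaddr="10.0.0.201" \
           login="admin" passwd="secret" lanplus="1" \
    op monitor interval="60s"
crm configure primitive fence-backup02 stonith:fence_ipmilan \
    params pcmk_host_list="backup02" ipaddr="10.0.0.202" \
           login="admin" passwd="secret" lanplus="1" \
    op monitor interval="60s"

# keep each fence device off the node it is supposed to fence
crm configure location l-fence-backup01 fence-backup01 -inf: backup01
crm configure location l-fence-backup02 fence-backup02 -inf: backup02

# stonith has to be enabled for fencing to actually happen
crm configure property stonith-enabled=true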
 
