How to Deploy a 2-Node Cluster on Proxmox VE 4.x

rsmvdl

Member
Jul 15, 2016
Hello everybody,

I'm writing this short article because I spent a lot of time myself finding the right configuration to get this working.
First of all: a 2-node cluster is not a good way to build high availability or even failover scenarios, because corosync cannot form a proper quorum with only two nodes. But if you just want to play a bit with Proxmox and migrate machines and containers, this is probably a good solution when you only have 2 nodes.

1. Make sure you have at least 2 NICs in each node. The first one is the public interface (vmbr0) and the second one (eth1) is the private network.
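If it helps, here is a minimal sketch of what /etc/network/interfaces could look like on hv01, assuming the public address sits on the vmbr0 bridge and the private cluster address from the hosts file below sits on eth1 (the public address, netmask and gateway here are just placeholders, adjust them to your own network):

_____________________

auto lo
iface lo inet loopback

# public bridge used by the VMs (eth0 is the physical uplink)
auto vmbr0
iface vmbr0 inet static
        address 203.0.113.10
        netmask 255.255.255.0
        gateway 203.0.113.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

# private network used for the cluster traffic
auto eth1
iface eth1 inet static
        address 10.91.169.241
        netmask 255.255.255.0

_____________________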

Edit the /etc/hosts file like this:

127.0.0.1 localhost
10.91.169.241 hv01.mydomain.com hv01
10.91.169.244 hv02.mydomain.com hv02

Do the same on the other node!

2. Make sure that both nodes can ping each other! Also make sure that either multicast or unicast works on your network. This can be tested with the tool "omping"; see the example below, we will come back to this in step 3.
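For example, a quick multicast test could look like this, assuming omping is installed on both nodes (run the command on both nodes at the same time):

root@hv01:~# omping -c 600 -i 1 -q hv01 hv02
root@hv02:~# omping -c 600 -i 1 -q hv01 hv02

If the multicast loss reported at the end is close to 100%, your network most likely only supports unicast, and you will need the "transport: udpu" option from step 3.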

3.

Create your cluster on "hv01":

root@hv01:~# pvecm create abcluster

Now stop all cluster services on hv01:
root@hv01:~# service corosync stop
root@hv01:~# service pve-cluster stop

Edit the following file on hv01: /etc/corosync/corosync.conf
so that it looks like this:

_____________________

logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: hv01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: hv01
  }

  node {
    name: hv02
    nodeid: 2
    quorum_votes: 1
    ring0_addr: hv02
  }
}

quorum {
  expected_votes: 1
  provider: corosync_votequorum
  two_node: 1
}

totem {
  cluster_name: abcluster
  transport: udpu
  config_version: 2
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.91.169.241
    ringnumber: 0
  }
}


_____________________

The option "transport: udpu" is only needed if your network does not support multicast; it makes corosync use the unicast UDP ("udpu") transport instead. If multicast works on your network, you can leave this line out.

Now restart the cluster services:

root@hv01:~# service corosync start
root@hv01:~# service pve-cluster start
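To check that corosync came back up with the new configuration, you can for example look at the ring status:

root@hv01:~# corosync-cfgtool -s

It should report ring 0 with the address from the hosts file as active and without faults.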

4.

On the second node (hv02) execute the following command to add it to the cluster:

root@hv02:~# pvecm add hv01

That's it! If your nodes do not show up under the datacenter, try restarting both nodes.
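You can also verify the membership from the shell; for example, both of these should now list hv01 and hv02:

root@hv01:~# pvecm status
root@hv01:~# pvecm nodes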


Hope I was able to help!

Robin

P.S. You can use this config to work on the RPN network of online.net.
 
Setting expected_votes to 1 is a really bad idea.
 
Setting expected_votes to 1 is a really bad idea.
You are 100% right about this! Using corosync with fewer than 3 nodes is not a good idea... but if you just want to migrate VMs and/or containers, this is a working solution.
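For example, an offline migration of a VM with ID 100 and a container with ID 101 over to hv02 could look like this (the IDs are just placeholders):

root@hv01:~# qm migrate 100 hv02
root@hv01:~# pct migrate 101 hv02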
 
Hi tom and rsmvdl,
This is just my case. I have a two-node proxmox cluster:
- 1 new server: Xeon 2630, 64 GB RAM, 2 x 1.2 TB SAS, 2 x 200 GB SSD.
- 1 old server: Xeon 5150, 8 GB RAM, 2 x 150 GB SAS, 2 x 300 GB SAS.

I just built the cluster with the intention of using the old server for VM backups. I don't need HA or live migration, just Node 1's VM backups on Node 2, and maybe offline migration between the nodes.

Can you suggest how to proceed?
 
Setting expected_votes to 1 is a really bad idea.

Is it also a bad idea if no HA is required?
Let's say it is only needed to share the VMs between two nodes and to achieve load balancing.
It would be done with two LVM volume groups, one running on each node and replicating to the other:
LVM VG 1 (Node A, 3 VMs running, replicating to Node B)
LVM VG 2 (Node B, 3 VMs running, replicating to Node A)
If one node fails, the plan is to bring up the VMs manually on the other node, which was passive before.
In this case it would be possible to run all VMs on one node (only without quorum).
Then I could bring a new node back into the cluster, configure it like the failed one,
and go back to the desired load-balancing scenario.

Would it also be a bad idea to disable quorum in this case?
If yes, why? Where would the risk be?

Many thanks for your reply

regards

sycoriorz
 
