Hello everybody,
I'm writing this short article because I spent a lot of time myself finding the right configuration to get this working.
First of all: a 2-node cluster is not a very good way to build high-availability or fail-over scenarios, because corosync cannot form a proper quorum between only two nodes. But if you just want to play a bit with Proxmox and migrate machines and containers, this is probably a good solution if you only have 2 nodes.
1. Make sure to have at least 2 NICs on your nodes. The first is public (vmbr0) and the second (eth1) is the private network (see the interfaces sketch after this step).
Edit the /etc/hosts file like this:
127.0.0.1 localhost
10.91.169.241 hv01.mydomain.com hv01
10.91.169.244 hv02.mydomain.com hv02
Do the same on the other node!
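For reference, here is a minimal /etc/network/interfaces sketch for hv01, assuming eth0 is enslaved to the public bridge vmbr0 and eth1 carries the private 10.91.169.x cluster network from the hosts file above (the public address is only a placeholder; replace addresses, netmask and gateway with your own):
_____________________
auto lo
iface lo inet loopback

# public bridge for the VMs (placeholder address - replace with your own)
auto vmbr0
iface vmbr0 inet static
  address 203.0.113.10
  netmask 255.255.255.0
  gateway 203.0.113.1
  bridge_ports eth0
  bridge_stp off
  bridge_fd 0

# private NIC used for the cluster traffic
auto eth1
iface eth1 inet static
  address 10.91.169.241
  netmask 255.255.255.0
_____________________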
2. Make sure that both nodes can ping each other! Also make sure that unicast or multicast is available on your network. This can be tested with the tool "omping" (see the example below), but that is a different chapter; we will come back to it later.
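If you want a quick test right away, omping can be started on both nodes at the same time, for example like this (a sketch based on the hostnames used in this article; install omping first, e.g. with "apt-get install omping"):
root@hv01:~# omping -c 10000 -i 0.001 -F -q hv01 hv02
root@hv02:~# omping -c 10000 -i 0.001 -F -q hv01 hv02
You should see a loss close to 0% for both unicast and multicast; if multicast reports 100% loss, use the "transport: udpu" option shown later.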
3.
Create your cluster on "hv01":
root@hv01:~# pvecm create abcluster
Now stop all cluster services on hv01:
root@hv01:~# service corosync stop
root@hv01:~# service pve-cluster stop
Edit the following file on hv01: /etc/corosync/corosync.conf
so that it looks like this:
_____________________
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: hv01
    nodeid: 1
    quorum_votes: 1
    ring0_addr: hv01
  }
  node {
    name: hv02
    nodeid: 2
    quorum_votes: 1
    ring0_addr: hv02
  }
}

quorum {
  expected_votes: 1
  provider: corosync_votequorum
  two_node: 1
}

totem {
  cluster_name: abcluster
  transport: udpu
  config_version: 2
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 10.91.169.241
    ringnumber: 0
  }
}
_____________________
the option " transport: udpu" is only needed if your network does not support multicast. instead you can use unicast or "udpu" protocol.
Now restart the cluster services:
root@hv01:~# service corosync start
root@hv01:~# service pve-cluster start
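To check that the cluster came back up with the new configuration, you can look at the quorum and ring status (just a sketch; the exact output depends on your versions):
root@hv01:~# pvecm status
root@hv01:~# corosync-cfgtool -s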
4.
On the second node (hv02), execute the following command to add it to the cluster:
root@hv02:~# pvecm add hv01
That's it! If your nodes do not show up in the datacenter view, try restarting both nodes.
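Before rebooting, it can also help to check on the command line whether both nodes already see each other (a sketch; both nodes should be listed and the cluster should report quorum):
root@hv01:~# pvecm nodes
root@hv01:~# pvecm status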
Hope I was able to help!
Robin
P.S. You can use this config to work on the RPN network of online.net.