Two Nodes, Unicast Cluster with 3.1

laurent.garces

New Member
Mar 7, 2014
Hi,

I'm trying to cleanly set up a two-node unicast cluster with version 3.1, as I only want centralised control and VM/CT migrations.
I had already done this with version 2.3 (I don't remember the exact steps) but it seems to be a little different with 3.1.
Here are the steps I used:

- Install a new Proxmox 3.1 server
- pvecm create <cluster name>
- cp /etc/pve/cluster.conf /etc/pve/cluster.conf.new
- nano /etc/pve/cluster.conf.new:
<?xml version="1.0"?><cluster name="clustername" config_version="2">
<cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu" two_node="1" expected_votes="1">
</cman>
<clusternodes>
<clusternode name="proxmoxserver1" votes="1" nodeid="1"/>
</clusternodes>
</cluster>
- Then I activated the new configuration via the Web interface (HA tab)
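One detail that bit me when editing cluster.conf.new by hand: each edit must carry a strictly higher config_version than the active file, or the new configuration is not accepted when activating it. A small sketch of bumping it with sed (the sample file is created here just for illustration; in practice you'd point CONF at your real draft):

```shell
# Bump config_version in a draft cluster.conf before activating it.
# Assumes the attribute appears exactly once, as config_version="N".
CONF=cluster.conf.new
printf '<?xml version="1.0"?><cluster name="clustername" config_version="2">\n</cluster>\n' > "$CONF"
cur=$(sed -n 's/.*config_version="\([0-9]*\)".*/\1/p' "$CONF")
new=$((cur + 1))
sed -i "s/config_version=\"$cur\"/config_version=\"$new\"/" "$CONF"
echo "config_version: $cur -> $new"
```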

That seems to be ok but if I do service cman restart, I get:

Starting cman... two_node set but there are more than 2 nodes
cman_tool: corosync daemon didn't start
Check cluster logs for details

Then I cannot modify the cluster.conf anymore.
The only way I've found to go back is to destroy the cluster
sudo /etc/init.d/pve-cluster stop
sudo /etc/init.d/cman stop
sudo rm /etc/cluster/cluster.conf
sudo rm -rf /var/lib/pve-cluster/*
sudo /etc/init.d/pve-cluster start

Is there a better way to go back in case of error?
Must I add the other node before setting the two_node="1" option?
Is there clear documentation somewhere that explains how to do this for 3.1?
 
Hi Tom,

Thanks for your answer.
I thought it was clear from the description of the steps I used, but maybe not. So yes, I tried to follow the instructions on this page
(and http://pve.proxmox.com/wiki/Fencing#General_HowTo_for_editing_the_cluster.conf). I forgot to mention that I added both hostnames to both /etc/hosts files.
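For reference, the entries I added are of this form (the IP addresses here are made up):

```
10.0.0.1  proxmoxserver1
10.0.0.2  proxmoxserver2
```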
The difference for me is that I added the two_node="1" expected_votes="1" options, as I only want a two-node cluster and want to avoid quorum issues.
This makes sense to me and is how I did it with Proxmox 2.3, but it seems to confuse cman in 3.1. When I restarted it I got the error:
"two_node set but there are more than 2 nodes".
The things that seem wrong to me are:
- At this step there is only one node in the cluster, as I'm still building it, so the error message seems wrong (not really important, BTW)
- I cannot modify cluster.conf any more; I have to destroy the cluster to get back to a valid state. This is not really reassuring about
the reliability of the future cluster...
- I would like a clear procedure for cleanly setting up a two-node cluster (where the nodes are not on the same LAN) with 3.1, without much trial and error.

Regards,
Laurent.
 
two_node="1" expected_votes="1" is not the best idea.

All Proxmox VE cluster operations are quorum based, and with this setting you just deactivate all of that.
 
The difference for me is that I added: two_node="1" expected_votes="1" options, as I want only a two node cluster and avoid quorum issues.

This is extremely dangerous and will lead to data loss. We always use 3 nodes here.
 
Thanks for your answers. I understand that I may not be on the right track...
So what would you recommend I do in my case?
I only want/have two Proxmox servers. What I want is a single interface to manage my servers,
to be able to easily manage/migrate VM/CT, and crossed VM/CT backups so that I can restore VM/CT in case of a server loss.
I don't want one node to be "locked" in case of failure of the other. I don't care about HA for now.
My two servers are not on the same LAN so I cannot use multicast. I had a look at the "multicast over OpenVPN" tutorial
but it seems to me that it adds unnecessary complexity.
 
I have only two nodes in the cluster, no HA. I did not set expected to 1.
If it happens that one of the nodes is down or in any way unreachable from the other, cluster loses quorum, and the whole cluster gets locked (so, even the only remaining node).
I can temporarily unlock the "up node" by CLI (on that node) issuing "#pvecm e 1".
When the "down node" comes back in the cluster, the quorum is back, and the "expected" falls automatically back to 2.
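To make the arithmetic behind this concrete: with equal single-vote nodes, quorum is reached at floor(N/2)+1 votes (the standard majority rule), so with two nodes both must be up, which is exactly why the surviving node gets locked. A throwaway illustration:

```shell
# Majority quorum threshold for N equal single-vote nodes: floor(N/2) + 1.
quorum() { echo $(( $1 / 2 + 1 )); }
echo "2 nodes: quorum = $(quorum 2)"  # both nodes must be up
echo "3 nodes: quorum = $(quorum 3)"  # cluster survives one failure
```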

That said, as I see it, you can work with two nodes only. If you don't want the above behaviour, you need a third node.
If you work with HA and expect VM/CT to be automatically restarted in a safe and consistent way, you badly need working fencing and a third node, and should leave expected as it is, or bad trouble will come.

Marco
 
Hi,

Thanks again for your answers.
I read more about Quorum issues.
So if I understood correctly, the right way to go is to add transport="udpu" to cluster.conf and use pvecm e 1 only when needed.
I tried to set up a cluster (in VirtualBox) to test it, but when I add the second node to the cluster I get the error:
Waiting for quorum... Timed-out waiting for cluster [FAILED]
I just created the cluster with:
pvecm create clustertest
modified the cluster.conf with the udpu option, and tried to add the second node with:
pvecm add <IP of server1>
About the "two_node" option: must I add it to make it work? What does it do exactly?
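For reference, here is roughly what I expect the final cluster.conf to look like once both nodes are added (hostnames and node IDs are just from my earlier example, and whether to include two_node="1" is exactly what I'm unsure about):

```xml
<?xml version="1.0"?>
<cluster name="clustertest" config_version="3">
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu"/>
  <clusternodes>
    <clusternode name="proxmoxserver1" votes="1" nodeid="1"/>
    <clusternode name="proxmoxserver2" votes="1" nodeid="2"/>
  </clusternodes>
</cluster>
```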