PVE 3.1 & Cluster with fencing

gigi

New Member
Aug 24, 2013
Hi,

I have installed two servers with Proxmox PVE 3.1 and configured them for HA with a fence device (APC).
Everything works fine, the servers can communicate with the fence device...

But, because there is a but lol, when some VMs are in HA mode and a server crashes, the second server reboots the crashed server. OK.
When the crashed server comes back up, it reboots the other server ^o) and takes over the VMs.
When that server comes back up in turn, it reboots the other one again.
And they keep restarting each other endlessly.

My cluster configuration:
Code:
<?xml version="1.0"?>
<cluster config_version="155" name="server-clust1">
  <cman expected_votes="1" keyfile="/var/lib/pve-cluster/corosync.authkey" two_node="1"/>
  <fencedevices>
    <fencedevice agent="fence_apc" ipaddr="x.x.x.x" login="xxxx" name="apc" passwd="xxxx"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="server-1" nodeid="1" votes="1">
      <fence>
        <method name="power">
          <device name="apc" port="1" secure="on"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="server-2" nodeid="2" votes="1">
      <fence>
        <method name="power">
          <device name="apc" port="2" secure="on"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm/>
</cluster>

Thanks for your help!
Gigi
 
Yes, that is the expected behaviour when you configure two_node="1": each node is quorate on its own, so a node that boots up without seeing its peer will fence it, which produces exactly the fence loop you describe.

I would never use such a setup (use 3 nodes, or 2 nodes with a qdisk).
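For reference, here is a minimal sketch of what the 2-node + qdisk variant can look like in cluster.conf. The label, timings and totem value here are only example assumptions; the quorum disk must first be created on shared storage (e.g. with mkqdisk -c <device> -l <label>) and qdiskd must run on both nodes:

Code:
<?xml version="1.0"?>
<cluster config_version="156" name="server-clust1">
  <!-- expected_votes is now 3: one vote per node plus one from the quorum disk,
       so a single surviving node plus the qdisk stays quorate without two_node="1" -->
  <cman expected_votes="3" keyfile="/var/lib/pve-cluster/corosync.authkey"/>
  <!-- label must match the one given to mkqdisk; interval/tko/votes are example values -->
  <quorumd allow_kill="0" interval="1" label="proxmox_qdisk" tko="10" votes="1"/>
  <!-- raise the totem token timeout so qdiskd can decide before corosync gives up -->
  <totem token="54000"/>
  <fencedevices>
    <fencedevice agent="fence_apc" ipaddr="x.x.x.x" login="xxxx" name="apc" passwd="xxxx"/>
  </fencedevices>
  <clusternodes>
    ... (same clusternode/fence entries as in your config above) ...
  </clusternodes>
  <rm/>
</cluster>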
 
