[HELP] HA not working with Proxmox VE 3.2 in a 3-node cluster

Isra Martinez

New Member
Aug 16, 2014
Hi guys, I'm a new user of the forum.

I need help solving a problem with HA on Proxmox VE 3.2. I have 3 Proxmox nodes, all in a cluster, and VM migration works perfectly.
Now I want to try HA. I followed the Proxmox wiki and other forums step by step, but it never works. If I stop the rgmanager service, the VM moves and migrates to another node fine, but if I halt node 1, for example, the VM does not move or migrate until I start that node again.

This is my cluster.conf:

<cluster config_version="23" name="CLUSTER-LAB">
  <cman expected_votes="1" keyfile="/var/lib/pve-cluster/corosync.authkey"/>
  <fencedevices>
    <fencedevice agent="fence_ilo" ipaddr="192.168.150.1" login="root" name="fencea" passwd="passproxmox"/>
    <fencedevice agent="fence_ilo" ipaddr="192.168.150.2" login="root" name="fenceb" passwd="passproxmox"/>
    <fencedevice agent="fence_ilo" ipaddr="192.168.150.3" login="root" name="fencec" passwd="passproxmox"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="proxmox-lab" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device action="reboot" name="fencea"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="proxmox2-lab" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device action="reboot" name="fenceb"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="proxmox3-lab" nodeid="3" votes="1">
      <fence>
        <method name="1">
          <device action="reboot" name="fencec"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm/>
</cluster>
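(For reference, when a VM is actually managed by HA it shows up as a resource under the <rm> element; a minimal sketch, with vmid 101 only as a placeholder, normally created through the GUI on Proxmox VE 3.x:

  <rm>
    <pvevm autostart="1" vmid="101"/>
  </rm>
)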

And fence_tool ls:

fence domain
member count 3
victim count 0
victim now 0
master nodeid 1
wait state none
members 1 2 3


I await your comments, please.
Sorry for my bad English.
Regards to all.
 
This line: <cman expected_votes="1" keyfile="/var/lib/pve-cluster/corosync.authkey"/>
Should be: <cman expected_votes="3" keyfile="/var/lib/pve-cluster/corosync.authkey"/>

Remember to increment config_version whenever you make changes to the file.
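
For example, only these two lines would change (config_version="24" here is just an illustration; any value higher than the current 23 works):

<cluster config_version="24" name="CLUSTER-LAB">
  <cman expected_votes="3" keyfile="/var/lib/pve-cluster/corosync.authkey"/>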
 
Thanks, but it doesn't work...

The syslog on the node:

Aug 16 12:46:33 proxmox-lab fenced[7966]: fence proxmox2-lab dev 0.0 agent fence_ilo result: error from agent
Aug 16 12:46:33 proxmox-lab fenced[7966]: fence proxmox2-lab failed
Aug 16 12:46:36 proxmox-lab fenced[7966]: fencing node proxmox2-lab
Aug 16 12:46:37 proxmox-lab fence_ilo: Parse error: Ignoring unknown option 'nodename=proxmox2-lab
Aug 16 12:46:38 proxmox-lab fence_ilo: Unable to connect/login to fencing device

And another problem when I restart cman:

root@proxmox-lab:~# service cman restart
Stopping cluster:
Leaving fence domain... found dlm lockspace /sys/kernel/dlm/rgmanager
fence_tool: cannot leave due to active systems
[FAILED]
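
If I understand the message right, rgmanager is still holding its DLM lockspace, so I guess something like this (just my assumption, using the standard init scripts) would be needed to restart cleanly:

service rgmanager stop
service cman restart
service rgmanager start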


Regards and thanks.
 
Two things:
1) The login to the iLO fails.
2) Are the node names proxmox-lab, proxmox2-lab and proxmox3-lab defined in /etc/hosts on every node (roughly as in the sketch below)?
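
A minimal sketch of what that could look like on each node (the addresses are placeholders; use the real cluster IPs of your nodes):

192.168.150.11  proxmox-lab
192.168.150.12  proxmox2-lab
192.168.150.13  proxmox3-lab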
 
Try to SSH to the iLO port or hit the GUI of the iLO port with the username and password you are specifying. Once you get that working you should be good to go.
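
For example, from one of the nodes you could test the login and the agent directly (the address and credentials below are the ones from your cluster.conf; "-o status" just queries the power state, assuming the standard fence-agents options):

ssh root@192.168.150.2
fence_ilo -a 192.168.150.2 -l root -p passproxmox -o status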
 
OK, but how do I try it?

I am trying https://proxmox-lab-ilo/ but it doesn't work...

Thanks.

The iLO port is independent of Proxmox. You need to configure the iLO from the server's BIOS. It needs an IP, and the physical port on the back of the server has to be plugged into the network. This really has nothing to do with Proxmox. You should start with HP's documentation on how to set up an iLO port.
 
Then I think my problem is that I have the nodes virtualized with VMware Workstation... and I am testing HA by shutting down the virtual machine in VMware Workstation.

Or not?

Regards.
 
Wait, let me get this straight. You are trying to set up a Proxmox cluster on top of VMware Workstation VMs? Sounds like a recipe for disaster. You need a reliable fence device, and I'm not sure what the options are for what you are trying to do. I don't think what you are doing is best practice, even in a lab environment. Good luck!
 
There are a couple of different VMware guest fence agents (I don't know whether they are available in Debian or not...), roughly along the lines of the sketch below.
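
A rough sketch of how such an agent could be wired into cluster.conf, assuming fence_vmware_soap pointed at an ESXi/vCenter host (the IP, credentials and VM name below are placeholders; VMware Workstation itself does not expose that API, so this only illustrates the shape of the entry):

<fencedevice agent="fence_vmware_soap" ipaddr="192.168.150.200" login="vmuser" passwd="vmpass" ssl="on" name="fencevm"/>

and, inside the <method> of each clusternode, a device line whose port is the name of that node's guest VM:

<device action="reboot" name="fencevm" port="proxmox2-lab-vm"/>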