Good morning to all.
I'm configuring a cluster with two nodes and a quorum disk, and my fencing is not working.
I'm using fence_ipmilan but I'm having problems. On each node I configured ipmitool and tested it, and it works:
Code:
root@node1:~# ipmitool -I lanplus -H 192.168.150.43 -U user -P pass -v chassis power status
Chassis Power is on.
root@node2:~# ipmitool -I lanplus -H 192.168.150.33 -U user -P pass -v chassis power status
Chassis Power is on
If I launch fence_node to test the fencing, I get an error on each node:
Code:
root@node1:~# fence_node node1 -vv
fence node1 dev 0.0 agent fence_ipmilan result: error from agent
agent args: action=reboot nodename=node1 agent=fence_ipmilan ipaddr=192.168.150.33 login=user passwd=pass power_wait=60
fence node1 failed
root@node2:~# fence_node node2 -vv
fence node2 dev 0.0 agent fence_ipmilan result: error from agent
agent args: action=reboot nodename=node2 agent=fence_ipmilan ipaddr=192.168.150.43 login=user passwd=pass power_wait=60
fence node2 failed
This is my cluster.conf:
Code:
<?xml version="1.0"?>
<cluster config_version="16" name="clustergestion">
<cman expected_votes="3" keyfile="/var/lib/pve-cluster/corosync.authkey"/>
<quorumd allow_kill="0" interval="1" label="proxmox1_qdisk" tko="10" votes="1"/>
<totem token="54000"/>
<fencedevices>
<fencedevice agent="fence_ipmilan" name="fenceGestion1" ipaddr="192.168.150.33" login="user" passwd="pass" power_wait="60"/>
<fencedevice agent="fence_ipmilan" name="fenceGestion2" ipaddr="192.168.150.43" login="user" passwd="pass" power_wait="60"/>
</fencedevices>
<clusternodes>
<clusternode name="node1" nodeid="1" votes="1">
<fence>
<method name="1">
<device action="reboot" name="fenceGestion1"/>
</method>
</fence>
</clusternode>
<clusternode name="node2" nodeid="2" votes="1">
<fence>
<method name="1">
<device action="reboot" name="fenceGestion2"/>
</method>
</fence>
</clusternode>
</clusternodes>
<rm>
<pvevm autostart="1" vmid="100"/>
</rm>
</cluster>
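One thing I have not tried yet: my working ipmitool tests use -I lanplus, but it looks like the agent runs with -I lan. If I understand the fence_ipmilan options correctly, the fence devices would need a lanplus attribute, something like this (not tested, and I'm not sure lanplus="1" is the right attribute name for this fence-agents version):
Code:
<fencedevice agent="fence_ipmilan" name="fenceGestion1" ipaddr="192.168.150.33" login="user" passwd="pass" power_wait="60" lanplus="1"/>
<fencedevice agent="fence_ipmilan" name="fenceGestion2" ipaddr="192.168.150.43" login="user" passwd="pass" power_wait="60" lanplus="1"/>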
fence_ipmilan test fails too:
Code:
root@node2:~# fence_ipmilan -l user -p pass -a 192.168.150.33 -o reboot -vv
INFO:root:Delay 0 second(s) before logging in to the fence device
INFO:root:Executing: /usr/bin/ipmitool -I lan -H 192.168.150.33 -U user -P pass -C 0 -p 623 -L ADMINISTRATOR chassis power status
DEBUG:root:1 Get Session Challenge command failed
Error: Unable to establish LAN session
Unable to get Chassis Power Status
ERROR:root:Failed: Unable to obtain correct plug status or plug is not available
Maybe there is some problem with -C 0 -p 623, but I don't know what those values mean. The same command works if I run it manually with -I lanplus and without them:
Code:
root@node2:~# /usr/bin/ipmitool -I lanplus -H 192.168.150.33 -U user -P pass -L ADMINISTRATOR chassis power status
Chassis Power is on
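For comparison, here is the command fence_ipmilan executes next to the one that works manually; as far as I can see they differ only in the interface (-I lan vs -I lanplus) and the extra -C 0 -p 623:
Code:
# executed by fence_ipmilan (fails):
/usr/bin/ipmitool -I lan -H 192.168.150.33 -U user -P pass -C 0 -p 623 -L ADMINISTRATOR chassis power status
# manual command (works):
/usr/bin/ipmitool -I lanplus -H 192.168.150.33 -U user -P pass -L ADMINISTRATOR chassis power status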
My cluster status:
Code:
root@node2:~# clustat
Cluster Status for clustergestion @ Tue Oct 14 11:15:22 2014
Member Status: Quorate
Member Name ID Status
------ ---- ---- ------
node1 1 Online, Local, rgmanager
node2 2 Online, rgmanager
/dev/block/8:33 0 Online, Quorum Disk
Service Name Owner (Last) State
------- ---- ----- ------ -----
pvevm:100 node2 started
Proxmox version on node1 and node2:
Code:
root@node2:~# pveversion --verbose
proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-32-pve: 2.6.32-136
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
Can somebody help me?
Best regards.