Enabling fencing, the cluster loses sync

Gianni

New Member
Aug 25, 2010
Trento, Italy
Hi,
I have a PVE 3.2 cluster with two nodes and DRBD as shared storage, and it runs perfectly.
I want to configure fencing, so I did the following:

- uncommented the line FENCE_JOIN="yes" in /etc/default/redhat-cluster-pve on both nodes
- restarted cman on both nodes
- joined the fence domain, first on the first node and then on the second (roughly the commands sketched below)
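
To be precise, what I ran on each node was more or less this (assuming the standard cluster tooling shipped with PVE 3.x, where fence_tool comes with the cman packages):

# nano /etc/default/redhat-cluster-pve    (uncomment FENCE_JOIN="yes")
# service cman restart
# fence_tool join
# fence_tool ls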

But when I go back to the web interface (on the first node), the second node shows as red.
The output of pvecm status looks normal on both nodes:

first node:
# pvecm status
Version: 6.2.0
Config Version: 2
Cluster Name: pvecl2
Cluster Id: 6942
Cluster Member: Yes
Cluster Generation: 96
Membership state: Cluster-Member
Nodes: 2
Expected votes: 2
Total votes: 2
Node votes: 1
Quorum: 2
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: lxsrv1
Node ID: 1
Multicast addresses: 239.192.27.57
Node addresses: 192.168.10.252


second node:
# pvecm status
Version: 6.2.0
Config Version: 2
Cluster Name: pvecl2
Cluster Id: 6942
Cluster Member: Yes
Cluster Generation: 96
Membership state: Cluster-Member
Nodes: 2
Expected votes: 2
Total votes: 2
Node votes: 1
Quorum: 2
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: lxsrv2
Node ID: 2
Multicast addresses: 239.192.27.57
Node addresses: 192.168.10.253


Why does this happen?
Can I resync the nodes?
Where can I find information about the cause?
I have already tried:
- leaving the fence domain on both nodes
- commenting the line in redhat-cluster-pve again
- restarting cman
but nothing helps, the cluster is still out of sync... (the exact commands I ran are sketched below)
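
For reference, the revert attempt on each node was roughly this (again assuming the stock PVE 3.x cluster tools):

# fence_tool leave
# nano /etc/default/redhat-cluster-pve    (comment FENCE_JOIN="yes" out again)
# service cman restart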


thank you,
Gianni
 
revert the fencing config and do:

> service cman restart
> service pve-cluster restart
 
revert the fencing config and do:

> service cman restart
> service pve-cluster restart

Yesssss.
Yesterday I had tried this hint only on node 2 and it did not solve it.
Today I tried it only on node 1 and not on node 2.
When I then issued the commands you suggested on node 2 as well, everything went fine.
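
For anyone hitting the same problem, a quick way to double-check that the nodes are back in sync is something like this on both nodes (pvecm and the /etc/pve mount are standard on PVE; these are just the checks I happened to use):

# pvecm status
# pvecm nodes
# ls /etc/pve/nodes

and both nodes show green again in the web interface.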

Thank you a lot Tom,
Gianni