Hello.
We have recently installed Proxmox as a three-node cluster over LVM. My cluster.conf is:
Code:
<?xml version="1.0"?>
<cluster config_version="10" name="cloud">
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey"/>
  <fencedevices>
    <fencedevice agent="/usr/local/sbin/fencecyc" name="fencecyc"/>
    <fencedevice agent="fence_ipmilan" login="root" name="fenceipmi" passwd="cyc2012"/>
    <fencedevice agent="fence_virt" name="fencekvm"/>
    <fencedevice agent="fence_ilo" login="root" name="fencehp" passwd="cyc2012"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="bastet" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device ipaddr="172.26.51.223" name="fenceipmi"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="khnum" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device action="reboot" ipaddr="172.26.51.224" name="fencehp"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="heket" nodeid="3" votes="1">
      <fence>
        <method name="1">
          <device ipaddr="172.26.51.221" name="fenceipmi"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
    <service autostart="1" domain="openVZ1" exclusive="0" max_restarts="1" name="sharedOpenVZ1" recovery="relocate">
      <script file="/etc/init.d/openvzMount" name="openvzMount"/>
    </service>
    <failoverdomains>
      <failoverdomain name="openVZ1" nofailback="0" ordered="1" restricted="1">
        <failoverdomainnode name="khnum" priority="1"/>
        <failoverdomainnode name="bastet" priority="9"/>
      </failoverdomain>
      <failoverdomain name="sharedKVM" nofailback="1" ordered="1" restricted="1">
        <failoverdomainnode name="heket" priority="1"/>
        <failoverdomainnode name="bastet" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <pvevm autostart="1" depend="service:sharedOpenVZ1" domain="openVZ1" vmid="101"/>
  </rm>
</cluster>
The service that mounts the OpenVZ partition is this init script:
Code:
#! /bin/sh
# /etc/init.d/openvzMount
#
# Some things that run always
touch /var/lock/openvzMount

# Carry out specific functions when asked to by the system
case "$1" in
    start)
        echo "Starting openvz Storage"
        mount -o _netdev,nobh,barrier=0 /dev/mapper/proxmox_shared-OpenVZStorage /mnt/OpenVZStorage
        myname=`hostname`
        pvesm set sharedOpenVZ -nodes $myname
        ;;
    stop)
        echo "Stopping openvz Storage"
        #pvesm set sharedOpenVZ -nodes ""
        umount -f /dev/mapper/proxmox_shared-OpenVZStorage
        ;;
    status)
        if mountpoint /mnt/OpenVZStorage/ ; then
            exit 0
        else
            exit 1
        fi
        ;;
    *)
        echo "Incorrect Parameters"
        exit 1
        ;;
esac
exit 0
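For quick manual testing outside rgmanager, the status logic can be exercised on its own. This is just a sketch: `check_mounted` is a hypothetical helper mirroring the script's status branch, not part of the script above, and it assumes `mountpoint(1)` from util-linux is available.

```shell
#!/bin/sh
# Hypothetical helper mirroring the init script's "status" branch.
# mountpoint(1) exits 0 when the given path is a mount point.
check_mounted() {
    if mountpoint -q "$1"; then
        echo "mounted"
        return 0
    else
        echo "not mounted"
        return 1
    fi
}

# "/" is always a mount point, so this prints "mounted":
check_mounted /
```

Running `check_mounted /mnt/OpenVZStorage` on each node (and checking `echo $?`) shows what rgmanager's periodic status check would see.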
When one node fails, the virtual machine is migrated OK, but when the node comes back and the machine must be relocated, the relocation fails and the node is fenced.
If I stop the VM, the storage service relocates OK but the VM is not migrated.
The partition proxmox_shared-OpenVZStorage is an ext4 filesystem on top of shared LVM on Fibre Channel storage.
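Since ext4 is not a cluster filesystem, the shared LV must never be mounted on two nodes at once, so during relocation each node needs a reliable local check. A minimal sketch of such a check, parsing a mounts table the way /proc/mounts is laid out (`is_mounted` is a hypothetical helper, not part of my setup):

```shell
#!/bin/sh
# Hypothetical helper: exit 0 iff the given device appears in a mounts table
# (first whitespace-separated field of each line, as in /proc/mounts).
is_mounted() {
    # $1 = device path, $2 = mounts table (defaults to /proc/mounts)
    awk -v dev="$1" '$1 == dev { found = 1 } END { exit !found }' "${2:-/proc/mounts}"
}

# Demonstration against a sample mounts table; prints "mounted":
printf '%s\n' '/dev/mapper/proxmox_shared-OpenVZStorage /mnt/OpenVZStorage ext4 rw 0 0' > /tmp/mounts.sample
is_mounted /dev/mapper/proxmox_shared-OpenVZStorage /tmp/mounts.sample && echo "mounted"
```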
Any ideas?
Thanks in advance.