[SOLVED] openvz storage model and ha

amonra

New Member
Mar 8, 2013
Hello.

We have recently installed Proxmox as a three-node cluster over LVM; my cluster.conf is:

Code:
<?xml version="1.0"?>
<cluster config_version="10" name="cloud">
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey"/>
  <fencedevices>
    <fencedevice agent="/usr/local/sbin/fencecyc" name="fencecyc"/>
    <fencedevice agent="fence_ipmilan" login="root" name="fenceipmi" passwd="cyc2012"/>
    <fencedevice agent="fence_virt" name="fencekvm"/>
    <fencedevice agent="fence_ilo" login="root" name="fencehp" passwd="cyc2012"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="bastet" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device ipaddr="172.26.51.223" name="fenceipmi"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="khnum" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device action="reboot" ipaddr="172.26.51.224" name="fencehp"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="heket" nodeid="3" votes="1">
      <fence>
        <method name="1">
          <device ipaddr="172.26.51.221" name="fenceipmi"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
    <service autostart="1" domain="openVZ1" exclusive="0" max_restarts="1" name="sharedOpenVZ1" recovery="relocate">
      <script file="/etc/init.d/openvzMount" name="openvzMount"/>
    </service>
    <failoverdomains>
      <failoverdomain name="openVZ1" nofailback="0" ordered="1" restricted="1">
        <failoverdomainnode name="khnum" priority="1"/>
        <failoverdomainnode name="bastet" priority="9"/>
      </failoverdomain>
      <failoverdomain name="sharedKVM" nofailback="1" ordered="1" restricted="1">
        <failoverdomainnode name="heket" priority="1"/>
        <failoverdomainnode name="bastet" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <pvevm autostart="1" depend="service:sharedOpenVZ1" domain="openVZ1" vmid="101"/>
  </rm>
</cluster>

where the service that mounts the OpenVZ partition is:
Code:
#! /bin/sh
# /etc/init.d/openvzMount
#


# Some things that run always
touch /var/lock/openvzMount
# Carry out specific functions when asked to by the system
case "$1" in
  start)
    echo "Starting openvz Storage"
        mount -o _netdev,nobh,barrier=0 /dev/mapper/proxmox_shared-OpenVZStorage /mnt/OpenVZStorage
        myname=`hostname`
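        # restrict the sharedOpenVZ storage definition to this node only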
        pvesm set sharedOpenVZ -nodes $myname
    ;;
  stop)
    echo "Stoping openvz Storage"
    #pvesm set sharedOpenVZ -nodes ""
    umount -f /dev/mapper/proxmox_shared-OpenVZStorage
    ;;
  status)
    mountpoint /mnt/OpenVZStorage/
    if [ $? -eq 0 ] ; then
        exit 0
    else
        exit 1
    fi
    ;;
  *)
  echo "Incorrect Parameters"
  exit 1
  ;;
esac
exit 0

When one node fails, the virtual machine is migrated fine, but when the node comes back and the machine must be relocated, the relocation fails and the node is fenced.
If I stop the VM, the storage relocates fine but the VM is not migrated.

The proxmox_shared-OpenVZStorage partition is an ext4 filesystem on top of a shared LVM volume on Fibre Channel storage.
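In case it helps anyone debugging the same thing, these are the standard RHCS tools that show what happens around the failed relocation (nothing Proxmox-specific; names below are from my setup):
Code:
# Membership and HA service state around the failed relocation.
clustat                        # where sharedOpenVZ1 and pvevm:101 currently run
fence_tool ls                  # members of the fence domain
# State of the shared LV and its mountpoint on the node that should take over.
lvs proxmox_shared
mountpoint /mnt/OpenVZStorage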

Any ideas?

Thanks in advance.
 
I will answer myself.
I must not use HA for the OpenVZ VM itself. Instead I modified the init script to shut the containers down on stop and start them again when the storage comes up. The new configuration is:

cluster.conf
Code:
<?xml version="1.0"?>
<cluster config_version="17" name="cloud">
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey"/>
  <fencedevices>
    <fencedevice agent="/usr/local/sbin/fencecyc" name="fencecyc"/>
    <fencedevice agent="fence_ipmilan" login="root" name="fenceipmi" passwd="cyc2012"/>
    <fencedevice agent="fence_virt" name="fencekvm"/>
    <fencedevice agent="fence_ilo" login="root" name="fencehp" passwd="cyc2012"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="bastet" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device ipaddr="172.26.51.223" name="fenceipmi"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="khnum" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device action="reboot" ipaddr="172.26.51.224" name="fencehp"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="heket" nodeid="3" votes="1">
      <fence>
        <method name="1">
          <device ipaddr="172.26.51.221" name="fenceipmi"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
    <pvevm autostart="1" domain="sharedKVM" vmid="103"/>
    <pvevm autostart="1" domain="sharedKVM" vmid="102"/>
    <failoverdomains>
      <failoverdomain name="openVZ1" nofailback="0" ordered="1" restricted="1">
        <failoverdomainnode name="khnum" priority="1"/>
        <failoverdomainnode name="bastet" priority="9"/>
      </failoverdomain>
      <failoverdomain name="sharedKVM" nofailback="1" ordered="1" restricted="1">
        <failoverdomainnode name="heket" priority="1"/>
        <failoverdomainnode name="bastet" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <service autostart="1" domain="openVZ1" exclusive="0" max_restarts="1" name="sharedOpenVZ1" recovery="relocate">
      <script file="/etc/init.d/openvzMount" name="openvzMount"/>
    </service>
  </rm>
</cluster>
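After editing I bump config_version; to validate and activate the edited file I would use something like the following (ccs_config_validate and cman_tool are the standard cman tools; the .new path follows the usual Proxmox HA editing workflow, adjust to yours):
Code:
# Validate the edited copy, then let cman distribute the new config_version (path/flow assumed).
ccs_config_validate -v -f /etc/pve/cluster.conf.new
cman_tool version -r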

Code:
#! /bin/sh
# /etc/init.d/openvzMount
#


# Some things that run always
touch /var/lock/openvzMount
# Carry out specific functions when asked to by the system
case "$1" in
  start)
    echo "Starting openvz Storage"
    mount -o _netdev,nobh,barrier=0 /dev/mapper/proxmox_shared-OpenVZStorage /mnt/OpenVZStorage
    myname=`hostname`
    mountpoint /mnt/OpenVZStorage/
    if [ $? -ne 0 ] ; then
        exit 1
    fi
    pvesm set sharedOpenVZ -nodes $myname
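    # for every container stored on this volume, claim its config file for this node and start it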
    for i in $(ls /mnt/OpenVZStorage/private/)
    do
        mv /etc/pve/nodes/*/openvz/$i.conf /etc/pve/nodes/$myname/openvz/
        pvectl start $i
    done
    ;;
  stop)
    echo "Stoping openvz Storage"
    #pvesm set sharedOpenVZ -nodes ""
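    # shut every container on this storage down cleanly before unmounting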
    for i in $(ls /mnt/OpenVZStorage/private/)
    do
        pvectl shutdown $i
    done
    #sleep 5
    umount -f /dev/mapper/proxmox_shared-OpenVZStorage
    if [ $? -ne 0 ] ; then
        exit 1
    fi
    ;;
  status)
    mountpoint /mnt/OpenVZStorage/
    if [ $? -eq 0 ] ; then
        exit 0
    else
        exit 1
    fi
    ;;
  *)
  echo "Incorrect Parameters"
  exit 1
  ;;
esac
exit 0
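To try things out before handing them to rgmanager, the resource script and a manual relocation can be exercised by hand, roughly like this (clusvcadm is the standard rgmanager tool; service and node names as in my cluster.conf):
Code:
# Exercise the resource script directly.
/etc/init.d/openvzMount start
/etc/init.d/openvzMount status && echo "storage mounted"
/etc/init.d/openvzMount stop

# Then let rgmanager move the service between nodes and back.
clusvcadm -r sharedOpenVZ1 -m bastet
clusvcadm -r sharedOpenVZ1 -m khnum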

I also had to modify Cluster.pm so that the pvevm sections are stored at the top of the rm section:
replace, in line 1394,
Code:
push @{$rmsec->{children}}, $vmref;
by
Code:
unshift @{$rmsec->{children}}, $vmref;
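If you want to script that change, something like the following should do it (the Cluster.pm path is what I would expect on a stock install; keep a backup, and the edit has to be redone after pve-cluster updates):
Code:
# Swap push for unshift so pvevm entries are written first inside <rm> (path assumed).
cp /usr/share/perl5/PVE/Cluster.pm /usr/share/perl5/PVE/Cluster.pm.orig
sed -i 's/push @{$rmsec->{children}}, $vmref;/unshift @{$rmsec->{children}}, $vmref;/' /usr/share/perl5/PVE/Cluster.pm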

VM recovery is very fast.

PS: I could not find any information about OpenVZ on top of shared LVM. This works for me; maybe someone can update the wiki.
 