[HA] no live migration when stopping rgmanager

toby

Guest
Hi,

I successfully set up a 2-node Proxmox 2.0 HA cluster with DRBD/LVM and KVM VMs. Everything, including live migration and fencing, works well OOTB - congratulations on such a great product! The only problem I have is that rgmanager, on shutdown, simply seems to stop resources (VMs) and restart them on other nodes. However, I would expect integrated live-migration functionality for VM resources, so that the VMs are not rebooted but live-migrated instead. Any idea how to achieve that?

If not, what do you think about writing an init-script (K00move-vms) which - using clusvcadm - moves locally hosted VMs to the other nodes at reboot/halt?
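A skeleton for such an init script might look like this - just a sketch of where the hook would sit, not a tested script; the /usr/local/sbin/move-vms.sh path and the `handle` function are made up:

```shell
#!/bin/sh
### BEGIN INIT INFO
# Provides:          move-vms
# Required-Stop:     rgmanager
# Default-Start:
# Default-Stop:      0 6
# Short-Description: live-migrate local HA VMs away before shutdown
### END INIT INFO
# Sketch for the K00move-vms idea; /usr/local/sbin/move-vms.sh is a
# hypothetical path for the actual migration script.

handle() {
    case "$1" in
        stop)
            # On shutdown/reboot, evacuate local HA VMs before rgmanager stops.
            echo "evacuating local HA VMs"
            # /usr/local/sbin/move-vms.sh
            ;;
        *)
            echo "Usage: $0 stop"
            ;;
    esac
}

handle "${1:-stop}"
```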

Thanks in advance!

EDIT: my cluster.conf:

Code:
<?xml version="1.0"?>
<cluster config_version="6" name="EDC-STORAGE">
  <cman expected_votes="1" keyfile="/var/lib/pve-cluster/corosync.authkey" two_node="1"/>
  <fencedevices>
    <fencedevice agent="fence_ipmilan" ipaddr="..." auth="password" lanplus="1" login="root" name="fence_esb01" passwd="..." power_wait="5"/>
    <fencedevice agent="fence_ipmilan" ipaddr="..." auth="password" lanplus="1" login="root" name="fence_esb02" passwd="..." power_wait="5"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="esb01" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device name="fence_esb01"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="esb02" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device name="fence_esb02"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
    <pvevm autostart="1" vmid="1000"/>
  </rm>
</cluster>
 

tom

Proxmox Staff Member
why do you stop rgmanager? manual host reboot? why do you do this?

if a VM is HA enabled, you can live-migrate between hosts. if you want to reboot the host, just migrate VMs to other nodes.
 

toby

Guest
Thanks for the quick response! Yes, for manually rebooting a node. Of course I can online-migrate all VMs manually, but for the sake of convenience (or in case I forget to move the VMs) I'd like this to happen automatically, which IMHO should be possible, e.g. via rgmanager.
 

tom

Proxmox Staff Member
yes. the idea is a "HA maintenance mode" - live-migration of all HA-managed guests to other nodes.

as this is not implemented yet, you need to do this manually: live-migrate all HA-enabled VMs via the GUI.
 

toby

Guest
Ok. Just in case anybody is interested, here's my dirty quick hack for migrating all locally running VMs to the first other available node:

Code:
#!/bin/sh
# Migrate all locally running HA VMs to the first other online node.

# Nodes that are Online and running rgmanager; "Local" marks this host.
avail_node=$(clustat | grep Online | grep rgmanager | grep -v Local | cut -d " " -f2 | head -1)
local_node=$(clustat | grep Online | grep rgmanager | grep Local | cut -d " " -f2)

if [ -z "$avail_node" ] ; then
        echo "no other online node found, aborting" >&2
        exit 1
fi

# Each service line looks like " pvevm:<vmid>  <node>  started"; squash the
# whitespace runs so cut can split the fields.
for vm in $(clustat | grep pvevm | sed -e "s/ \+/#/g") ; do
        vmres=$(echo "$vm" | cut -d "#" -f2)
        vmhost=$(echo "$vm" | cut -d "#" -f3)
        if [ "$vmhost" = "$local_node" ] ; then
                clusvcadm -M "$vmres" -m "$avail_node"
        fi
done
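FWIW, the clustat parsing can also be done with awk instead of the grep/sed/cut chain, since awk collapses whitespace runs by itself. Here's a self-contained sketch where a canned clustat snippet (made-up node names) stands in for the real command, and the clusvcadm call is only echoed:

```shell
#!/bin/sh
# Stand-in for the real `clustat` output, with made-up node names;
# replace sample_clustat with clustat to run against a real cluster.
sample_clustat() {
cat <<'EOF'
Cluster Status for EDC-STORAGE @ ...
Member Name       ID   Status
------ ----       ---- ------
 esb01            1    Online, Local, rgmanager
 esb02            2    Online, rgmanager
Service Name      Owner (Last)     State
------- ----      ----- ------     -----
 pvevm:1000       esb01            started
EOF
}

# awk splits on whitespace runs, so field 1 is the node/resource name.
local_node=$(sample_clustat | awk '/Online/ && /rgmanager/ && /Local/ {print $1}')
avail_node=$(sample_clustat | awk '/Online/ && /rgmanager/ && !/Local/ {print $1; exit}')

sample_clustat | awk '/pvevm/ {print $1, $2}' | while read -r vmres vmhost; do
    if [ "$vmhost" = "$local_node" ] ; then
        # Real use: clusvcadm -M "$vmres" -m "$avail_node"
        echo "would run: clusvcadm -M $vmres -m $avail_node"
    fi
done
```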
 

jleg

Member
toby said:
> Ok. Just in case anybody is interested, here's my dirty quick hack for migrating all locally running VMs to the first other available node: [script quoted above]

nice - but be careful when using "failover domains": the above might try to move a VM to a node outside of the defined failover domain, which could cause trouble...
(e.g. we have a quorum-only node without the kvm module)

another question would be how to revert the "maintenance mode" - that is, how do you get the VMs back onto the original node?
(ok, manually - so at least it saves 50% of the manual effort.. :)
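re: reverting - one option would be to record the placement before evacuating and replay it afterwards. A rough sketch, with clustat stubbed by a canned snippet (made-up names) and clusvcadm only echoed; adapt before real use:

```shell
#!/bin/sh
# Record where each HA VM currently runs, then replay that placement later.
STATEFILE=$(mktemp)

# Stub standing in for the real clustat output; remove this function
# to run against the real cluster.
clustat() {
cat <<'EOF'
 pvevm:1000       esb01            started
 pvevm:1001       esb02            started
EOF
}

save_placement() {
    # Store "resource node" pairs, one per line.
    clustat | awk '/pvevm/ {print $1, $2}' > "$STATEFILE"
}

restore_placement() {
    while read -r vmres vmhost; do
        # Real use: clusvcadm -M "$vmres" -m "$vmhost"
        echo "clusvcadm -M $vmres -m $vmhost"
    done < "$STATEFILE"
}

save_placement
restore_placement
```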
 
