Fresh VE 2.0 HA install: normal migration works, HA migration fails

Hi All,

Having a couple of problems with online migration of HA VMs. I have a three-node cluster using DRBD in a primary/primary configuration, with a stacked secondary replicating to a third node for disaster recovery. The cluster itself seems healthy, and online migration of non-HA VMs works fine between the two primary DRBD nodes.

Unfortunately, online migration of HA VMs fails with nothing more than the error message "Temporary Failure". I increased the rgmanager log level to 7, and every time I attempt a migration I see the following in /var/log/messages:

rgmanager[1694]: BUG! Attempt to forward to myself!

This message appears on the node that is receiving the VM.
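
For reference, the same migration can be requested directly through rgmanager from the shell to rule out the GUI; the vmid and target node below are just examples from my setup:

Code:
# ask rgmanager to live-migrate the HA-managed VM 100 to jupiter-3
clusvcadm -M pvevm:100 -m jupiter-3

# watch service states and the rgmanager log on both nodes
clustat
tail -f /var/log/messages | grep rgmanager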

Fencing and the rest of the cluster setup look correct as far as I can tell. I'm happy to post more configuration files if that helps; my cluster.conf is below.
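
A manual status check along these lines (IPMI address and credentials substituted) can confirm that each fence device responds:

Code:
# query the IPMI fence device of jupiter-1; repeat for the other two nodes
fence_ipmilan -a 192.168.11.32 -l XXXXX -p XXXXXX -P -o status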

Thanks in advance, I appreciate any help anyone can provide! Usernames and passwords have been redacted from the config.

Setup description:

jupiter-1 (secondary DRBD disaster node) IP: 192.168.11.101 IPMI IP: 192.168.11.32
jupiter-2 (primary DRBD node) IP: 192.168.11.102 IPMI IP: 192.168.11.56
jupiter-3 (primary DRBD node) IP: 192.168.11.103 IPMI IP: 192.168.11.93
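
Roughly, the DRBD layout looks like the sketch below (device names, backing disks, ports, and the stacked service address are placeholders, not my actual values); I can post the real drbd.conf if needed:

Code:
# lower resource: primary/primary between the two PVE nodes
resource r0 {
  protocol C;
  net { allow-two-primaries; }
  on jupiter-2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;              # placeholder backing disk
    address   192.168.11.102:7788;
    meta-disk internal;
  }
  on jupiter-3 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.11.103:7788;
    meta-disk internal;
  }
}

# stacked resource: replicates the lower device to the disaster node
resource r0-U {
  protocol A;
  stacked-on-top-of r0 {
    device    /dev/drbd10;
    address   192.168.11.110:7789;    # placeholder floating IP for the stack
  }
  on jupiter-1 {
    device    /dev/drbd10;
    disk      /dev/sdc1;              # placeholder backing disk on the DR node
    address   192.168.11.101:7789;
    meta-disk internal;
  }
}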



Sherwin Amiran

Code:
<?xml version="1.0"?>
<cluster config_version="15" name="at-cluster-1">
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey"/>
  <fencedevices>
    <fencedevice agent="fence_ipmilan" ipaddr="192.168.11.32" lanplus="1" login="XXXXX" name="ipmi1" passwd="XXXXXX" power_wait="5"/>
    <fencedevice agent="fence_ipmilan" ipaddr="192.168.11.56" lanplus="1" login="XXXXX" name="ipmi2" passwd="XXXXXX" power_wait="5"/>
    <fencedevice agent="fence_ipmilan" ipaddr="192.168.11.93" lanplus="1" login="XXXXX" name="ipmi3" passwd="XXXXXX" power_wait="5"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="jupiter-1" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device name="ipmi1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="jupiter-2" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device name="ipmi2"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="jupiter-3" nodeid="3" votes="3">
      <fence>
        <method name="1">
          <device name="ipmi3"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm log_level="7">
    <service autostart="1" exclusive="0" name="TestIP" recovery="relocate">
      <ip address="192.168.11.104"/>
    </service>
    <pvevm autostart="1" vmid="100"/>
    <pvevm autostart="0" vmid="101"/>
    <pvevm autostart="0" vmid="102"/>
  </rm>
</cluster>
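
To rule out a syntax problem, the active configuration can be checked on any node (this only validates the schema, it says nothing about the HA/migration logic):

Code:
# validate the cluster configuration schema
ccs_config_validate

# show membership, expected votes, and quorum
cman_tool status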
 
