BUG: Enabling HA on Started Machine

As soon as you mark a VM/CT as HA, it is managed by the HA stack. So this is not really a bug, but we should add this to the documentation.
 
Well, I consider it a bug because it doesn't check whether the VM is running or not. At the very least, a forced stop of the VM would be good. Starting a VM twice is equivalent to permanently damaging its filesystem.
 
I do not see how this can happen. A VM can only run once.

Please describe step by step how to reproduce this.
 
I've created a fresh install of a 4-node cluster.
I've configured the cluster for fencing with this config:

Code:
<?xml version="1.0"?>
<cluster config_version="37" name="ClusterFO">
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey"/>
  <fencedevices>
    <fencedevice agent="fence_ipmilan" ipaddr="192.168.X.X" login="ipmiusername" name="fence1" passwd="ipmipassword"/>
    <fencedevice agent="fence_ipmilan" ipaddr="192.168.X.X" login="ipmiusername" name="fence2" passwd="ipmipassword"/>
    <fencedevice agent="fence_ipmilan" ipaddr="192.168.X.X" login="ipmiusername" name="fence3" passwd="ipmipassword"/>
    <fencedevice agent="fence_ipmilan" ipaddr="192.168.X.X" login="ipmiusername" name="fence4" passwd="ipmipassword"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="VM1" nodeid="3" votes="1">
      <fence>
        <method name="1">
          <device name="fence1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="VM2" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device name="fence2"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="VM3" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device name="fence3"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="VM4" nodeid="4" votes="1">
      <fence>
        <method name="1">
          <device name="fence4"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
</cluster>
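(As an aside: a config like this can be checked against the cluster schema before activation; a minimal sketch, assuming the stock cman tooling and the usual /etc/pve/cluster.conf.new staging file of Proxmox VE 3.x:)

Code:
# validate the proposed cluster configuration before activating it via the Web UI
ccs_config_validate -v -f /etc/pve/cluster.conf.new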

I've joined every node to the fence domain and I've manually started the rgmanager service on all nodes.
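(For anyone following along, those two steps boil down to roughly the following, assuming the stock cman/rgmanager tooling; run on each node:)

Code:
# join the local node to the fence domain
fence_tool join

# start the resource group manager
service rgmanager start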

After that I've enabled HA for the four VMs and then activated the new config via the Web UI.
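(For context, enabling HA for a VM in this rgmanager-based setup adds a pvevm resource entry to cluster.conf; a sketch of the resulting resource manager section, where the vmid values are placeholders, not the actual IDs from this cluster:)

Code:
<rm>
  <pvevm autostart="1" vmid="100"/>
  <pvevm autostart="1" vmid="101"/>
  <pvevm autostart="1" vmid="102"/>
  <pvevm autostart="1" vmid="103"/>
</rm>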
The system started the VMs on other nodes (chosen at random?), and the VMs that had been started previously disappeared from the Web UI on the nodes where they had been running.
BUT
on those nodes I did a "ps ax" to see which VMs were really running, and I found that the old VMs were never stopped before being started again on other nodes.
In other words, the old KVM processes were still running but were no longer reported by the Web UI.
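(To repeat that check, comparing what the node reports with what is actually running could look like this; the grep pattern is just an illustration:)

Code:
# what the node believes it is running
qm list

# the kvm processes actually present on the node
ps ax | grep '[k]vm'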
 
Please file a bug at https://bugzilla.proxmox.com

To prevent this issue (for now), make sure the VMs are not running when you add them to HA.
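(In practice that means shutting the guest down and verifying it has stopped before adding the HA entry; a minimal sketch with qm, where VM ID 100 is a placeholder:)

Code:
# cleanly shut down the guest before marking it as HA-managed
qm shutdown 100

# confirm it is stopped
qm status 100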
 
