Problem with migration of only some VMs

Kamyk
Oct 16, 2013
Hello

We are trying to migrate VMs from one server to the other. Some migrate without any problem, but others fail with two different problems:

1. Problem with the owner

Executing HA migrate for VM 111 to node kyle
Trying to migrate pvevm:111 to kyle...Failed; service running on original owner
TASK ERROR: command 'clusvcadm -M pvevm:111 -m kyle' failed: exit code 239
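
Would something like this be the right way to check the HA service state and retry it with the rgmanager tools? This is just my guess, with VM 111 and node kyle taken from the log above:

# show which node currently owns the HA service pvevm:111
clustat

# my guess: disable the HA service and re-enable it on the target node
clusvcadm -d pvevm:111
clusvcadm -e pvevm:111 -m kyle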

2. Problem with ich9 (SATA)

This is a problem with SATA: somebody used SATA instead of IDE for the disk. Can I change it now, or do I have to reinstall the VM?

Sep 16 11:39:27 starting migration of VM 110 to node 'kyle' (10.10.1.64)
Sep 16 11:39:27 copying disk images
Sep 16 11:39:27 starting VM 110 on remote node 'kyle'
Sep 16 11:39:28 starting ssh migration tunnel
Sep 16 11:39:29 starting online/live migration on localhost:60000
Sep 16 11:39:29 migrate_set_speed: 8589934592
Sep 16 11:39:29 migrate_set_downtime: 0.1
Sep 16 11:39:29 migrate uri => tcp:localhost:60000 failed: VM 110 qmp command 'migrate' failed - State blocked by non-migratable device '0000:00:07.0/ich9_ahci'
Sep 16 11:39:31 ERROR: online migrate failure - VM 110 qmp command 'migrate' failed - State blocked by non-migratable device '0000:00:07.0/ich9_ahci'
Sep 16 11:39:31 aborting phase 2 - cleanup resources
Sep 16 11:39:31 migrate_cancel
Sep 16 11:39:32 ERROR: migration finished with problems (duration 00:00:06)
TASK ERROR: migration problems
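
If I understand the error, the ich9 AHCI (SATA) controller cannot be live-migrated. Would it be enough to stop the VM and change the disk bus in the config, or to migrate it offline? Just a sketch of what I mean; the storage name and path below are only an example, not my real config:

# show the current disk config of VM 110
qm config 110

# with the VM stopped, edit /etc/pve/qemu-server/110.conf and change the bus, for example:
#   sata0: local:110/vm-110-disk-1.raw  ->  ide0: local:110/vm-110-disk-1.raw
# (and adjust the boot order if needed)

# or migrate offline, since only live migration seems to be blocked
qm migrate 110 kyle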

My cluster.conf looks like this:
<?xml version="1.0"?>
<cluster config_version="45" name="cluster01">
  <cman expected_votes="1" keyfile="/var/lib/pve-cluster/corosync.authkey" two_node="1"/>
  <fencedevices>
    <fencedevice agent="fence_ipmilan" ipaddr="10.10.1.63" lanplus="1" login="fencing" name="vh01-ipmi" passwd="proxmox" power_wait="5"/>
    <fencedevice agent="fence_ipmilan" ipaddr="10.10.1.64" lanplus="1" login="fencing" name="vh02-ipmi" passwd="proxmox" power_wait="5"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="cartman" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device name="vh01-ipmi"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="kyle" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device name="vh02-ipmi"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
    <pvevm autostart="1" vmid="100"/>
    <pvevm autostart="1" vmid="109"/>
    <pvevm autostart="1" vmid="101"/>
    <pvevm autostart="1" vmid="102"/>
    <pvevm autostart="1" vmid="103"/>
    <pvevm autostart="1" vmid="104"/>
    <pvevm autostart="1" vmid="105"/>
    <pvevm autostart="1" vmid="107"/>
    <pvevm autostart="1" vmid="108"/>
    <pvevm autostart="1" vmid="111"/>
  </rm>
</cluster>

Could you help me and tell me why we have these two problems? And why does it happen on the same server, while some VMs are fine and I can move them between the two nodes?

Thank you for your help.

Best,
Rafal - Kamyk
 
I have run into migration issues whenever there was a backup or snapshot attached to the VM. Some VMs would migrate, others not, with various errors; this is what I found in my case. Once it was removed, the VM migrated OK.

YMMV
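
In case it helps, this is roughly how I check for leftover snapshots or a stale lock (VM 111 just as an example):

# list snapshots of the VM
qm listsnapshot 111

# check whether the config still holds a lock (e.g. from an interrupted backup)
qm config 111 | grep lock

# clear a stale lock if needed
qm unlock 111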
 
I don't have any backup or snapshot yet :( And I still can't move it because of the owner problem. But when I clone a VM, I can move the clone afterwards :(

Strange.
 
The VM conf file might have the info; maybe the VM disk is on local storage rather than on a shared/network storage. The conf file contains the path where the disk image is. I've done that by accident when in a hurry.
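
Something like this should show whether the disk sits on local or shared storage (111 just as an example vmid):

# the disk lines in the VM config name the storage they live on
cat /etc/pve/qemu-server/111.conf

# storage.cfg shows whether that storage is marked as shared between the nodes
cat /etc/pve/storage.cfg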
 
