Hello
We are trying to migrate VMs from one server to another. Some move without any problem, but others hit two different problems:
1. Problem with the service owner
Executing HA migrate for VM 111 to node kyle
Trying to migrate pvevm:111 to kyle...Failed; service running on original owner
TASK ERROR: command 'clusvcadm -M pvevm:111 -m kyle' failed: exit code 239
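A hedged sketch of the rgmanager workaround I would try for this error (an assumption on my side, not a verified fix): check the service state, disable the HA service on its current owner, then re-enable it on the target node. The `run` wrapper only echoes the commands as a dry run; drop it to execute them for real.

```shell
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "would run: $*"; }

run clustat                          # inspect current owner/state of pvevm:111
run clusvcadm -d pvevm:111           # disable the HA service on the current owner
run clusvcadm -e pvevm:111 -m kyle   # re-enable it, preferring node kyle
```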
2. Problem with the ich9 controller
This is a problem with SATA: somebody used SATA instead of IDE. Can I change it now, or do I have to reinstall the VM?
Sep 16 11:39:27 starting migration of VM 110 to node 'kyle' (10.10.1.64)
Sep 16 11:39:27 copying disk images
Sep 16 11:39:27 starting VM 110 on remote node 'kyle'
Sep 16 11:39:28 starting ssh migration tunnel
Sep 16 11:39:29 starting online/live migration on localhost:60000
Sep 16 11:39:29 migrate_set_speed: 8589934592
Sep 16 11:39:29 migrate_set_downtime: 0.1
Sep 16 11:39:29 migrate uri => tcp:localhost:60000 failed: VM 110 qmp command 'migrate' failed - State blocked by non-migratable device '0000:00:07.0/ich9_ahci'
Sep 16 11:39:31 ERROR: online migrate failure - VM 110 qmp command 'migrate' failed - State blocked by non-migratable device '0000:00:07.0/ich9_ahci'
Sep 16 11:39:31 aborting phase 2 - cleanup resources
Sep 16 11:39:31 migrate_cancel
Sep 16 11:39:32 ERROR: migration finished with problems (duration 00:00:06)
TASK ERROR: migration problems
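A hedged sketch of how I imagine the disk bus could be switched from SATA to IDE by editing the VM config (the config path `/etc/pve/qemu-server/110.conf` and the `sata0` line below are assumptions based on my setup, and I would shut the VM down first). The example works on a temporary copy rather than the real file:

```shell
# Work on a throwaway copy of a VM config that resembles mine (assumed content).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
bootdisk: sata0
sata0: local:110/vm-110-disk-1.raw,size=32G
EOF

# Rename the bus: sata0 -> ide0, and fix the bootdisk reference to match.
sed -i -e 's/^sata0:/ide0:/' -e 's/^bootdisk: sata0$/bootdisk: ide0/' "$cfg"

cat "$cfg"
```

If this is sound, the same two substitutions applied to the real `110.conf` (while the VM is stopped) should remove the non-migratable `ich9_ahci` device without reinstalling.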
My cluster.conf looks like this:
<?xml version="1.0"?>
<cluster config_version="45" name="cluster01">
<cman expected_votes="1" keyfile="/var/lib/pve-cluster/corosync.authkey" two_node="1"/>
<fencedevices>
<fencedevice agent="fence_ipmilan" ipaddr="10.10.1.63" lanplus="1" login="fencing" name="vh01-ipmi" passwd="proxmox" power_wait="5"/>
<fencedevice agent="fence_ipmilan" ipaddr="10.10.1.64" lanplus="1" login="fencing" name="vh02-ipmi" passwd="proxmox" power_wait="5"/>
</fencedevices>
<clusternodes>
<clusternode name="cartman" nodeid="1" votes="1">
<fence>
<method name="1">
<device name="vh01-ipmi"/>
</method>
</fence>
</clusternode>
<clusternode name="kyle" nodeid="2" votes="1">
<fence>
<method name="1">
<device name="vh02-ipmi"/>
</method>
</fence>
</clusternode>
</clusternodes>
<rm>
<pvevm autostart="1" vmid="100"/>
<pvevm autostart="1" vmid="109"/>
<pvevm autostart="1" vmid="101"/>
<pvevm autostart="1" vmid="102"/>
<pvevm autostart="1" vmid="103"/>
<pvevm autostart="1" vmid="104"/>
<pvevm autostart="1" vmid="105"/>
<pvevm autostart="1" vmid="107"/>
<pvevm autostart="1" vmid="108"/>
<pvevm autostart="1" vmid="111"/>
</rm>
</cluster>
Could you help me and explain why we have these two problems, why they occur on the same server, and why some VMs migrate fine between the two nodes?
Thank you for your help.
Best,
Rafal - Kamyk