Can't Migrate KVM VPS

yena

Renowned Member
Nov 18, 2011
Hello,
I am getting this error when migrating a VPS:

task started by HA resource agent
2018-05-15 12:36:57 starting migration of VM 305 to node 'cvs1' (10.10.10.1)
2018-05-15 12:36:57 found local disk 'KVM:vm-305-disk-1' (in current VM config)
2018-05-15 12:36:57 found local disk 'local:iso/debian-9.3.0-amd64-netinst.iso' (referenced by snapshot(s))
2018-05-15 12:36:57 can't migrate local disk 'local:iso/debian-9.3.0-amd64-netinst.iso': local cdrom image
2018-05-15 12:36:57 ERROR: Failed to sync data - can't migrate VM - check log
2018-05-15 12:36:57 aborting phase 1 - cleanup resources
2018-05-15 12:36:57 ERROR: migration aborted (duration 00:00:00): Failed to sync data - can't migrate VM - check log
TASK ERROR: migration aborted

I have removed the CD-ROM in the hardware options by selecting "none", and I have also tried copying the ISO to the target machine, but I still get the same error.
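
For reference, the equivalent CLI step would be something like this (a sketch only; it assumes the CD-ROM drive is on ide2, which may differ in your config):

Code:
# detach the ISO from the CD-ROM drive (device name ide2 is assumed)
qm set 305 --ide2 none,media=cdrom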

4-node cluster on ZFS + replication

Thanks!


pveversion -V
proxmox-ve: 5.1-36 (running kernel: 4.13.13-5-pve)
pve-manager: 5.1-43 (running version: 5.1-43/bdb08029)
pve-kernel-4.15.10-1-pve: 4.15.10-4
pve-kernel-4.10.15-1-pve: 4.10.15-15
pve-kernel-4.13.13-5-pve: 4.13.13-38
libpve-http-server-perl: 2.0-8
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-20
qemu-server: 5.0-22
pve-firmware: 2.0-3
libpve-common-perl: 5.0-28
libpve-guest-common-perl: 2.0-14
libpve-access-control: 5.0-8
libpve-storage-perl: 5.0-17
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-3
pve-docs: 5.1-16
pve-qemu-kvm: 2.9.1-9
pve-container: 2.0-19
pve-firewall: 3.0-5
pve-ha-manager: 2.0-5
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.1-3
lxcfs: 2.0.8-2
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.6-pve1~bpo9
 
Hi,
2018-05-15 12:36:57 found local disk 'local:iso/debian-9.3.0-amd64-netinst.iso' (referenced by snapshot(s))
You have to use the parameter "--with-local-disks 1" on the command line.
 
Code:
qm migrate <VMID> <Target node> --online (1|0) --with-local-disks 1
 
You should use a "shared" storage instead if you want to migrate VMs.
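
For example, a shared storage definition in /etc/pve/storage.cfg could look roughly like this (a sketch only; NFS is used purely as an illustration, and the storage ID, server address, and export path are placeholders):

Code:
# /etc/pve/storage.cfg (sketch, placeholder values)
nfs: shared-vmstore
        server 10.10.10.100
        export /export/vmstore
        content images,iso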

Can I set "--with-local-disks 1" globally, so that I can use the migrate button in the web interface?
Since I'm using ZFS, I can't mark my storage as "shared", right?

Thanks
 
Code:
qm migrate <VMID> <Target node> --online (1|0) --with-local-disks 1

Still not working:

root@cvs4:~# qm migrate 305 cvs1 --online 0 --with-local-disks 1
2018-05-15 15:31:52 starting migration of VM 305 to node 'cvs1' (10.10.10.1)
2018-05-15 15:31:52 found local disk 'KVM:vm-305-disk-1' (in current VM config)
2018-05-15 15:31:52 found local disk 'local:iso/debian-9.3.0-amd64-netinst.iso' (referenced by snapshot(s))
2018-05-15 15:31:52 can't migrate local disk 'local:iso/debian-9.3.0-amd64-netinst.iso': local cdrom image
2018-05-15 15:31:52 ERROR: Failed to sync data - can't migrate VM - check log
2018-05-15 15:31:52 aborting phase 1 - cleanup resources
2018-05-15 15:31:52 ERROR: migration aborted (duration 00:00:00): Failed to sync data - can't migrate VM - check log
migration aborted
 
Can I set "--with-local-disks 1" globally, so that I can use the migrate button in the web interface?
No, and it will break your replication if you do it online.

Still not working:
Yes, because you have a replicated volume on the target node.
You can only do an offline migration.
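
To see which replication jobs already put a copy of the volume on that node, something like this should list them:

Code:
# list configured replication jobs and their status (sketch)
pvesr status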
 
No, and it will break your replication if you do it online.


Yes, because you have a replicated volume on the target node.
You can only do an offline migration.

Do I have to delete the replication?
If I delete the replication, the migration will be very slow for large disks!
Is this the only way?
What if I delete all the snapshots (the ones with the "local" entry for the ISO)?
 
I have solved it by doing this:
I replaced every old snapshot's local CD-ROM entry in my config file with the current one (no CD-ROM).
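
Concretely, that means an edit roughly like this in /etc/pve/qemu-server/305.conf (a sketch; ide2 as the CD-ROM device is an assumption here, so check which device your snapshot sections actually reference):

Code:
# find the offending snapshot entries
grep -n 'local:iso' /etc/pve/qemu-server/305.conf
# in each snapshot section of the file, replace the line
#   ide2: local:iso/debian-9.3.0-amd64-netinst.iso,media=cdrom
# with the current (empty) CD-ROM entry
#   ide2: none,media=cdrom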

After this, it works well:

2018-05-15 16:16:34 starting migration of VM 305 to node 'cvs1' (10.10.10.1)
2018-05-15 16:16:34 found local disk 'KVM:vm-305-disk-1' (in current VM config)
2018-05-15 16:16:34 copying disk images
2018-05-15 16:16:34 start replication job
2018-05-15 16:16:34 guest => VM 305, running => 0
2018-05-15 16:16:34 volumes => KVM:vm-305-disk-1
2018-05-15 16:16:35 create snapshot '__replicate_305-1_1526393794__' on KVM:vm-305-disk-1
2018-05-15 16:16:35 incremental sync 'KVM:vm-305-disk-1' (__replicate_305-1_1526393705__ => __replicate_305-1_1526393794__)
2018-05-15 16:16:36 send from @__replicate_305-1_1526393705__ to STORAGE/VM/KVM/vm-305-disk-1@__replicate_305-1_1526393794__ estimated size is 2.76M
2018-05-15 16:16:36 total estimated size is 2.76M
2018-05-15 16:16:36 STORAGE/VM/KVM/vm-305-disk-1@__replicate_305-1_1526393705__ name STORAGE/VM/KVM/vm-305-disk-1@__replicate_305-1_1526393705__ -
2018-05-15 16:16:36 TIME SENT SNAPSHOT
2018-05-15 16:16:37 delete previous replication snapshot '__replicate_305-1_1526393705__' on KVM:vm-305-disk-1
2018-05-15 16:16:38 (remote_finalize_local_job) delete stale replication snapshot '__replicate_305-1_1526393705__' on KVM:vm-305-disk-1
2018-05-15 16:16:38 end replication job
2018-05-15 16:16:38 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=cvs1' root@10.10.10.1 pvesr set-state 305 \''{"local/cvs4":{"storeid_list":["KVM"],"last_node":"cvs4","fail_count":0,"last_sync":1526393794,"duration":3.282302,"last_try":1526393794,"last_iteration":1526393794}}'\'
2018-05-15 16:16:39 migration finished successfully (duration 00:00:05)
TASK OK
----------------------------------------------------------------------------

Is this OK?
 
Yes, this is ok.
 
