migration aborted

Jan 30, 2018
Stuttgart, Germany
Hello,
I can't make any progress with my aborted migration problem, and so far I have not found a solution in the forum. Does anybody have the same problem?

migration output:
Code:
2018-05-23 09:41:03 starting migration of VM 160 to node 'kvm6' (10.0.0.2)

2018-05-23 09:41:03 found local disk 'backup:160/vm-160-disk-1.raw' (via storage)
2018-05-23 09:41:03 found local disk 'backup:160/vm-160-disk-2.raw' (via storage)
2018-05-23 09:41:03 found local disk 'local:160/vm-160-disk-1.raw' (via storage)
2018-05-23 09:41:03 found local disk 'local:160/vm-160-disk-2.raw' (via storage)
2018-05-23 09:41:03 found local disk 'templates:160/vm-160-disk-1.raw' (via storage)
2018-05-23 09:41:03 found local disk 'templates:160/vm-160-disk-2.raw' (via storage)
2018-05-23 09:41:03 copying disk images
Formatting '/var/lib/vz/images/160/vm-160-disk-1.raw', fmt=raw size=4294967296
1048576+0 records in
1048576+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 9.30957 s, 461 MB/s
42+261971 records in
42+261971 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 13.4777 s, 319 MB/s
Formatting '/var/lib/vz/images/160/vm-160-disk-2.raw', fmt=raw size=2147483648
524288+0 records in
524288+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 4.68064 s, 459 MB/s
41+130907 records in
41+130907 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 7.05833 s, 304 MB/s
file '/var/lib/vz/images/160/vm-160-disk-2.raw' already exists
command 'dd 'if=/var/lib/vz/images/160/vm-160-disk-2.raw' 'bs=4k'' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2018-05-23 09:41:26 ERROR: Failed to sync data - command 'set -o pipefail && pvesm export templates:160/vm-160-disk-2.raw raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=kvm6' root@10.0.1.2 -- pvesm import templates:160/vm-160-disk-2.raw raw+size - -with-snapshots 0' failed: exit code 255
2018-05-23 09:41:26 aborting phase 1 - cleanup resources
2018-05-23 09:41:26 ERROR: found stale volume copy 'backup:160/vm-160-disk-1.raw' on node 'kvm6'
2018-05-23 09:41:26 ERROR: found stale volume copy 'local:160/vm-160-disk-2.raw' on node 'kvm6'
2018-05-23 09:41:26 ERROR: found stale volume copy 'templates:160/vm-160-disk-2.raw' on node 'kvm6'
2018-05-23 09:41:26 ERROR: migration aborted (duration 00:00:23): Failed to sync data - command 'set -o pipefail && pvesm export templates:160/vm-160-disk-2.raw raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=kvm6' root@10.0.1.2 -- pvesm import templates:160/vm-160-disk-2.raw raw+size - -with-snapshots 0' failed: exit code 255
TASK ERROR: migration aborted


Code:
root:/var/lib/vz/images# pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.15.17-1-pve)
pve-manager: 5.2-1 (running version: 5.2-1/0fcd7879)
pve-kernel-4.15: 5.2-1
pve-kernel-4.13: 5.1-44
pve-kernel-4.15.17-1-pve: 4.15.17-9
pve-kernel-4.13.16-2-pve: 4.13.16-48
pve-kernel-4.13.13-2-pve: 4.13.13-33
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-31
libpve-guest-common-perl: 2.0-16
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-23
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
proxmox-widget-toolkit: 1.0-18
pve-cluster: 5.0-27
pve-container: 2.0-23
pve-docs: 5.2-4
pve-firewall: 3.0-9
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-5
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
qemu-server: 5.0-26
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.8-pve1~bpo9


Code:
root:~# cat /etc/pve/storage.cfg
dir: templates
    path /var/lib/vz
    content iso,vztmpl

dir: backup
    path /var/lib/vz
    content backup
    maxfiles 3

lvmthin: local-lvm
    thinpool data
    vgname pve
    content rootdir,images

dir: images-loc
    path /var/lib/vz
    content images
    shared 1

dir: local
    path /var/lib/vz
    content images,vztmpl,rootdir,iso
    maxfiles 0
 
path /var/lib/vz
All of your directory storages point to the same path and their content types overlap. You only need one storage definition per path (a consolidated example is sketched below).

2018-05-23 09:41:03 found local disk 'backup:160/vm-160-disk-1.raw' (via storage)
2018-05-23 09:41:03 found local disk 'backup:160/vm-160-disk-2.raw' (via storage)
With your current storage definition, these disks should not exist there, unless they are still in the VM config.

The disks referenced in the migration log are all one and the same disk, found once per overlapping storage; that is almost certainly where the migration fails.
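As a rough sketch of what a consolidated configuration could look like (purely an illustration; the content types are merged from the posted storage.cfg and maxfiles is carried over from the backup storage, adjust to your needs):

Code:
dir: local
    path /var/lib/vz
    content images,rootdir,iso,vztmpl,backup
    maxfiles 3

lvmthin: local-lvm
    thinpool data
    vgname pve
    content rootdir,images

Any disk that the VM config still references as 'backup:160/...' or 'templates:160/...' would then have to be re-pointed to the remaining storage ID (the file on disk stays the same, only the storage prefix in the config changes); otherwise the migration will keep treating them as separate local disks.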
 
Hi,
I'm having a similar problem. I have 2 nodes, and when I try to migrate from node three to node four I get the error
"storage 'local-lvm' does not exists (500)". Making changes to the storage.cfg files on either server doesn't seem to help; it keeps coming back with the same error. I'm running 5.2.8 on both nodes. Is there any way to tell Proxmox which storage to use on the node you are sending the VM to?
 
