Offline migration fails if VM disk names start with base-

FrancisVC

Member
Feb 8, 2019
Hi,
I've set up a cluster with 2 nodes in order to migrate to v6.0.
I'm migrating all my QEMU and LXC guests offline from node 1 to node 2, but I'm having trouble with some templates that previously had snapshots.

Code:
2019-10-30 13:06:54 starting migration of VM 102 to node 'proxmox-hp' (192.168.8.15)
2019-10-30 13:06:54 found local disk 'local-raid0:base-102-disk-0' (in current VM config)
2019-10-30 13:06:54 copying disk images
illegal name 'base-102-disk-0' - sould be 'vm-102-*'
command 'dd 'if=/dev/raid0/base-102-disk-0' 'bs=64k'' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2019-10-30 13:06:55 ERROR: Failed to sync data - command 'set -o pipefail && pvesm export local-raid0:base-102-disk-0 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=proxmox-hp' root@192.168.8.15 -- pvesm import local-raid0:base-102-disk-0 raw+size - -with-snapshots 0' failed: exit code 255
2019-10-30 13:06:55 aborting phase 1 - cleanup resources
2019-10-30 13:06:55 ERROR: found stale volume copy 'local-raid0:base-102-disk-0' on node 'proxmox-hp'
2019-10-30 13:06:55 ERROR: migration aborted (duration 00:00:02): Failed to sync data - command 'set -o pipefail && pvesm export local-raid0:base-102-disk-0 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=proxmox-hp' root@192.168.8.15 -- pvesm import local-raid0:base-102-disk-0 raw+size - -with-snapshots 0' failed: exit code 255
TASK ERROR: migration aborted

If I make a full clone of the VM, it again creates disks with names starting with base-XXXX.
Offline migration of VMs with disk names starting with vm-XXXX is not a problem.
Is there a way to migrate those VMs?
I'm using Proxmox v5.4-13 on both nodes.

Another thing: why can't I migrate VMs with snapshots?

Code:
2019-10-30 13:23:05 starting migration of VM 118 to node 'proxmox-hp' (192.168.8.15)
2019-10-30 13:23:05 found local disk 'local-raid0:vm-118-disk-0' (in current VM config)
2019-10-30 13:23:05 found local disk 'local-raid0:vm-118-state-ParaPruebasAndared' (in current VM config)
2019-10-30 13:23:05 found local disk 'local:iso/virtio-win-0.1.141.iso' (referenced by snapshot(s))
2019-10-30 13:23:05 can't migrate local disk 'local-raid0:vm-118-disk-0': non-migratable snapshot exists
2019-10-30 13:23:05 can't migrate local disk 'local:iso/virtio-win-0.1.141.iso': local cdrom image
2019-10-30 13:23:05 ERROR: Failed to sync data - can't migrate VM - check log
2019-10-30 13:23:05 aborting phase 1 - cleanup resources
2019-10-30 13:23:05 ERROR: migration aborted (duration 00:00:00): Failed to sync data - can't migrate VM - check log
TASK ERROR: migration aborted

The error is: non-migratable snapshot exists.
Thanks.
 
Hi,
the first error occurs because we use base-<VMID>-* disk names for the disks of VM templates; linked clones then reference those. For non-template VMs the disk names are required to start with vm-<VMID>-*.
In the second part you have two errors. One is the CD image: you need to unplug it from the VM before you can migrate.
The other one is because of snapshots. We currently only support migrating local disks with snapshots when the underlying storage is ZFS or the image format is qcow2. Other kinds of snapshots need to be deleted before you can migrate.
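To illustrate, the two blockers for VM 118 could be cleared from the CLI roughly like this. This is a sketch, not from the thread: it assumes the ISO is attached as ide2 and that the snapshot is named ParaPruebasAndared (guessed from the vm-118-state-ParaPruebasAndared state volume); check the actual VM config first.

```shell
# Eject the local ISO from the CD drive (assumed drive: ide2 --
# verify in /etc/pve/qemu-server/118.conf before running)
qm set 118 --ide2 none,media=cdrom

# Delete the blocking snapshot (name assumed from the state volume)
qm delsnapshot 118 ParaPruebasAndared

# Retry the offline migration
qm migrate 118 proxmox-hp
```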
 
OK, in the second part I can manage to avoid those errors, but what can I do about the first error? I know that I can avoid it by cloning the template, migrating the clone and converting it back to a template, then doing the same again to migrate back to node 1 after the upgrade to v6.0. Another way would be a backup/restore of the template, but I would like to know if a more direct way is possible; I don't know why the names of the disks affect the migration process.
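For reference, the clone/migrate/re-template route described above could be scripted roughly as follows (the target VMID 200 and the clone name are hypothetical; pick any free VMID on the cluster):

```shell
# Full clone of template 102 -- a full clone gets regular vm-200-* disk names
qm clone 102 200 --full --name tmpl-migrate

# Offline migration of the clone now works, since no base-* disk is involved
qm migrate 200 proxmox-hp

# On the target node, convert the clone back into a template
qm template 200
```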
 
Seems like this was fixed in a later version. In 5.4 I get this error as well when I try to migrate a template. In 6.0 it works.
 
Turns out that this is still a problem: bug report. It does work when the underlying storages are ZFS, which was the case when I tested migrating a template on 6.0.
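Until that is fixed, one manual workaround could look like the sketch below. This is untested and assumption-heavy: it assumes storage 'local-raid0' is LVM on VG 'raid0' and, critically, that no linked clones reference the base volume (renaming it would break them). Back up the VM config first and proceed at your own risk.

```shell
# Keep a copy of the config before touching anything
cp /etc/pve/qemu-server/102.conf /root/102.conf.bak

# Rename the LVM volume so it matches the vm-<VMID>-* pattern,
# update the config to match, and drop the template flag
lvrename raid0 base-102-disk-0 vm-102-disk-0
sed -i -e 's/base-102-disk-0/vm-102-disk-0/' -e '/^template:/d' \
    /etc/pve/qemu-server/102.conf

# The guest now migrates like a regular VM
qm migrate 102 proxmox-hp

# On the target node, convert it back into a template
# (this also renames the disk back to base-102-disk-0)
qm template 102
```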
 