Migration of template fails on 5.1

slyoldfox

We have a VM template that was created by choosing "Convert to template".
However, migrating this template fails with the following error:

Code:
2018-01-06 13:08:54 found local disk 'local-lvm:base-111-disk-1' (in current VM config)
2018-01-06 13:08:54 copying disk images
illegal name 'base-111-disk-1' - sould be 'vm-111-*'
dd: error writing 'standard output': Connection reset by peer
14+0 records in
13+0 records out
903552 bytes (904 kB, 882 KiB) copied, 0.422384 s, 2.1 MB/s
command 'dd 'if=/dev/pve/base-111-disk-1' 'bs=64k'' failed: exit code 1
exit code 255
send/receive failed, cleaning up snapshot(s)..
2018-01-06 13:08:56 ERROR: Failed to sync data - command 'set -o pipefail && pvesm export local-lvm:base-111-disk-1 raw+size - -with-snapshots 0' failed: exit code 1
2018-01-06 13:08:56 aborting phase 1 - cleanup resources
2018-01-06 13:08:56 ERROR: found stale volume copy 'local-lvm:base-111-disk-1' on node 'fe-prodsup10'
2018-01-06 13:08:56 ERROR: migration aborted (duration 00:00:03): Failed to sync data - command 'set -o pipefail && pvesm export local-lvm:base-111-disk-1 raw+size - -with-snapshots 0' failed: exit code 1
TASK ERROR: migration aborted

Indeed, this template's disk was automatically renamed from vm-111-disk-1 to base-111-disk-1 when it was converted to a template.
My workaround for now is to do a full clone of the template, migrate the clone, and then convert the clone back into a template (at which point vm-149-disk-1 gets renamed to base-149-disk-1 again, making further migration impossible).
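Roughly, that workaround in CLI form (a sketch using the VM IDs and node name from this thread; adjust for your setup):

Code:
# Full clone of template 111 into a new VM 149; the clone's disk gets a vm-149-* name
qm clone 111 149 --full
# Offline-migrate the stopped clone; the vm-149-disk-1 name passes the export check
qm migrate 149 fe-prodsup10
# Convert the clone back to a template on the target node;
# its disk is renamed to base-149-disk-1 again
qm template 149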

Is this a known bug? We are running Proxmox 5.1 on ext4 disks.
Looking at https://git.proxmox.com/?p=pve-storage.git;a=blob;f=PVE/Storage/DRBDPlugin.pm#l174, shouldn't that check also allow a match on ^base-$vmid-?
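The rejected command can also be reproduced by hand, which suggests the name check lives in the storage layer rather than in the migration code itself (a sketch; this is the same pvesm export call shown in the task log above):

Code:
# Exporting the base volume directly fails the same way as during migration
pvesm export local-lvm:base-111-disk-1 raw+size - -with-snapshots 0 > /dev/null
# illegal name 'base-111-disk-1' - sould be 'vm-111-*'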

Code:
proxmox-ve: 5.1-32 (running kernel: 4.13.13-2-pve)
pve-manager: 5.1-41 (running version: 5.1-41/0b958203)
pve-kernel-4.13.4-1-pve: 4.13.4-26
pve-kernel-4.13.13-2-pve: 4.13.13-32
libpve-http-server-perl: 2.0-8
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-19
qemu-server: 5.0-18
pve-firmware: 2.0-3
libpve-common-perl: 5.0-25
libpve-guest-common-perl: 2.0-14
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-17
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-3
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-5
pve-container: 2.0-18
pve-firewall: 3.0-5
pve-ha-manager: 2.0-4
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.1-2
lxcfs: 2.0.8-1
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.3-pve1~bpo9
openvswitch-switch: not correctly installed
 
Hello,
It looks like I have the same issue, running:
Code:
# pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.21-1-pve)
pve-manager: 6.0-7 (running version: 6.0-7/28984024)
pve-kernel-5.0: 6.0-7
pve-kernel-helper: 6.0-7
pve-kernel-4.15: 5.4-8
pve-kernel-5.0.21-1-pve: 5.0.21-2
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-10-pve: 4.15.18-32
ceph-fuse: 14.2.2-pve1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.11-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-4
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-8
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-64
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-7
pve-cluster: 6.0-7
pve-container: 3.0-7
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-3
pve-qemu-kvm: 4.0.0-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-7
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve2
When I try to migrate, I get:
Code:
2019-09-30 12:10:00 starting migration of VM 115 to node 'khd3' (192.168.89.7)
2019-09-30 12:10:00 found local disk 'local-lvm:base-115-disk-0' (in current VM config)
2019-09-30 12:10:00 copying disk images
illegal name 'base-115-disk-0' - sould be 'vm-115-*'
command 'dd 'if=/dev/pve/base-115-disk-0' 'bs=64k'' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
2019-09-30 12:10:01 ERROR: Failed to sync data - command 'set -o pipefail && pvesm export local-lvm:base-115-disk-0 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=khd3' root@192.168.89.7 -- pvesm import local-lvm:base-115-disk-0 raw+size - -with-snapshots 0' failed: exit code 255
2019-09-30 12:10:01 aborting phase 1 - cleanup resources
2019-09-30 12:10:01 ERROR: found stale volume copy 'local-lvm:base-115-disk-0' on node 'khd3'
2019-09-30 12:10:01 ERROR: migration aborted (duration 00:00:02): Failed to sync data - command 'set -o pipefail && pvesm export local-lvm:base-115-disk-0 raw+size - -with-snapshots 0 | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=khd3' root@192.168.89.7 -- pvesm import local-lvm:base-115-disk-0 raw+size - -with-snapshots 0' failed: exit code 255
TASK ERROR: migration aborted
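For what it's worth, a quick way to confirm the volume naming on the source node (a sketch; pve is the default volume group behind local-lvm):

Code:
# Templates keep their disks as base-<vmid>-* logical volumes
lvs --noheadings -o lv_name pve | grep -- 115
#   base-115-disk-0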
Best Regards
 
