PVE 5.0-32 (Bug) Migrated Disk from local to RBD, failed.

devinacosta
I found a bug in the latest 5.0-32 of PVE. I had created VMs locally on an SSD datastore and then migrated their disks to the Ceph data store. The configuration sees the change, however when I try to do a LIVE migration it still thinks there is a reference to the old SSD storage.
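For reference, the disk moves themselves can be done from the GUI ("Move disk") or on the command line, roughly like this, with "ceph-rbd" standing in for whatever the target RBD storage is called:

# move the VM's disk from the local SSD storage to the Ceph RBD storage;
# without "--delete 1" the old image is kept and shows up as an "unused" disk
qm move_disk 100 virtio0 ceph-rbd --delete 1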

What I noticed on the failing VM is that the original disk image had been left behind on local storage. Even though the cluster configuration shows the disk pointing to ceph-rbd, the VM would not migrate until I removed that local disk image, not even when the VM was powered off. Deleting the file from data_ssd:100/*.qcow2 resolved the issue. Seems like a bug.
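For anyone else hitting this, the cleanup was roughly the following (VM ID and storage names are from my setup, adjust to yours):

# check what the cluster configuration actually references for the VM
qm config 100

# list what is still lying around on the old SSD storage
pvesm list data_ssd

# delete the leftover image that was blocking the migration
pvesm free data_ssd:100/vm-100-disk-2.qcow2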

2017-09-23 17:48:10 starting migration of VM 100 to node 'pve02' (10.241.147.32)
2017-09-23 17:48:10 found local disk 'data_ssd:100/vm-100-disk-2.qcow2' (via storage)
2017-09-23 17:48:10 copying disk images
cannot import format raw+size into a file of format qcow2
send/receive failed, cleaning up snapshot(s)..
2017-09-23 17:48:10 ERROR: Failed to sync data - command 'set -o pipefail && pvesm export data_ssd:100/vm-100-disk-2.qcow2 raw+size - -with-snapshots 0 | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=pve02' root@10.241.147.32 -- pvesm import data_ssd:100/vm-100-disk-2.qcow2 raw+size - -with-snapshots 0' failed: exit code 255
2017-09-23 17:48:10 aborting phase 1 - cleanup resources
2017-09-23 17:48:11 ERROR: found stale volume copy 'data_ssd:100/vm-100-disk-2.qcow2' on node 'pve02'
2017-09-23 17:48:11 ERROR: migration aborted (duration 00:00:02): Failed to sync data - command 'set -o pipefail && pvesm export data_ssd:100/vm-100-disk-2.qcow2 raw+size - -with-snapshots 0 | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=pve02' root@10.241.147.32 -- pvesm import data_ssd:100/vm-100-disk-2.qcow2 raw+size - -with-snapshots 0' failed: exit code 255
TASK ERROR: migration aborted
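The aborted run also left a partial copy behind on the target node (the "stale volume copy" in the log above), which apparently has to be cleaned up on pve02 as well, e.g.:

# on the target node pve02: remove the stale partial copy left by the aborted migration
pvesm free data_ssd:100/vm-100-disk-2.qcow2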

(Screenshot attached: upload_2017-9-23_17-51-15.png)
 
Hi,

could you please send the config file of this VM?

cat /etc/pve/qemu-server/100.conf
 
#oVIRT Manager
bootdisk: virtio0
cores: 2
ide2: none,media=cdrom
memory: 16384
name: ovirt-manager
net0: virtio=5A:4C:9C:12:5E:2B,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=3d12d4e7-ac3d-4175-b83a-81788410ec26
sockets: 2
unused0: local-lvm:vm-100-disk-1
virtio0: nfs:100/vm-100-disk-1.qcow2,size=50G

It appears it still had a reference to the old image for some reason, even though it didn't show in the GUI?
 
Try flushing your browser cache to see the unused disk.

You can't migrate while that unused image exists: you must either delete it, or migrate from the command line with the option --with-local-disks.
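For example, something along these lines (VM ID, storage and node taken from this thread; adjust to your setup):

# remove the stale unused0 entry from the VM config ...
qm set 100 --delete unused0
# ... and, if the old image is still present on the storage afterwards, delete it too
pvesm free local-lvm:vm-100-disk-1

# or, alternatively, migrate from the CLI and have local disks copied along
qm migrate 100 pve02 --online --with-local-disks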
 
