[SOLVED] Disk move problem

Sakis

Lately I have started moving a lot of images from NAS to Ceph pools.
I have a problem with one image that failed to move.

Move disk log
Code:
create full clone of drive virtio0 (NAS_3:600/vm-600-disk-1.qcow2)
2014-10-22 19:15:41.421050 7fc674087760 -1 did not load config file, using default settings.
transferred: 0 bytes remaining: 273804165120 bytes total: 273804165120 bytes progression: 0.00 %
transferred: 0 bytes remaining: 273804165120 bytes total: 273804165120 bytes progression: 0.00 %
transferred: 0 bytes remaining: 273804165120 bytes total: 273804165120 bytes progression: 0.00 %
transferred: 0 bytes remaining: 273804165120 bytes total: 273804165120 bytes progression: 0.00 %
transferred: 0 bytes remaining: 273804165120 bytes total: 273804165120 bytes progression: 0.00 %
transferred: 0 bytes remaining: 273804165120 bytes total: 273804165120 bytes progression: 0.00 %
transferred: 0 bytes remaining: 273804165120 bytes total: 273804165120 bytes progression: 0.00 %
transferred: 0 bytes remaining: 273804165120 bytes total: 273804165120 bytes progression: 0.00 %
transferred: 0 bytes remaining: 273804165120 bytes total: 273804165120 bytes progression: 0.00 %
transferred: 0 bytes remaining: 273804165120 bytes total: 273804165120 bytes progression: 0.00 %
transferred: 0 bytes remaining: 273804165120 bytes total: 273804165120 bytes progression: 0.00 %
transferred: 22675456 bytes remaining: 273781489664 bytes total: 273804165120 bytes progression: 0.01 %
transferred: 43646976 bytes remaining: 273760518144 bytes total: 273804165120 bytes progression: 0.02 %
...
transferred: 273786798080 bytes remaining: 17367040 bytes total: 273804165120 bytes progression: 99.99 %
transferred: 273804165120 bytes remaining: 0 bytes total: 273804165120 bytes progression: 100.00 %
2014-10-22 23:44:11.676590 7fe1c1be5760 -1 did not load config file, using default settings.
Removing all snapshots: 100% complete...done.
2014-10-22 23:44:11.776254 7fcb54573760 -1 did not load config file, using default settings.
Removing image: 1% complete...
Removing image: 2% complete...
...
Removing image: 99% complete...
Removing image: 100% complete...done.
TASK ERROR: storage migration failed: mirroring error: VM 600 qmp command 'block-job-complete' failed - The active block job for device 'drive-virtio0' cannot be completed

The result is that the KVM is still running on the old storage. I don't have any image on Ceph. The VM config is unchanged.
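
To double-check the Ceph side I listed the pool contents. Just a rough sketch; the storage and pool names below are placeholders for my actual Ceph storage:

Code:
# list the images Proxmox sees on the Ceph storage (storage name is a placeholder)
pvesm list <ceph-storage>
# or query the pool directly with the rbd tool (pool name is a placeholder)
rbd ls <pool>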

qemu-img info
Code:
file format: qcow2
virtual size: 255G (273804165120 bytes)
disk size: 382G
cluster_size: 65536
Format specific information:
    compat: 0.10

There is definitely something messed up with an old snapshot that I had of that machine.
The Proxmox GUI didn't show any snapshots available before the move.
Also, qemu-img snapshot -l on the disk returns nothing.
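
For reference, roughly what I ran against the image on the NAS (the path is just an example of where my storage happens to be mounted, adjust as needed):

Code:
# list internal qcow2 snapshots (path is an example of a Proxmox NFS/NAS mount)
qemu-img snapshot -l /mnt/pve/NAS_3/images/600/vm-600-disk-1.qcow2
# full image details, including the snapshot table if one is present
qemu-img info /mnt/pve/NAS_3/images/600/vm-600-disk-1.qcow2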

I would like to avoid shrinking and then moving, or a backup/restore, because I will have big downtime.

Before trying to move this KVM while it is stopped, as suggested here http://forum.proxmox.com/threads/15825-Move-disk-error, I would like to make sure first that it is impossible to move it online.

Thanks
 
Hi,

can you run this in the VM monitor:

"info version"

I would like to know which qemu version is running.
(We have already discussed this and have a workaround, but some fixes have been introduced in the latest qemu version.)
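
If it is easier, you can also do it from the host shell. A quick sketch, using VM ID 600 from your log:

Code:
# open the qemu monitor of the VM from the Proxmox host
qm monitor 600
# then at the qm> prompt:
info version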
 
# info version
2.1.0

Code:
root@node4:~# pveversion -v
proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-31-pve: 2.6.32-132
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

I see pve-qemu-kvm 2.1-9 in the available updates.
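
For the upgrade I plan to do the usual apt commands, roughly:

Code:
# refresh the package lists and pull the newer pve-qemu-kvm
apt-get update
apt-get install pve-qemu-kvm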
 
I upgraded the package, did a stop/start of the VM, and then moved the disk without ticking "Delete source". It worked. I then removed the old disk manually.
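
For anyone doing the same from the shell, this is roughly what it comes down to (the Ceph storage name and the NAS path are placeholders for my setup):

Code:
# move the disk to the Ceph storage, keeping the source (no delete option)
qm move_disk 600 virtio0 <ceph-storage>
# once everything looks good, remove the old qcow2 from the NAS mount
rm /mnt/pve/NAS_3/images/600/vm-600-disk-1.qcow2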

Thank you