Storage migration failing

dignus

Renowned Member
Feb 12, 2009
Hi all,

I'm trying to migrate disks between two storage endpoints. Both are NFS, but on different storage arrays, and both arrays are almost idle. This is the error, shown after the migration reaches 100%:

TASK ERROR: storage migration failed: mirroring error: VM 255 qmp command 'query-block-jobs' failed - client closed connection

I've seen this error on the forums in 2014, but not recently. Am I missing something?

PS: out of 10 trials it worked once.
 
What also happens is that the VM gets restarted when the storage migration fails. Disabling HA for the VM doesn't help; the VM is still stopped when the migration fails.
 
could you post the output of "pveversion -v", the storage configuration ("/etc/pve/storage.cfg") and the config of the failing VM?
 
Fabian,

It's the same issue with multiple VMs, running on multiple physical servers. But in the case of the last VM I tried:

root@host02:~# pveversion -v
proxmox-ve: 4.2-48 (running kernel: 4.4.6-1-pve)
pve-manager: 4.2-2 (running version: 4.2-2/725d76f0)
pve-kernel-4.4.6-1-pve: 4.4.6-48
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-39
qemu-server: 4.0-72
pve-firmware: 1.1-8
libpve-common-perl: 4.0-59
libpve-access-control: 4.0-16
libpve-storage-perl: 4.0-50
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-14
pve-container: 1.0-62
pve-firewall: 2.0-25
pve-ha-manager: 1.0-28
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve9~jessie

root@host02:~# cat /etc/pve/nodes/host/qemu-server/161.conf
agent: 1
boot: dcn
bootdisk: virtio0
cores: 4
ide2: none,media=cdrom
memory: 4096
name: lb2.hostname
net0: bridge=vmbr0,virtio=32:66:36:36:63:35
numa: 0
onboot: 1
ostype: l26
protection: 1
smbios1: uuid=b36025bd-5194-42dc-b249-9d2cbccdf6c5
sockets: 1
virtio0: zetavault-ssd:161/vm-161-disk-1.raw,iops_rd=1000,iops_wr=250,iothread=1,size=100G
 
please upgrade to a current version. do all of the affected disks have iothread enabled?
 
Fabian,

Upgrading doesn't help. Disabling iothread seems to be the solution; confirmed this on 4 VMs now.

I'm not sure what the impact is of enabling/disabling iothread. What's your take on this?
 
iothread and drive mirroring don't work in qemu < 2.7. it is disabled in qemu-server since 4.0-86, which is available in all repositories. so upgrading does help (you get a descriptive error message when trying to migrate a disk with iothread=1 while the VM is running), but if you need online storage migration, you need to disable iothread. offline storage migration should be unaffected.
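To spot which disks are affected, look for `iothread=1` on the disk lines of the VM config (like the `virtio0:` line posted above). A minimal sketch of what the cleaned-up line looks like with the flag stripped; the disk line is copied from this thread, and in practice you would change it via the GUI or `qm set` rather than editing text by hand:

```shell
# Disk line as posted earlier in this thread, with iothread=1 enabled
line='virtio0: zetavault-ssd:161/vm-161-disk-1.raw,iops_rd=1000,iops_wr=250,iothread=1,size=100G'

# Strip the iothread=1 option (and its leading comma) to see the target value
echo "$line" | sed -E 's/,?iothread=1//'
# -> virtio0: zetavault-ssd:161/vm-161-disk-1.raw,iops_rd=1000,iops_wr=250,size=100G
```

Note that changing the disk options only takes effect after a full stop/start of the VM, not a reboot from inside the guest.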
 
Hi, as far as I know, storage migration allows you to move a virtual disk to another storage, or to another virtual disk format on the same storage. Storage migration can be done on running virtual machines (but also works offline). By default, the source disk will be kept as an "unused disk" for safety. If you do not need this, just click "Delete source".
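For example, after a move the leftover source disk shows up as an `unusedN:` line in the VM config, which you can list quickly. The config excerpt below is hypothetical (the target storage name `nfs-new` is made up for illustration):

```shell
# Hypothetical VM config after a completed move_disk, source kept as unused
conf='virtio0: nfs-new:161/vm-161-disk-1.raw,size=100G
unused0: zetavault-ssd:161/vm-161-disk-1.raw'

# List leftover source disks kept as "unused" entries
printf '%s\n' "$conf" | grep '^unused'
# -> unused0: zetavault-ssd:161/vm-161-disk-1.raw
```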

 
