Can't live migrate after dist-upgrade

Hi,

I have the same problem here.
Is there a date for when the patch will be available in the regular repository?
 
Hi,

I have the same problem here.
Is there a date for when the patch will be available in the regular repository?

Sorry, no -- on the qemu-devel mailing list, we are waiting for feedback from you, QEMU users, about actual migration results with this patch in place. That is, the maintainer with jurisdiction (Gerd Hoffmann) will pick up my patch only *after* users test it and report success. So, please do that -- from your comment above, it looks like you know your way around git; please consider applying the patch directly on your end, and building your own QEMU binary with it. If everything goes well, the patch can be in QEMU 2.6.

You can download the patch (for git-am) from https://patchwork.ozlabs.org/patch/584876/ by clicking the "mbox" link.
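
Roughly, applying the patch and building a test binary would look like this (the repository URL, paths and configure target are only an illustration, adapt them to your setup):

# fetch QEMU sources and apply the patch from the downloaded mbox
git clone git://git.qemu.org/qemu.git
cd qemu
git am /path/to/patch.mbox          # placeholder path to the mbox file from patchwork

# build a test binary (x86_64 target shown as an example)
./configure --target-list=x86_64-softmmu
make -j"$(nproc)"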

Please respond with your results in the following thread:

http://thread.gmane.org/gmane.comp.emulators.qemu/395014

Thanks
Laszlo
 
Hi, some news.

We have 2 bugs here :(

The first one is from qemu 2.5 (qemu 2.5 -> 2.4).
The patch from Laszlo Ersek has been applied to the proxmox package; I have built it for you.
http://odisoweb1.odiso.net/pve-qemu-kvm_2.5-7_amd64.deb

But a bug was introduced in the qemu-server package some months ago,
and the -machine flag is not sent anymore during the migration.
The patch has been applied to the proxmox git repository, but the package is not yet released; you can download it here:

http://odisoweb1.odiso.net/qemu-server_4.0-56_amd64.deb

(restart /etc/init.d/pvedaemon after install)
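
Installation is roughly this (filenames as downloaded above):

# install the rebuilt packages
dpkg -i pve-qemu-kvm_2.5-7_amd64.deb
dpkg -i qemu-server_4.0-56_amd64.deb

# restart pvedaemon so the new qemu-server code is loaded
/etc/init.d/pvedaemon restart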
 
Hi,

I am running the most current version, 4.1-15, from the pve-no-subscription repo.

Today I installed the newest Proxmox packages: pve-qemu-kvm_2.5-8_amd64.deb and qemu-server_4.0-62_amd64.deb.
I now cannot migrate running VMs from kvm 2.4 to 2.5. Even if I restart the VMs so that they are running kvm 2.5, I cannot migrate them to another kvm 2.5 host, and I have restarted all pve daemons.

What packages/patches should I install to be able to migrate again?
 
I'm not getting an error message; my guests simply freeze and become unresponsive...

It seems like the SSH tunnel issue from long ago; I think the SSH tunnel is being closed before the task has completely finished. However, setting migration_unsecure: 1 in datacenter.cfg doesn't seem to disable the tunnel. I'm saying this because when I migrate, the on-screen log says it's opening an SSH tunnel.

If I put something like "migration_unsecure 1" without the colon, the log says it's ignoring the flag... so it is reading the file...
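
For the record, as far as I can tell the colon form is the expected syntax in /etc/pve/datacenter.cfg:

# /etc/pve/datacenter.cfg
migration_unsecure: 1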

I'm using the pve-no-subscription repos as well. Do the enterprise repos have the same problem?
 
So I tried apt-get update && upgrade on one of my servers, and now it looks like this:

proxmox-ve: 4.1-37 (running kernel: 4.2.8-1-pve)
pve-manager: 4.1-13 (running version: 4.1-13/cfb599fb)
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.2.8-1-pve: 4.2.8-37
pve-kernel-4.2.0-1-pve: 4.2.0-13
pve-kernel-4.2.1-1-pve: 4.2.1-14
pve-kernel-4.2.2-1-pve: 4.2.2-16
pve-kernel-4.2.3-2-pve: 4.2.3-22
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-32
qemu-server: 4.0-55
pve-firmware: 1.1-7
libpve-common-perl: 4.0-48
libpve-access-control: 4.0-11
libpve-storage-perl: 4.0-40
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-5
pve-container: 1.0-44
pve-firewall: 2.0-17
pve-ha-manager: 1.0-21
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 0.13-pve3
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve7~jessie

The old ones in my cluster look like this:

proxmox-ve: 4.1-34 (running kernel: 4.2.6-1-pve)
pve-manager: 4.1-5 (running version: 4.1-5/f910ef5c)
pve-kernel-4.2.6-1-pve: 4.2.6-34
pve-kernel-4.2.0-1-pve: 4.2.0-13
pve-kernel-4.2.1-1-pve: 4.2.1-14
pve-kernel-4.2.2-1-pve: 4.2.2-16
pve-kernel-4.2.3-2-pve: 4.2.3-22
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 0.17.2-1
pve-cluster: 4.0-30
qemu-server: 4.0-46
pve-firmware: 1.1-7
libpve-common-perl: 4.0-43
libpve-access-control: 4.0-11
libpve-storage-perl: 4.0-38
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.4-21
pve-container: 1.0-37
pve-firewall: 2.0-15
pve-ha-manager: 1.0-18
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-5
lxcfs: 0.13-pve3
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve7~jessie

I can't migrate a Debian VM from the old server to the new one (qemu 2.4.1 -> 2.5.0); it simply fails to start with "Failed to create message: Input/output error". Windows VM migration and startup run fine.

I created the same Debian VM on qemu 2.5.0 (the same configuration); it starts fine, but I am not able to migrate it back to the old hosts in the cluster (qemu 2.4.1).

Did you test it before putting it in the wild?

Any solution?
 
So I tried apt-get update && upgrade on one of my servers... I can't migrate a Debian VM from the old server to the new one (qemu 2.4.1 -> 2.5.0); it simply fails to start with "Failed to create message: Input/output error"... Any solution?

You also need to update the qemu-server package on the source node.
(You can do a dist-upgrade on the source node; it has no impact on running VMs.)
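
Roughly, on the source node:

# pull in the updated qemu-server package (among others)
apt-get update
apt-get dist-upgrade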
 
Hi,

I am running the most current versions on all my servers, and migration is not working.

Not from 2.4 to 2.5 and not from 2.5 to 2.5.

I have tried with and without migration_unsecure.

What packages should I install? The qemu-server 4.0-62 from the repo, or the older privately patched versions mentioned above?

proxmox-ve: 4.1-39 (running kernel: 4.2.8-1-pve)
pve-manager: 4.1-15 (running version: 4.1-15/8cd55b52)
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.2.8-1-pve: 4.2.8-39
pve-kernel-4.2.2-1-pve: 4.2.2-16
pve-kernel-4.2.3-2-pve: 4.2.3-22
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-33
qemu-server: 4.0-62
pve-firmware: 1.1-7
libpve-common-perl: 4.0-49
libpve-access-control: 4.0-11
libpve-storage-perl: 4.0-42
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-8
pve-container: 1.0-46
pve-firewall: 2.0-18
pve-ha-manager: 1.0-23
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve1
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: not correctly installed
 
When qemu-server is updated, you need to do a full stop and restart of the VM to use the new qemu version or, if possible, do a live migration.
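
For example (192 is the VMID used later in this thread; a reboot from inside the guest is not enough, as it keeps the same qemu process):

# stop the guest completely and start it again so it runs on the newly installed qemu binary
qm stop 192
qm start 192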
 
Hi,
my problem is: I have stopped and started the virtual machine.
I have double-checked in the qemu monitor with "info version" that my VM is
2.5.0pve-qemu-kvm_2.5-8.
The server I migrate to also has the same kvm version.
Still I cannot do live migrations.
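(For reference, I checked it roughly like this, with 192 being my VMID:)

# open the guest's QEMU monitor and query the running binary version
qm monitor 192
info version        # entered at the monitor prompt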
I receive no useful debug info, just:
Mär 01 12:53:26 starting migration of VM 192 to node 'san02' (213.164.159.135)
Mär 01 12:53:26 copying disk images
Mär 01 12:53:26 starting VM 192 on remote node 'san02'
Mär 01 12:53:29 starting ssh migration tunnel
Mär 01 12:53:30 starting online/live migration on 213.164.159.135:60000
Mär 01 12:53:30 migrate_set_speed: 8589934592
Mär 01 12:53:30 migrate_set_downtime: 0.1
Mär 01 12:53:32 ERROR: online migrate failure - aborting
Mär 01 12:53:32 aborting phase 2 - cleanup resources
Mär 01 12:53:32 migrate_cancel
Mär 01 12:53:34 ERROR: migration finished with problems (duration 00:00:08)
TASK ERROR: migration problems
What can I do to be able to live migrate?
 
Hi,
my problem is: I have stopped and started the virtual machine.
I have double-checked in the qemu monitor with "info version" that my VM is
2.5.0pve-qemu-kvm_2.5-8.
What can I do to be able to live migrate?

What is the version of the qemu-server package (source and destination)?
 
What is the version of the qemu-server package (source and destination)?
Hi,

my source is: qemu-server: 4.0-62
my dest is: qemu-server: 4.0-62

I had problems with migration also before the update. I don't know why; the firewall allows everything, the network is 10G, RAM is quite high (102 GB), storage is Ceph, cache is writeback, and the MTU is 9000.

Maybe something in my setup is special?
 
Hi,

my source is: qemu-server: 4.0-62
my dest is: qemu-server: 4.0-62

I had problems with migration also before the update. I don't know why; the firewall allows everything, the network is 10G, RAM is quite high (102 GB), storage is Ceph, cache is writeback, and the MTU is 9000.

Maybe something in my setup is special?

Can you post the kvm command line on the source node?

ps aux | grep "kvm -id <vmid>"

and when you start the migration, try to catch the same command line on the target server. (You need to catch it after the migration starts and before it finishes.)
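
Something like this is a rough way to catch it on the target node (192 as the example VMID):

# poll until the incoming kvm process appears, then print its full command line
while ! pgrep -f "kvm -id 192" > /dev/null; do sleep 0.2; done
ps aux | grep "[k]vm -id 192"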
 
Okay here we go:

source
/usr/bin/kvm -id 192 -chardev socket,id=qmp,path=/var/run/qemu-server/192.qmp,server,nowait -mon chardev=qmp,mode=control -pidfile /var/run/qemu-server/192.pid -daemonize -smbios type=1,uuid=d028dd37-f2ab-43a1-8c17-e3fe14913161 -name cephtest1 -smp 4,sockets=1,cores=4,maxcpus=4 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000 -vga cirrus -vnc unix:/var/run/qemu-server/192.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 2048 -object memory-backend-ram,size=2048M,id=ram-node0 -numa node,nodeid=0,cpus=0-3,memdev=ram-node0 -k de -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -chardev socket,path=/var/run/qemu-server/192.qga,server,nowait,id=qga0 -device virtio-serial,id=qga0,bus=pci.0,addr=0x8 -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:fecb4d348f19 -drive file=rbd:rbd/vm-192-disk-1:mon_host=10.67.1.11 10.67.1.14 10.67.1.15 10.67.1.18 10.67.1.20:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/cephpool.keyring,if=none,id=drive-virtio0,cache=writeback,format=raw,aio=threads,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -drive if=none,id=drive-ide2,media=cdrom,aio=threads -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -netdev type=tap,id=net0,ifname=tap192i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=4E:42:04:F0:0C:3F,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300

and destination:

/usr/bin/kvm -id 192 -chardev socket,id=qmp,path=/var/run/qemu-server/192.qmp,server,nowait -mon chardev=qmp,mode=control -pidfile /var/run/qemu-server/192.pid -daemonize -smbios type=1,uuid=d028dd37-f2ab-43a1-8c17-e3fe14913161 -name cephtest1 -smp 4,sockets=1,cores=4,maxcpus=4 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000 -vga cirrus -vnc unix:/var/run/qemu-server/192.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 2048 -object memory-backend-ram,size=2048M,id=ram-node0 -numa node,nodeid=0,cpus=0-3,memdev=ram-node0 -k de -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -chardev socket,path=/var/run/qemu-server/192.qga,server,nowait,id=qga0 -device virtio-serial,id=qga0,bus=pci.0,addr=0x8 -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:3b748ad9295 -drive if=none,id=drive-ide2,media=cdrom,aio=threads -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -drive file=rbd:rbd/vm-192-disk-1:mon_host=10.67.1.11 10.67.1.14 10.67.1.15 10.67.1.18 10.67.1.20:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/cephpool.keyring,if=none,id=drive-virtio0,cache=writeback,format=raw,aio=threads,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -netdev type=tap,id=net0,ifname=tap192i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=4E:42:04:F0:0C:3F,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300 -machine type=pc-i440fx-2.5 -incoming tcp:213.164.159.135:60000 -S
 
OK, so the command line seems to be fine.

To get more info, you can try to do the migration manually.

On the target host, copy/paste the same command line, but remove "-daemonize". (Also quote/escape the -drive '...' argument.)
This will launch the target VM in the foreground, waiting for the incoming migration.


Then, in the GUI of the source VM, in the Monitor panel, type:

migrate tcp:213.164.159.135:60000

If the migration hangs, you'll see the error in the output of the target VM process.
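
As a rough outline of that manual run (the kvm command is abbreviated here; use the full destination command line you captured above):

# on the target node: run the captured destination command in the foreground (no -daemonize)
/usr/bin/kvm -id 192 ... -machine type=pc-i440fx-2.5 -incoming tcp:213.164.159.135:60000 -S

# then, from the source VM's Monitor panel, start the migration:
migrate tcp:213.164.159.135:60000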
 
