[SOLVED] migration error

holest

Hello!

After my company finally bought three Community subscriptions, I started to upgrade our Proxmox servers one by one.
First I migrated the VMs from my second node (P2) to the first node (P1), added the licence and upgraded P2.
After that I started to migrate the VMs back in order to upgrade P1.
I successfully migrated:
-2 Debian 7 x64
-1 FreeBSD
-1 Windows XP x64
-1 Windows XP x32
-1 Windows 7 x64

Finally I tried to migrate our main virtual server (AD/DNS/file/etc., Zentyal 3.3), but I got this error:

Code:
Feb 11 08:51:34 starting migration of VM 111 to node 'p2' (10.3.1.2)
Feb 11 08:51:34 copying disk images
Feb 11 08:51:34 starting VM 111 on remote node 'p2'
Feb 11 08:51:35 start failed: command '/usr/bin/kvm -id 111 -chardev 'socket,id=qmp,path=/var/run/qemu-server/111.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/111.vnc,x509,password -pidfile /var/run/qemu-server/111.pid -daemonize -name z1.nemzet.hu -smp 'sockets=2,cores=2' -nodefaults -boot 'menu=on' -vga cirrus -cpu kvm64,+x2apic,+sep -k hu -m 10240 -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -drive 'file=/mnt/pve/nas/template/iso/zentyal-3.2-amd64.iso,if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/mnt/pve/prox1/images/111/vm-111-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,aio=native,cache=none' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap111i0,script=/var/lib/qemu-server/pve-bridge,vhost=on' -device 'virtio-net-pci,mac=2A:21:A9:FF:AE:0F,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -netdev 'type=tap,id=net1,ifname=tap111i1,script=/var/lib/qemu-server/pve-bridge' -device 'e1000,mac=0E:24:F6:5C:3A:2A,netdev=net1,bus=pci.0,addr=0x13,id=net1,bootindex=301' -machine 'type=pc-i440fx-1.4' -incoming tcp:localhost:60000 -S' failed: exit code 1
Feb 11 08:51:35 ERROR: online migrate failure - command '/usr/bin/ssh -o 'BatchMode=yes' root@10.3.1.2 qm start 111 --stateuri tcp --skiplock --migratedfrom p1 --machine pc-i440fx-1.4' failed: exit code 255
Feb 11 08:51:35 aborting phase 2 - cleanup resources
Feb 11 08:51:35 migrate_cancel
Feb 11 08:51:36 ERROR: migration finished with problems (duration 00:00:02)
TASK ERROR: migration problems

Before the Proxmox upgrade I could migrate this VM without any problem (the last time it migrated without issues was in November 2013).
What should I do? Should I try to upgrade P1 with the Zentyal server running live on it?

Holest
 
Hi,
Is /mnt/pve/prox1/images/111/vm-111-disk-1.qcow2 also accessible on the destination node?

Do the nodes have the same CPUs?

What about a short shutdown/migrate/boot? It normally takes only a minute and is often possible during the night.
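For example, roughly like this (VM ID and target taken from your log; treat it only as a sketch and adjust it to your setup):

Code:
# on p1: clean shutdown, offline migration, then start on the target node
qm shutdown 111
qm migrate 111 p2
ssh root@10.3.1.2 qm start 111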

Udo
 
Hello!
Is /mnt/pve/prox1/images/111/vm-111-disk-1.qcow2 also accessible on the destination node?
/mnt/pve/prox1/images/111/vm-111-disk-1.qcow2 is on shared NFS storage, all of the Proxmox nodes can reach it, and it works with every other virtual machine.
Do the nodes have the same CPUs?
No, sadly we don't have identical machines.
P1 (Fujitsu RX300) has an E5645 CPU.
P2 (Fujitsu TX200) has an E5504 CPU.
But as I said before, migration works for all the other virtual machines.
What about a short shutdown/migrate/boot?
That will be my next step early in the morning on 2014.02.15 (this is a newspaper company, running nearly 24/7, 365 days a year).
Could the upgrade have caused this weird problem?

Thanks
Holest
 
But your two computers use very different CPU generations! The E5645 is Westmere and the E5504 is Nehalem. Using different CPU generations might very well be the cause of your problems, since the capabilities announced to the guest OS will be different. Some OSes do not like this, and Windows Server in particular will most likely complain, since its licensing is closely tied to the hardware.
 
I know they are different, but until now (licence + upgrade) everything worked: HA, offline and online migration.
I don't really understand this: if I have one physical machine with CPU X and another with CPU Y, and the virtual machine's CPU type is kvm64, how does the VM know what the physical machine's CPU type is?
 
If you have a VM running Linux, try comparing the output of 'cat /proc/cpuinfo | grep flags' in the VM while it runs on each of the two computers.
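One possible way to compare them side by side (assuming you can SSH into a Linux VM on each node; the hostnames below are only placeholders):

Code:
ssh root@vm-on-p1 'grep flags /proc/cpuinfo | sort -u' > p1-flags.txt
ssh root@vm-on-p2 'grep flags /proc/cpuinfo | sort -u' > p2-flags.txt
diff p1-flags.txt p2-flags.txt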
 
Z1 on P1
Code:
root@z1:~# cat /proc/cpuinfo |grep flags
flags           : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc nopl pni cx16 x2apic hypervisor
flags           : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc nopl pni cx16 x2apic hypervisor
flags           : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc nopl pni cx16 x2apic hypervisor
flags           : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc nopl pni cx16 x2apic hypervisor
root@z1:~#

Z2 on P2
Code:
holest@z2:~$ cat /proc/cpuinfo |grep flags
flags           : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc nopl pni cx16 x2apic hypervisor
flags           : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc nopl pni cx16 x2apic hypervisor
flags           : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc nopl pni cx16 x2apic hypervisor
flags           : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc nopl pni cx16 x2apic hypervisor
holest@z2:~$

I don't really see any difference.
 
No:

Code:
root@p1:/etc/pve/qemu-server# cat 111.conf
bootdisk: virtio0
cores: 2
ide2: nas:iso/zentyal-3.2-amd64.iso,media=cdrom
machine: pc-i440fx-1.4
memory: 10240
name: z1.nemzet.hu
net0: virtio=2A:21:A9:FF:AE:0F,bridge=vmbr0
net1: e1000=0E:24:F6:5C:3A:2A,bridge=vmbr1010
ostype: l26
parent: update20140204
sockets: 2
virtio0: prox1:111/vm-111-disk-1.qcow2,format=qcow2,size=50G

[update20131221]
bootdisk: virtio0
cores: 2
ide2: nas:iso/zentyal-3.2-amd64.iso,media=cdrom
machine: pc-i440fx-1.4
memory: 10240
name: z1.nemzet.hu
net0: virtio=2A:21:A9:FF:AE:0F,bridge=vmbr0
net1: e1000=0E:24:F6:5C:3A:2A,bridge=vmbr1010
ostype: l26
snaptime: 1387617342
sockets: 2
virtio0: prox1:111/vm-111-disk-1.qcow2,format=qcow2,size=50G
vmstate: prox1:111/vm-111-state-update20131221.raw

[update20140116]
bootdisk: virtio0
cores: 2
ide2: nas:iso/zentyal-3.2-amd64.iso,media=cdrom
machine: pc-i440fx-1.4
memory: 10240
name: z1.nemzet.hu
net0: virtio=2A:21:A9:FF:AE:0F,bridge=vmbr0
net1: e1000=0E:24:F6:5C:3A:2A,bridge=vmbr1010
ostype: l26
parent: update20131221
snaptime: 1389869483
sockets: 2
virtio0: prox1:111/vm-111-disk-1.qcow2,format=qcow2,size=50G
vmstate: prox1:111/vm-111-state-update20140116.raw

[update20140120]
bootdisk: virtio0
cores: 2
ide2: nas:iso/zentyal-3.2-amd64.iso,media=cdrom
machine: pc-i440fx-1.4
memory: 10240
name: z1.nemzet.hu
net0: virtio=2A:21:A9:FF:AE:0F,bridge=vmbr0
net1: e1000=0E:24:F6:5C:3A:2A,bridge=vmbr1010
ostype: l26
parent: update20140116
snaptime: 1390227226
sockets: 2
virtio0: prox1:111/vm-111-disk-1.qcow2,format=qcow2,size=50G
vmstate: prox1:111/vm-111-state-update20140120.raw

[update20140130]
bootdisk: virtio0
cores: 2
ide2: nas:iso/zentyal-3.2-amd64.iso,media=cdrom
machine: pc-i440fx-1.4
memory: 10240
name: z1.nemzet.hu
net0: virtio=2A:21:A9:FF:AE:0F,bridge=vmbr0
net1: e1000=0E:24:F6:5C:3A:2A,bridge=vmbr1010
ostype: l26
parent: update20140120
snaptime: 1391096985
sockets: 2
virtio0: prox1:111/vm-111-disk-1.qcow2,format=qcow2,size=50G
vmstate: prox1:111/vm-111-state-update20140130.raw

[update20140204]
bootdisk: virtio0
cores: 2
ide2: nas:iso/zentyal-3.2-amd64.iso,media=cdrom
machine: pc-i440fx-1.4
memory: 10240
name: z1.nemzet.hu
net0: virtio=2A:21:A9:FF:AE:0F,bridge=vmbr0
net1: e1000=0E:24:F6:5C:3A:2A,bridge=vmbr1010
ostype: l26
parent: update20140130
snaptime: 1391525701
sockets: 2
virtio0: prox1:111/vm-111-disk-1.qcow2,format=qcow2,size=50G
vmstate: prox1:111/vm-111-state-update20140204.raw
root@p1:/etc/pve/qemu-server#
 
Hmm,
are you 100% sure the nodes have identical packages and a compatible setup, including access to the external/network storage (nas),
e.g. virtio0: prox1: ?

Compare with the VMs that do migrate.

By the way, I have never seen a .conf like this on my nodes, with all those [update20140204] and similar sections... could that be part of the problem?
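If those sections are snapshots (the snaptime:/vmstate: lines look like it), something like this should at least list them so you can compare with a VM that migrates fine; just a guess on my side:

Code:
qm listsnapshot 111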

Marco
 
