[SOLVED] TASK ERROR: migration problems

omegared77

New Member
Feb 4, 2019
16
0
1
46
Good morning,

I can't migrate a VM from one host to another while it's powered on (migrating a powered-off VM works fine).
Here's the log:

2019-06-03 11:17:49 starting migration of VM 114 to node 'btz-pve2103' (192.168.32.62)
2019-06-03 11:17:49 copying disk images
2019-06-03 11:17:49 starting VM 114 on remote node 'btz-pve2103'
2019-06-03 11:17:50 start failed: command '/usr/bin/kvm -id 114 -name nagios -chardev 'socket,id=qmp,path=/var/run/qemu-server/114.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/114.pid -daemonize -smbios 'type=1,uuid=062ae971-41ad-4222-8428-c12c4abe9d5a' -smp '2,sockets=2,cores=1,maxcpus=2' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc unix:/var/run/qemu-server/114.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 2048 -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'vmgenid,guid=8ca7bdcc-75a2-4768-958f-12b70133e35b' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'VGA,id=vga,bus=pci.0,addr=0x2' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:b43c54c174c7' -drive 'if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/dev/VG-VM/vm-114-disk-0,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap114i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown' -device 'e1000,mac=A2:56:1F:BD:4B:CA,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -machine 'type=pc-i440fx-3.0' -incoming unix:/run/qemu-server/114.migrate -S' failed: exit code 1
2019-06-03 11:17:50 ERROR: online migrate failure - command '/usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=btz-pve2103' root@192.168.32.62 qm start 114 --skiplock --migratedfrom btz-pve2301 --migration_type secure --stateuri unix --machine pc-i440fx-3.0' failed: exit code 255
2019-06-03 11:17:50 aborting phase 2 - cleanup resources
2019-06-03 11:17:50 migrate_cancel
2019-06-03 11:17:50 ERROR: migration finished with problems (duration 00:00:02)
TASK ERROR: migration problems
 
Have you marked your storage as 'shared' even though it is not actually shared? Please post your storage config (/etc/pve/storage.cfg) and 'pveversion -v'.
 
Mira,

I never subscribed.

My storage is shared:
(screenshot attached: upload_2019-6-3_11-41-58.png)

I have 4 nodes, and the issue only occurs on this one.
 
Sorry for the late reply.
Here's the information:

cat /etc/pve/storage.cfg
dir: local
	path /var/lib/vz
	content vztmpl,rootdir,iso,backup,images
	maxfiles 10
	shared 0

iscsi: MSA1050
	portal 192.168.10.20
	target iqn.2015-11.com.hpe:storage.msa1050.18103c0a59
	content none

lvm: VM
	vgname VG-VM
	base MSA1050:0.0.0.scsi-3600c0ff0003be549f1358a5c01000000
	content images,rootdir
	shared 1

pveversion -v
proxmox-ve: 5.4-1 (running kernel: 4.15.18-11-pve)
pve-manager: 5.4-5 (running version: 5.4-5/c6fdb264)
pve-kernel-4.15: 5.4-2
pve-kernel-4.15.18-14-pve: 4.15.18-39
pve-kernel-4.15.18-11-pve: 4.15.18-34
pve-kernel-4.15.18-9-pve: 4.15.18-30
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-9
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-51
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-13
libpve-storage-perl: 5.0-42
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
openvswitch-switch: 2.7.0-3
proxmox-widget-toolkit: 1.0-26
pve-cluster: 5.0-37
pve-container: 2.0-37
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-20
pve-firmware: 2.0-6
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-2
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-51
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2
 
Are all nodes running the exact same version? (same 'pveversion -v' output)
Can you also post the 'start log' on the target node when it fails? It might contain some more information as to why it can't start it there.
 
I currently have 3 hosts
Host1 : pve-manager/5.3-5
Host2 : pve-manager/5.3-11
Host3 : pve-manager/5.4-5

I can't migrate running VMs from host3 to any other host.

Today I noticed that the following worked:
- stop a VM on host3
- migrate it to host1 -> OK
- start it on host1 -> OK
- migrate it back to host3 -> OK

Now I can migrate it from host3 to any host!
Is it something like I had to stop the VM to apply a change in its config?
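The stop/migrate/start workaround above could be scripted roughly like this (just a sketch: the VM ID, node name, and the `offline_migrate` helper are hypothetical examples, and `qm` is assumed to run on the source node):

```shell
#!/bin/sh
# Sketch of the stop/migrate/start workaround. VM ID and target node
# name are examples; ssh is used only to start the VM on the target.
offline_migrate() {
    vmid=$1 target=$2
    qm stop "$vmid"                       # power the VM off first
    qm migrate "$vmid" "$target"          # offline migration succeeds
    ssh "root@$target" qm start "$vmid"   # start it on the new node
}
```

e.g. `offline_migrate 114 host1`.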
 
That is to be expected. We don't support migrating from a newer version to an older one.
Please keep the versions in sync in a cluster.
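Before a live migration you could check that the source node is not newer than the target, e.g. with a small version comparison (a sketch; the version strings are examples from this thread, and GNU `sort -V` is assumed to be available):

```shell
#!/bin/sh
# Returns 0 (true) if migrating from version $1 to version $2 would go
# from a newer to an older release, which live migration doesn't support.
newer_to_older() {
    src=$1 dst=$2
    [ "$src" != "$dst" ] && \
        [ "$(printf '%s\n%s\n' "$src" "$dst" | sort -V | tail -n1)" = "$src" ]
}
```

e.g. `newer_to_older 5.4-5 5.3-11` succeeds (unsupported direction), while `newer_to_older 5.3-5 5.4-5` fails (older to newer is fine).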
 
Ok, that makes sense.
It means that any time I update a host, I have to empty it first, then migrate the VMs back once it's updated.
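That drain-before-update workflow could be sketched as follows (assumptions: the `drain_node` helper and spare node name are hypothetical, the script runs on the node being updated, and `qm list` prints the VM status in its third column):

```shell
#!/bin/sh
# Sketch: migrate every running VM off this node before updating it.
# SPARE is an example node name that must have room for the VMs.
SPARE=host1

drain_node() {
    # Pick the running VMs out of 'qm list' and live-migrate each away.
    qm list | awk '$3 == "running" {print $1}' | while read -r vmid; do
        qm migrate "$vmid" "$SPARE" --online
    done
}
```

After `drain_node` finishes, the node can be updated and rebooted, and the VMs migrated back (older to newer works, as noted above).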