[SOLVED] VM not migrated automatically

Ahmed_F

New Member
Jun 5, 2017
Hello,

I have a new cluster of 3 nodes running version 4.4 and I am trying to get my VMs migrated automatically if a node goes down.
I am using Ceph as storage.
I simulated a kernel panic and my VMs didn't migrate.
Is there something special I have to do to enable automatic migration, please?
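For anyone landing on this thread: automatic failover in Proxmox VE only applies to VMs that are registered as HA resources. A minimal sketch using the ha-manager CLI (VM ID 100 is an example, not necessarily your VM):

```shell
# Register VM 100 as an HA resource so the HA stack will
# restart it on another node if its host fails (ID is an example)
ha-manager add vm:100

# Inspect HA resources and quorum state
ha-manager status
```

These commands need a quorate PVE cluster to do anything useful; the same registration can be done in the WebUI under Datacenter -> HA.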

Thank you,
 

Ahmed_F

New Member
Jun 5, 2017
Thanks mate! That helped me a lot.
But I noticed that when I add the VM to HA it is not able to start; I get these logs:

Code:
task started by HA resource agent
TASK ERROR: start failed: command '/usr/bin/kvm -id 100 -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/100.pid -daemonize -smbios 'type=1,uuid=b7b23152-f1b8-4222-81f3-f4cb898bc262' -name Debian -smp '1,sockets=1,cores=1,maxcpus=1' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga cirrus -vnc unix:/var/run/qemu-server/100.vnc,x509,password -cpu qemu64 -m 'size=1024,slots=255,maxmem=4194304M' -object 'memory-backend-ram,id=ram-node0,size=1024M' -numa 'node,nodeid=0,cpus=0,memdev=ram-node0' -k fr -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:53fdfc6de0' -drive 'if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=rbd:VMs/vm-100-disk-1:mon_host=10.0.0.10;10.0.0.11:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/vms_storage.keyring,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=82:5D:01:C0:51:8E,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -machine 'accel=tcg'' failed: got timeout


Code:
TASK ERROR: VM 100 qmp command 'system_reset' failed - unable to connect to VM 100 qmp socket - timeout after 31 retries


If I remove it from HA, it starts normally.

Any ideas, please?

(Sorry, I am just starting with Proxmox.)
 

fortechitsolutions

Well-Known Member
Jun 4, 2008
Hi, just to confirm:

- Does live migration work as expected as a starting baseline?
- Is the Ceph storage pool tagged as type 'shared' in the storage manager WebUI?

In my experience, if you start with a cluster of 3 nodes and live migration works as expected as a baseline, then HA works with very little extra effort.
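The second check can also be done from the shell by looking at /etc/pve/storage.cfg. A sketch against a temporary copy of the file so it can run anywhere; the storage ID, pool and monitor addresses below are taken from the kvm command line posted above, but the exact contents of your storage.cfg will differ:

```shell
# Hypothetical excerpt of /etc/pve/storage.cfg for the RBD storage
# seen in the failing kvm command (pool VMs, mons 10.0.0.10/11).
cat > /tmp/storage.cfg <<'EOF'
rbd: vms_storage
        pool VMs
        monhost 10.0.0.10 10.0.0.11
        content images
        shared 1
EOF

# Non-zero exit (and no output) if the 'shared' flag is missing:
grep -q '^[[:space:]]*shared 1' /tmp/storage.cfg && echo "vms_storage is shared"
```

HA and migration only make sense for storage every node can reach, which is what the `shared` flag asserts.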


Tim
 

Ahmed_F

New Member
Jun 5, 2017
I just noticed that one of my hypervisors was not correctly configured with Ceph.
I fixed that issue and now my VM gets migrated as soon as the hypervisor goes down.
Thank you for your help!
 
