		Frido Roose
Guest
I'll start a new thread about online migration, because I think it is not related to http://forum.proxmox.com/threads/7213-Live-migration-reliability
After adding the sleep as a workaround for the 'stopped' VM after an online migration, there are still conditions where the VM does not recover from a migration.
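(For context, the sleep workaround amounts to roughly this: waiting a few seconds after the migration before trusting the reported VM state. The VMID and target node here are made up, and the exact placement of the sleep may differ:)
Code:
# rough sketch of the workaround; 101 and node2 are just example values
qm migrate 101 node2 --online
sleep 5
qm status 101    # without the sleep, this sometimes reported 'stopped'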
In this case, the VM's CPU usage goes up to 100% (on every core, for both single-core and multi-core VMs).
On the host, the kvm process also uses 100% CPU:
Code:
    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
   3683 root      20   0  654m 150m 2176 R 99.9  1.9   1:39.51 kvm
   3669 root      20   0  654m 150m 2176 S  3.0  1.9   0:03.57 kvm
I did a strace on the process while this was happening, and only got this output:
Code:
root@yamu:~# strace -p 3683
Process 3683 attached - interrupt to quit
rt_sigtimedwait([BUS RT_6], 0x7ff4e4f87dc0, {0, 0}, 8) = -1 EAGAIN (Resource temporarily unavailable)
rt_sigpending([])                       = 0
ioctl(18, KVM_RUN^C <unfinished ...>
Process 3683 detached
It's not always reproducible, but I have the impression that putting IO load in the guest (like running fio or mysql sysbench) increases the chances for this to happen.
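For anyone trying to reproduce this: the guest IO load I mean is nothing special, something along these lines (all parameters here are just examples, and the sysbench credentials are made up):
Code:
# random-write load with fio inside the guest
fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite \
    --bs=4k --size=1G --numjobs=4 --runtime=60 --time_based

# or a mysql sysbench OLTP run (sysbench 0.4 syntax)
sysbench --test=oltp --mysql-user=root --mysql-password=secret --oltp-table-size=1000000 prepare
sysbench --test=oltp --mysql-user=root --mysql-password=secret --num-threads=8 --max-time=60 run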
And in all honesty, I think OpenVZ is still faster than KVM (of course, only if you do not require full virtualization). As a drawback, there is the possibility that doing something in a container will send your kernel into a panic, but these bugs are finally getting fixed, so I don't have any crashes with the latest Proxmox kernel (yes, I know, there is one issue still left, but it does not affect me right now, thanks devil).