Hi all,
For future reference: I found this thread:
http://forum.proxmox.com/threads/13137-High-CPU-use-%2818-%29-by-idle-KVMs
This seems to have fixed my issue (in VM config file):
Best regards,
Koen
There is still quite a bit of fluctuation when using multiple cores, but not with 1 core.
root@br-app7:~# ping 10.10.0.10
PING 10.10.0.10 (10.10.0.10) 56(84) bytes of data.
64 bytes from 10.10.0.10: icmp_req=1 ttl=64 time=0.302 ms
64 bytes from 10.10.0.10: icmp_req=2 ttl=64 time=0.227 ms
64 bytes from...
This seems to help. There is much less fluctuation in the latency when using 1 CPU, 1 core. That said, it's still not as stable as the others (deviation of +/- 1.0 ms versus +/- 0.2 ms on the others).
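For completeness, the relevant lines in the VM config (/etc/pve/qemu-server/<vmid>.conf, where <vmid> is just a placeholder for your VM's ID) should look something like this:

sockets: 1
cores: 1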
Hi,
Thanks for the quick reply. Unfortunately this didn't seem to help much:
Good VM:
root@br-app2:~# uname -a
Linux br-app2 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux
root@br-app2:~# ping -c8 10.10.0.1
PING 10.10.0.1 (10.10.0.1) 56(84) bytes of data.
64 bytes from...
Hi,
I'm currently facing some strange network latency issues on 2 of my Proxmox hosts. My current setup:
Cluster1: 5 hosts, latest stable PVE
Cluster2: 2 hosts, 1 latest stable, 1 latest PVEtest (upgraded to test repo to see if the problem went away, no luck)
On these clusters I have, among...
When you say 'If HA migrates', does that mean you simulate a failure of the host where the VM is running? If so, it might be different because the VM is actually started again on the new node (not live migration). Can you confirm?
/K
Hi,
I had something 'similar' with Windows 2008 VMs a while back. I had to change the display adapter to type "Cirrus Logic"; that was the only way I could stop the VMs from rebooting.
The problem manifested mostly when trying to access the console, but there were also random Windows reboots every...
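If memory serves, the corresponding line in the VM config file is something like:

vga: cirrus

(or pick "Cirrus Logic" as the display type in the web GUI).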
Consider it tested: I just live migrated about 40 VMs away from beta3 nodes to rc1 nodes in order to complete the upgrades. No issues encountered (I just commented out the kill line, line 51 in QemuMigrate.pm). No reboots were necessary.
/K
No luck, neither with the host nor the Westmere CPU model. AES is now enabled, so that's not the culprit. Also, this was tested on a fully upgraded server (aptitude update && aptitude dist-upgrade).
Edit: just noticed something else: I get this on the console of the server:
kvm: 6236: cpu0 unhandled wrmsr...
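For the record, switching CPU models is done via the 'cpu' line in the VM config file; what I tried looked roughly like this:

cpu: host

and alternatively:

cpu: Westmere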
Duly noted; that was obvious from reading the entire thread as well.
I'm simply asking if anybody knows whether the provided workaround can work without rebooting the node or the VMs on it.
/K
Your point being?
I was merely pointing out that live migration is mostly used to do upgrades without bringing down VMs, so telling me that you shouldn't (and even can't) use it for this purpose is simply bollocks, even in a beta. The source of my issue is clearly the ssh...
So my question remains: is there any way to avoid rebooting the node to make the changes to the perl files active? Restart of some service, perhaps?
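Pure guesswork on my part, but assuming the perl modules are loaded by the pve daemons, I imagine something like this might do it:

/etc/init.d/pvedaemon restart

No idea whether that picks up the modified QemuMigrate.pm without a reboot though, hence the question.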
Also, another thought relating to Udo's post: what's the purpose of live migration if not disruption-free upgrades?
Cheers,
Koen
I saw the same thing with the usermod error, then switched to Proxmox VE authentication.
Didn't have to do the NFS stuff though (beta3 installed through ISO).
/K
I remember reading a thread about SSH transfer speeds and an AES cipher to speed that up, but you need the right processor for that, plus a patched version of SSH and a library, if I recall correctly.
If your CPU doesn't have AES, it's normal that it BSODs. My CPUs, however, do support AES (E5645)...
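By the way, a quick way to check whether a CPU actually exposes the aes flag (run it on the host, and inside the VM to see what the chosen CPU model passes through):

grep -o aes /proc/cpuinfo | head -1

If it prints 'aes' the flag is there; no output means no AES-NI.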