How to fix high CPU usage in KVM virtual machines

falves1

Well-Known Member
Jan 11, 2009
99
3
48
Last edited:
It has been fixed in the latest Proxmox updates with pve-qemu 2.12 (just stop/start the VM after the update), using the Hyper-V hv_synic and hv_stimer enlightenments.

/usr/share/perl5/PVE/QemuServer.pm

if (qemu_machine_feature_enabled ($machine_type, $kvmver, 2, 12)) {
    push @$cpuFlags , 'hv_synic';
    push @$cpuFlags , 'hv_stimer';
}
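
A quick way to confirm the flags are actually active is to compare what PVE would start the VM with against what the running process was started with (a sketch; 104 is just an example VM ID, substitute your own):

Code:
# what PVE *would* start the VM with after the update
qm showcmd 104 | tr ',' '\n' | grep -E 'hv_synic|hv_stimer'

# what the *running* process was actually started with
# (the new flags only take effect after a full stop/start of the VM)
tr '\0' ',' < /proc/$(cat /var/run/qemu-server/104.pid)/cmdline | tr ',' '\n' | grep -E 'hv_synic|hv_stimer'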
 
I am fully updated and have a license, but as of last night it had not been updated.
How can I find out why I am missing this?
apt update
Get:1 http://security.debian.org stretch/updates InRelease [94.3 kB]
Ign:2 http://ftp.us.debian.org/debian stretch InRelease
Hit:3 https://packages.microsoft.com/debian/9/prod stretch InRelease
Hit:4 http://ftp.us.debian.org/debian stretch-updates InRelease
Hit:5 http://ftp.us.debian.org/debian stretch Release
Hit:6 http://mirrors.accretive-networks.net/mariadb/repo/10.3/debian stretch InRelease
Hit:7 https://enterprise.proxmox.com/debian/pve stretch InRelease
Get:8 http://security.debian.org stretch/updates/main amd64 Packages [459 kB]
Get:9 http://security.debian.org stretch/updates/contrib amd64 Packages [1760 B]
Fetched 555 kB in 1s (485 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
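
One thing worth remembering: apt update only refreshes the package lists, the packages themselves are installed by apt dist-upgrade. A minimal sketch for pulling in the update and checking that the installed QEMU build meets the 2.12 threshold from the snippet above:

Code:
# refresh the lists and install the pending pve-qemu-kvm / qemu-server updates
apt update && apt dist-upgrade

# the enlightenments need a QEMU build >= 2.12 (see the version check above)
pveversion -v | grep -E 'pve-qemu-kvm|qemu-server'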
 
pveversion -v
proxmox-ve: 5.4-1 (running kernel: 4.15.18-15-pve)
pve-manager: 5.4-6 (running version: 5.4-6/aa7856c5)
pve-kernel-4.15: 5.4-3
pve-kernel-4.15.18-15-pve: 4.15.18-40
pve-kernel-4.15.18-14-pve: 4.15.18-39
pve-kernel-4.15.18-13-pve: 4.15.18-37
pve-kernel-4.15.18-12-pve: 4.15.18-36
pve-kernel-4.15.18-11-pve: 4.15.18-34
pve-kernel-4.15.18-10-pve: 4.15.18-32
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-8-pve: 4.15.18-28
pve-kernel-4.15.18-7-pve: 4.15.18-27
pve-kernel-4.15.18-5-pve: 4.15.18-24
pve-kernel-4.15.18-4-pve: 4.15.18-23
pve-kernel-4.15.18-2-pve: 4.15.18-21
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-10
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-52
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-13
libpve-storage-perl: 5.0-43
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-3
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-37
pve-container: 2.0-39
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-6
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-2
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-52
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2
 
I checked and I have the fix, but CPU usage is still very high on otherwise idle virtual machines.
One of the VMs is Linux (a MikroTik virtual router, version 6, I guess).
 
Hello,

I managed to solve the problem. It is a known issue related to the graphics driver after the Windows 10 1903 update.
The issue manifests after initiating an RDP session and then exiting the session without logging the user out.

To replicate the issue:
  1. Create a Windows 10 1903 VM (CPU should be at ~0-1% idle)
  2. Access the VM via RDP connection
  3. Close the connection (CPU should be at ~20-30% idle, depending on the number of sessions and CPU settings)
Details: https://answers.microsoft.com/en-us...em-after/dbce0938-60c5-4051-81ef-468e51d743ab
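
To watch the effect from the Proxmox side while reproducing the steps above, you can simply follow the VM's QEMU process (a sketch; 104 is an example VM ID and the pidfile path is the standard qemu-server one):

Code:
# watch the VM's CPU from the host while you open and close the RDP session
top -p "$(cat /var/run/qemu-server/104.pid)"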

The solution

As a workaround, on all of my affected machines I used the Group Policy Editor to set:

Code:
Local Computer Policy
⌞ Computer Configuration
 ⌞ Administrative Templates
  ⌞ Windows Components
   ⌞ Remote Desktop Services
    ⌞ Remote Desktop Session Host
     ⌞ Remote Session Environment
      ⌞ Use WDDM graphics display driver for Remote Desktop Connections

to DISABLED

This forces RDP to use the old (and now deprecated) XDDM driver.
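
If the Group Policy Editor is not available on an affected machine, the same policy can normally be set directly in the registry. The value name below is my assumption of what that GPO writes (widely reported as fEnableWddmDriver), so please verify it before relying on it:

Code:
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services
 ⌞ fEnableWddmDriver (REG_DWORD) = 0   <- assumed value name; takes effect after a reboot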

After a reboot, idle CPU usage should return to the normal 0-1%.
 
I have a Windows Server 2016 VM and the fix does not work; I am still at 130%.
Kindly look at the attached image. This is so bad that I am looking for another solution. I have many Windows VMs.
 

Attachments

  • remote-desktop.png (499.7 KB)
@falves1
It may be the case that your issue is related to something else.
  1. Have you tried looking into the processes that consume the CPU?
  2. Is the usage happening during idling or all the time?
  3. Specifically, does it happen after you connect and disconnect via RDP, or is it unrelated to RDP?
 
That is the issue: the virtual machine is idle and nothing inside the guest is eating the CPU apart from Remote Desktop. This seems to be a KVM issue.


81239 root 20 0 32.468g 0.031t 9144 S 206.7 2.5 1742:51 /usr/bin/kvm -id 104 -name FedericoWindows2016
81239 ? Rl 1743:45 /usr/bin/kvm -id 104 -name FedericoWindows2016 -chardev socket,id=qmp,path=/var/run/qemu-server/104.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5 -mon chardev=qmp-event,mode=control -pidfile /var/run/qemu-server/104.pid -daemonize -smbios type=1,uuid=449f474c-c776-4c53-81a8-2a574b32283f -smp 2,sockets=1,cores=2,maxcpus=2 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg -vnc unix:/var/run/qemu-server/104.vnc,x509,password -no-hpet -cpu host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,hv_synic,hv_stimer -m 32000 -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -device VGA,id=vga,bus=pci.0,addr=0x2 -chardev socket,path=/var/run/qemu-server/104.qga,server,nowait,id=qga0 -device virtio-serial,id=qga0,bus=pci.0,addr=0x8 -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:f8f3ad9164fd -drive if=none,id=drive-ide0,media=cdrom,aio=threads -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=200 -drive file=/nfs1/images/104/vm-104-disk-1.qcow2,if=none,id=drive-virtio0,cache=writeback,format=qcow2,aio=threads,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -netdev type=tap,id=net0,ifname=tap104i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=D2:EC:32:08:27:57,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300 -rtc driftfix=slew,base=localtime -machine type=pc -global kvm-pit.lost_tick_policy=discard
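
One way to narrow this down from the host side (a sketch, using the PID 81239 shown above): break the kvm process down per thread, to see whether the time goes to the vCPU threads or to an I/O thread:

Code:
# per-thread view of the kvm process above
top -H -p 81239

# or, if the sysstat package is installed, a rolling per-thread sample every second
pidstat -t -p 81239 1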
 
