High CPU load for Windows 10 Guests when idle

Jun 13, 2018
Hi,

I have some Windows 10 1803 guests on my clusters. They're totally idle (according to the Windows Task Manager), but on the host side they consume a lot of CPU, between 6% and 15%.

All the Windows installations are fresh, with the tablet device disabled and the balloon and QEMU guest agent drivers installed. I've since disabled all those options (ballooning, guest agent, ...), but the problem persists.

I don't understand why idle Windows VMs consume so much CPU. The problem is not present on Linux guests.
 
I've never found a solution for this either. My Win10 VMs report 1-2% CPU usage when idle; the host reports the VM's usage as never less than 15%, bouncing as high as 22%, for the same 1-2% guest usage.
 
I have the same problem and a solution (for me).
I posted it in the German subforum: https://forum.proxmox.com/threads/windows-10-hohe-idle-cpu-auslastung.44530/

My Win10 guest already has ~16% CPU load at idle.
I changed the OS type to "Other" and now it has ~3% idle load.
These KVM options are dropped when the OS type is set to "Other":
Code:
-no-hpet
driftfix=slew
-global kvm-pit.lost_tick_policy=discard

You can test this and report back.
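On Proxmox, `qm showcmd <vmid>` prints the full KVM command line, which makes it easy to check whether these flags are present for a given OS type. As a self-contained illustration (the command line below is an abbreviated stand-in, not output from a real host):

```shell
# Abbreviated stand-in for `qm showcmd <vmid>` output on a win10-type VM;
# with OS type "Other", the timer-related flags below disappear.
cmdline='-no-hpet -rtc driftfix=slew,base=localtime -global kvm-pit.lost_tick_policy=discard'
echo "$cmdline" | tr ' ' '\n' | grep -E 'hpet|driftfix|kvm-pit'
```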
 
Yes that is it! I had never seen this suggestion; thanks loomes!

Like the OP points out, the only suggestions I had found were removing the balloon, agent, and QEMU tablet devices, with no success.

With balloon, agent, and QEMU tablet enabled and the OS type changed to "Other", my CPU usage is much more accurate: the host reports less than 3% when idle.

Perfect!
 
Thanks for your replies !

I've tested the 'Other OS' option. It works for a couple of Win10 1803 virtual machines, but not all.

I'll investigate why.

But my first conclusions are:
  • Windows sucks ^_^.
  • Windows lies about its real CPU usage.
 
Hmmm, after some tests, the 'Other OS' option doesn't seem to be working after all ...

I have many Windows 10 1803 guests that consume a lot of CPU on the PVE hosts ...

Arghhhh
 
Just wanted to let you know that for me, HPET fixed it.

Even though I'm not directly running Proxmox, it seems like an appropriate fix for several affected systems.

I have two similar, if not identical, Debian systems running, both among other things running Win10 VMs.
After setting hpet to 'yes', the CPU idle load went back to pre-1803 levels.

My current libvirt domain XML (excerpt):
Code:

<clock offset='localtime'>
  <timer name='rtc' tickpolicy='catchup'/>
  <timer name='pit' tickpolicy='delay'/>
  <timer name='hpet' present='yes'/>
  <timer name='hypervclock' present='yes'/>
</clock>
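On a plain libvirt host this is changed with `virsh edit <domain>`. To sanity-check the resulting XML without touching a live domain, a grep over the saved excerpt works (the scratch file path here is just an example):

```shell
# Save the <clock> excerpt from this post to a scratch file and
# confirm the hpet timer is set to present='yes'
cat > /tmp/clock.xml <<'EOF'
<clock offset='localtime'>
  <timer name='rtc' tickpolicy='catchup'/>
  <timer name='pit' tickpolicy='delay'/>
  <timer name='hpet' present='yes'/>
  <timer name='hypervclock' present='yes'/>
</clock>
EOF
grep -o "name='hpet' present='[a-z]*'" /tmp/clock.xml
```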
 
I would like to try this patch out and give some feedback for our network, but which file exactly do you insert this into? I don't think it's the qm.conf file. To add a little more: while the idle CPU usage is high, which makes everything on the VMs much slower than it should be, the primary issue our users are having is opening or saving files on mapped network drives through applications. For example, when saving a PDF from Outlook to a mapped network drive, the Explorer window may lock up or grey out for several seconds between browsing to the location, naming the file, and finally saving it. None of the 1709 machines have this issue.
 
Just wanted to let you know that for me, hpet fixed it.

Interesting.

Proxmox uses -no-hpet when the Windows OS is 2008/Vista or newer.

If somebody can test by editing

/usr/share/perl5/PVE/QemuServer.pm

and commenting out the -no-hpet line:

Code:
if ($winversion >= 6) {
    push @$globalFlags, 'kvm-pit.lost_tick_policy=discard';
    # push @$cmd, '-no-hpet';
}

then

Code:
systemctl reload pvedaemon

and stop/start the VM, to see if it reduces CPU load.
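A sketch of the same comment-out applied with sed to a scratch copy of the relevant lines (illustrative only; edit the real /usr/share/perl5/PVE/QemuServer.pm by hand and keep a backup, since package updates will overwrite it):

```shell
# Scratch copy of the relevant QemuServer.pm lines, before the edit
cat > /tmp/snippet.pm <<'EOF'
if ($winversion >= 6) {
    push @$globalFlags, 'kvm-pit.lost_tick_policy=discard';
    push @$cmd, '-no-hpet';
}
EOF
# Comment out only the -no-hpet line, as suggested in the post
sed -i "/-no-hpet/ s|push|# push|" /tmp/snippet.pm
grep 'no-hpet' /tmp/snippet.pm
```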


Edit:
I found a thread on the web with a reference to this forum post:
https://askubuntu.com/questions/103...after-upgrading-vm-to-windows-10-1803/1047397

It seems this helps other users too.
 
@spirit

Thanks for the tip, it works perfectly.
I commented out the lines, switched the machine back to OS = win10, and all is good.
Idle load after boot, without login: 0.77%.
 
@loomes

can you try another thing:

keep -no-hpet, and change this (same QemuServer.pm file)

Code:
if ($winversion >= 7) {
    push @$cpuFlags , 'hv_relaxed';
}

to

Code:
if ($winversion >= 7) {
    push @$cpuFlags , 'hv_relaxed';
    push @$cpuFlags , 'hv_synic';
    push @$cpuFlags , 'hv_stimer';
}

then restart pvedaemon and stop/start the VM.
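After the stop/start, the new flags should appear in the -cpu argument of the running kvm process (visible with `ps` or `qm showcmd` on a real host). As a self-contained stand-in, parsing an abbreviated saved command line:

```shell
# Abbreviated command line as it should look after the change:
# -no-hpet kept, Hyper-V enlightenments extended
cmdline='-no-hpet -cpu host,hv_relaxed,hv_synic,hv_stimer -m 4096'
echo "$cmdline" | grep -o 'hv_[a-z]*' | sort -u
```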
 
@spirit

The second one also works :)
The idle load is even better: 0.56%.
Code:
/usr/bin/kvm -id 120 -name win10 -chardev socket,id=qmp,path=/var/run/qemu-server/120.qmp,server,nowait -mon chardev=qmp,mode=control -pidfile /var/run/qemu-server/120.pid -daemonize -smbios type=1,uuid=aca1c83e-b6e5-46a3-9f9a-ca7e3b124885 -smp 4,sockets=1,cores=4,maxcpus=4 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg -vga qxl -vnc unix:/var/run/qemu-server/120.vnc,x509,password -no-hpet -cpu host,+pcid,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,hv_synic,hv_stimer -m 4096 -k de -device vfio-pci,sysfsdev=/sys/bus/pci/devices/0000:00:02.0/b3dfe34f-0fdf-4321-8d03-42c4267ab5e1 -device intel-hda,id=sound5,bus=pci.0,addr=0x18 -device hda-micro,id=sound5-codec0,bus=sound5.0,cad=0 -device hda-duplex,id=sound5-codec1,bus=sound5.0,cad=1 -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -chardev socket,path=/var/run/qemu-server/120.qga,server,nowait,id=qga0 -device virtio-serial,id=qga0,bus=pci.0,addr=0x8 -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0 -spice tls-port=61000,addr=127.0.0.1,tls-ciphers=HIGH,seamless-migration=on -device virtio-serial,id=spice,bus=pci.0,addr=0x9 -chardev spicevmc,id=vdagent,name=vdagent -device virtserialport,chardev=vdagent,name=com.redhat.spice.0 -iscsi initiator-name=iqn.1993-08.org.debian:01:c24f787f2cb8 -device virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5 -drive file=/dev/pve/vm-120-disk-1,if=none,id=drive-scsi0,cache=writeback,discard=on,format=raw,aio=threads,detect-zeroes=unmap -device scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100 -netdev type=tap,id=net0,ifname=tap120i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device 
virtio-net-pci,mac=66:6E:FC:77:C9:85,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300 -rtc driftfix=slew,base=localtime -global kvm-pit.lost_tick_policy=discard
 
@spirit

The second one also works :)
The idle load is even better: 0.56%.

OK, great! Thanks for testing.

I think it's better this way, keeping HPET disabled (we had time drift problems with HPET in the past).
I'll contact the Proxmox team to see how we can implement this.
 

I can also confirm the lower idle load.
 

The hv_synic/hv_stimer change from post #11 did it for me as well, with the latest updates installed.

Thanks!

EDIT:
I was also forced to comment out '-no-hpet':

Code:
if ($winversion >= 6) {
    push @$globalFlags, 'kvm-pit.lost_tick_policy=discard';
    # push @$cmd, '-no-hpet';
}
 
But my first conclusions are:
  • Windows sucks ^_^.
  • Windows lies about the real CPU usage.


Yes, try monitoring any Windows machine with SNMP, and SNMP will tell another story about CPU usage.
 
I can confirm spirit's solution as well. All Windows 10 1803 clients drop below 1% CPU load.
@proxmox team: are there any drawbacks? Is this a candidate for a bugfix release?
 
Just chiming in that I've been seeing this issue as well. I've also been experiencing 99% CPU usage on a Windows 8.1 guest, but I get the feeling that might be a different issue, so I'll open a separate thread, since changing the OS type to "Other" didn't make a difference there.
 
@spirit I can confirm, following post #9, that for Windows 10 the CPU load shown in the Proxmox GUI was nearly the same as in the guest.
Without it, I had 40-50% CPU load in the Proxmox GUI while the guest was idle.
I also tested with Windows 7, but haven't seen a clear mismatch with or without the adjustment in the script.

I also tested the adjustments from post #11 separately, with a similar result, possibly a bit lower load.
 
The settings from post #11 are implemented in qemu-server (5.0-35), but are not yet used.
It seems we will have to wait until QEMU 3.0 arrives in Proxmox.
Probably more testing needs to be done.

From the changelog:
* add hv_synic and hv_stimer HyperV Enlightenment flags on QEMU 3.0 and later
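The gating described in the changelog can be sketched as a version check (the version string here is a stand-in; on a real host it would come from `kvm --version` or `pveversion -v`):

```shell
# Enable the new enlightenment flags only on QEMU 3.0 or later
qemu_ver='3.0'   # stand-in value, not read from a real host
lowest=$(printf '%s\n' '3.0' "$qemu_ver" | sort -V | head -n1)
if [ "$lowest" = '3.0' ]; then
  echo 'add hv_synic,hv_stimer'
fi
```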
 