High CPU load for Windows 10 Guests when idle

Perhaps a silly question, but would it be advisable to simply disable HPET in the BIOS of the host until this is fixed?
 
Update: out of curiosity, it looks like you can't disable HPET on modern HP business-grade hardware, so there's that. Will the workarounds in this thread potentially interfere with future fixes? Also, has anyone tried Windows 10 1809 yet?
 
On an HP DL120 G7 (E3-1220) running Proxmox 5.2-9, my idling 2-core Windows 10 Pro 1803 VM gets:
- ~15% CPU usage with OS type Windows 10/2016
- ~3% CPU usage with OS type Other
- ~1-1.5% CPU usage with OS type Windows 10/2016 and no-hpet commented out (post #9)
- ~0.8-1% CPU usage with OS type Windows 10/2016 and the hv_synic and hv_stimer flags (post #11)

No drawback as far as I can tell.

Is there any reason why Proxmox has not yet implemented the patch from post #11?
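As an aside: to check which flags a given VM is actually started with, you can dump the generated KVM command line on the host and look for -no-hpet and the hv_* CPU flags (VM ID 100 below is just an example):

Code:
# print the full KVM command line Proxmox generates for VM 100
qm showcmd 100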
 
@loomes

Can you try another thing:

Keep no-hpet, and change this (in the same QemuServer.pm file):

Code:
if ($winversion >= 7) {
    push @$cpuFlags , 'hv_relaxed';
}

to

Code:
if ($winversion >= 7) {
    push @$cpuFlags , 'hv_relaxed';
    push @$cpuFlags , 'hv_synic';
    push @$cpuFlags , 'hv_stimer';
}

Then restart pvedaemon and stop/start the VM.
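For anyone unsure about those last steps, this is roughly what they look like on the host shell (VM ID 100 is just an example):

Code:
# reload the patched QemuServer.pm
systemctl restart pvedaemon
# a full stop/start is required; a reboot from inside the guest keeps the old QEMU process and flags
qm stop 100
qm start 100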

This worked for me too! Win10 VM on idle was averaging 14% CPU, dropped down to 2%. Thanks!
 
Hi all,
just reporting a similar case with Windows 10 guests that I solved recently.
The win10 guests suddenly started eating 100% of one of the assigned cores as soon as the user closed the Remote Desktop *window*, and stayed in that condition until the next RDP logon.
If instead the user "disconnects" from the Windows session, the RDP client will close and the guest will consume no extra CPU.

Assuming you're using the Remote Desktop client, you may want to double-check how you leave your RDP session. If you simply click the "X" button on the RDP window, chances are you are triggering that weird behavior.
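For what it's worth, two standard Windows commands that end or detach the session cleanly from inside the guest, instead of closing the RDP window (run them in the remote session):

Code:
:: disconnect the current session but keep it running (same as Start > Disconnect)
tsdiscon
:: or log the user off completely
logoff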

My 2 cents.
Regards,
Marco
 
Hi all,

we too had the problem with a Win 10 installation going to 100% CPU a while after the user logged out of RDP. We solved it by updating the VirtIO drivers from virtio-win-0.1.171.iso; before that we had the drivers from virtio-win-0.1.141.iso installed.
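In case it helps with comparing versions: inside the guest, PowerShell can list the VirtIO driver versions that are actually installed. This is just a sketch; the name filter may need adjusting for your devices:

Code:
# list installed drivers whose device names mention VirtIO or Red Hat, with version and date
Get-WmiObject Win32_PnPSignedDriver |
  Where-Object { $_.DeviceName -match 'VirtIO|Red Hat' } |
  Select-Object DeviceName, DriverVersion, DriverDate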

Regards,
Thomas
 
Hi Thomas,
thanks for the info, I will double-check that too. I upgraded to virtio-win-0.1.172.iso (from the Fedora releases site) from the former 0.1.160, but things didn't change. My procedure was to upgrade all the drivers from the Windows 10 Device Manager; maybe I missed one of the VirtIO drivers.

Regards,
Marco
 
Hello,

Managed to solve the very same problem. It's a known issue related to the graphics driver after the 1903 update.
The issue manifests after initiating an RDP session and then closing the session without logging the user out.

To replicate the issue:
  1. Create a Windows 10 1903 VM (CPU should be at ~0-1% idle)
  2. Access the VM via RDP connection
  3. Close the connection (CPU should be at ~20-30% idle, depending on the number of sessions and CPU settings)
Details: https://answers.microsoft.com/en-us...em-after/dbce0938-60c5-4051-81ef-468e51d743ab

The solution

As a workaround on all of my affected machines I have used Group Policy Editor to set:

Code:
Local Computer Policy
⌞ Computer Configuration
 ⌞ Administrative Templates
  ⌞ Windows Components
    ⌞ Remote Desktop Services
    ⌞ Remote Desktop Session Host
     ⌞ Remote Session Environment
      ⌞ Use WDDM graphics display driver for Remote Desktop Connections

to DISABLED

This forces RDP to use the old (and now deprecated) XDDM drivers.

After a reboot, the idle CPU usage should go back to the normal 0-1%.
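If you would rather script this than click through gpedit (e.g. for several VMs), the same policy should map to the registry value below. The value name is from memory, so please verify it against your Group Policy settings before rolling it out:

Code:
:: set "Use WDDM graphics display driver for Remote Desktop Connections" to Disabled, then reboot
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v fEnableWddmDriver /t REG_DWORD /d 0 /f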
 
FYI:
pve-manager/6.1-5/9bf06119

By default, Windows Server 2019 (OS type set to Windows 10/2016/2019) starts with -no-hpet and hv_relaxed,hv_synic,hv_stimer, and the host's load while the VM is idling is high. Disabling no-hpet (https://forum.proxmox.com/threads/high-cpu-load-for-windows-10-guests-when-idle.44531/post-213876) does help.
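For anyone looking for the exact spot: the Windows-specific block in QemuServer.pm looks roughly like this (paraphrased from memory, details differ between PVE versions), and commenting out the -no-hpet line is what the linked post suggests:

Code:
if ($winversion >= 6) {
    push @$globalFlags, 'kvm-pit.lost_tick_policy=discard';
    push @$cmd, '-no-hpet';   # comment this out to hand the guest an HPET again
}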

Update: it helped, but not that much. Here are some screenshots:
1. Windows Server 2019, default (no-hpet): [screenshot]
2. Windows Server 2019, no-hpet commented out: [screenshot]

For comparison:
3. Windows 7, default (no-hpet): [screenshot]
4. Windows 7, no-hpet commented out: [screenshot]

Windows Server 2012 has no issues with no-hpet enabled: [screenshot]
 
@moeff , after some research I've created one more Windows Server 2019 VM from scratch, this time using the following hardware options:
SCSI Controller: VirtIO SCSI
Hard disk: VirtIO Block
Network: e1000 (kept the same as in the previous setup)

And it seems to perform much better.
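For reference, roughly the equivalent CLI for those hardware choices on an existing VM (VM ID 100, storage local-lvm and bridge vmbr0 are placeholders; I used the GUI myself):

Code:
# VirtIO SCSI controller, a new VirtIO Block disk (32 GB here) and an e1000 NIC
qm set 100 --scsihw virtio-scsi-pci
qm set 100 --virtio0 local-lvm:32
qm set 100 --net0 e1000,bridge=vmbr0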
 
I also had this issue on PVE 6.2.
I realized it happens only on Windows 2016, not 2019, and only when I set my CPU type to "host" and try to distribute the cores across more than one CPU socket.
For example, I had this setting on one of my machines:

CPU type: host
Sockets: 2
Cores: 3
Total cores available to the VM: 6

I constantly had 20-30% CPU usage overhead from "System" inside Windows.

As soon as I changed the CPU type to the default kvm64, or changed the sockets to 1 and assigned all cores from a single socket, the issue went away.
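In CLI terms, the fix amounts to something like this (VM ID 100 is just an example; the same can be done in the GUI under Hardware > Processors):

Code:
# either fall back to the default kvm64 CPU type ...
qm set 100 --cpu kvm64
# ... or keep CPU type host but put all 6 cores on a single socket
qm set 100 --sockets 1 --cores 6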
 
I have the same problem and a solution (for me).
I posted it in the German subforum: https://forum.proxmox.com/threads/windows-10-hohe-idle-cpu-auslastung.44530/

My Win10 guest had ~16% CPU load at idle.
I changed the OS type to "other" and now it has ~3% idle load.
These KVM options are dropped when the OS type is set to "other":
-no-hpet
driftfix=slew
-global kvm-pit.lost_tick_policy=discard

You can test this and report back.
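For anyone wondering where to change it: the OS type is under the VM's Options tab in the GUI, or from the CLI (VM ID 100 is just an example; stop/start the VM afterwards so the changed flags take effect):

Code:
qm set 100 --ostype other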
I am new to Proxmox and have had the same problem for weeks now. How did you change the OS type to "other"? Can you post the solution in English, please?
 
I also had this issue on PVE 6.2.
I realized it happens only on Windows 2016, not 2019, and only when I set my CPU type to "host" and try to distribute the cores across more than one CPU socket.
For example, I had this setting on one of my machines:

CPU type: host
Sockets: 2
Cores: 3
Total cores available to the VM: 6

I constantly had 20-30% CPU usage overhead from "System" inside Windows.

As soon as I changed the CPU type to the default kvm64, or changed the sockets to 1 and assigned all cores from a single socket, the issue went away.
It's 2023, running Proxmox 8.0 now.

Had the same issue, and this solved it: 1 socket, and changed the CPU type to kvm64 instead of host. Voilà!

Thanks!
 
