High VM-EXIT and Host CPU usage on idle with Windows Server 2025

@_gabriel exactly as @RoCE-geek said.


Hardware: 2x Xeon 6230R; both VMs had this config.

Code:
agent: 1
bios: ovmf
boot: order=scsi0;ide2;ide0;net0
cores: 52
cpu: Cascadelake-Server-v5,flags=+md-clear;+pcid;+spec-ctrl;+pdpe1gb;+hv-tlbflush;+hv-evmcs
efidisk0: linstor_nvme_1:pm-33087c22_114,efitype=4m,ms-cert=2023,pre-enrolled-keys=1,size=3080K
hotplug: 0
ide0: ISO-1:iso/virtio-win-0.1.285.iso,media=cdrom,size=771138K
ide2: ISO-1:iso/Win11_23H2_English_x64.iso,media=cdrom,size=6548134K
machine: pc-q35-10.1
memory: 64000
meta: creation-qemu=10.1.2,ctime=1764535929
name: win11-23h2
net0: virtio=BC:24:11:E4:D1:51,bridge=vmbr0,firewall=1
numa: 1
ostype: win11
scsi0: linstor_nvme_1:pm-a0c6d8aa_114,discard=on,iothread=1,size=159383560K,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=xxxxxxxxxxxxxxxxxxxxxxxxxxxx
sockets: 2
tpmstate0: linstor_nvme_1:pm-29d19dc3_114,size=4M,version=v2.0
vga: virtio

Win11 23H2 is only marginally faster across all cores (within the margin of error), but the single-core speedup is 18%, which roughly matches the max boost frequency observed during the single-core test run.

The Xeon 6230R boosted up to 3.75 GHz under Win11 23H2 vs. only 3.1 GHz turbo under Win11 25H2.
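If anyone wants to verify the boost behaviour themselves, one rough way is to watch the host's reported per-core clocks while the guest runs a single-threaded load (a sketch assuming x86 Linux on the PVE host, where /proc/cpuinfo exposes a "cpu MHz" field):

```shell
# Show the current clock of the fastest cores on the PVE host while the
# guest benchmark runs; "cpu MHz" is the per-core frequency field on x86.
grep 'cpu MHz' /proc/cpuinfo | sort -t: -k2 -rn | head -5
```

If the top cores never leave base clock while the guest is busy on one vCPU, the missing turbo is visible on the host side too, not just in the guest benchmark numbers.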

Just for reference, updating PVE to 9.1 / Kernel 6.17 did reduce my CPU usage for VM's, including Windows Server 2025
Node CPU: AMD Epyc 9355P, VM Processor Type: x86-64-v4, virtio-win-0.1.271

Node1 Windows Server 2025 VM CPU usage:
View attachment 93485


Node2 Windows Server 2025 VM CPU usage:
View attachment 93487
 
Thanks, I'll check the impact. What's your VM version? And was the PVE upgrade the only change you made?
 
Here is a reference to the documentation: https://www.qemu.org/docs/master/system/i386/hyperv.html

The fallback to other timers that people have noticed is intended behaviour. Windows Server 2025 / Windows 11 introduced additional timers because some Windows software is broken and may not function well in a VM, so this is a Windows kernel change that causes more interrupts. You can play with the switches above to see if you can get better Hyper-V emulation, but even on real Hyper-V the CPU usage is higher.
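To actually quantify that interrupt/exit load rather than just watching host CPU graphs, you can count VM exits on the host with perf's KVM support (a sketch; assumes a perf build matching your kernel and the Proxmox default PID file location for the example VM ID 114):

```shell
# Attach to the QEMU process of VM 114 for 10 seconds and summarize
# VM-exit reasons; a timer-heavy "idle" guest typically shows large
# counts of interrupt- and MSR-related exits.
perf kvm stat record -p "$(cat /var/run/qemu-server/114.pid)" -- sleep 10
perf kvm stat report
```

Comparing the exit-reason breakdown between a 23H2 and a 25H2 guest at idle should make the extra timer traffic directly visible.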

Run qm showcmd for the VM, check whether hv_frequencies is in the output, and post the output here. If it's missing, add hv_frequencies via args.
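For the check above, something like this works (VM ID 114 is taken from the config earlier in the thread; adjust to yours):

```shell
# Print the full QEMU command line PVE generates for VM 114 and look
# for the enlightenment in either spelling (hv_frequencies / hv-frequencies).
qm showcmd 114 --pretty | grep -E 'hv[-_]frequencies' \
  || echo "hv_frequencies not set"

# If it is missing, one way is the args option, which appends raw QEMU
# arguments. Note that a later -cpu typically overrides the earlier one,
# so repeat the flags you want to keep, e.g. (sketch, not a full flag list):
# qm set 114 --args '-cpu Cascadelake-Server-v5,hv_frequencies,...'
```

Since args bypasses Proxmox's own CPU-flag handling, double-check the resulting command line with qm showcmd again before relying on it.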
 
I've been experimenting with many of these flags - no positive change with hv_frequencies either. There's no (clear) solution via the Hyper-V (hv-*) flags.
 
No huge difference for me after updating from kernel 6.16.8 to 6.17.8 (plain Debian), unfortunately. Still the same 2-3x higher idle load and no turbo boost...