High VM-EXIT and Host CPU usage on idle with Windows Server 2025

@Jostein Fossheim
If, instead of the CPU model "host", you use x86-64-v3 or something similar, performance will be much better, because nested virtualization isn't available to the VM.
This is the way ^^

You can also disable nested virtualization and keep "host" as the CPU model, and performance is good again.
This is the other way, if you want to keep "host" as the vCPU type and don't want to switch.
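For reference, both routes look roughly like this from the Proxmox host shell. A sketch only: VM ID 110 is a placeholder, and the modprobe route affects every VM on the host and requires all VMs to be stopped (or a reboot) to take effect.

```shell
# Route 1: switch the vCPU type away from "host", which hides the
# virtualization extensions (VMX/SVM) from the guest
qm set 110 --cpu x86-64-v3

# Route 2: keep cpu=host, but disable nested virtualization in the KVM module
echo "options kvm-intel nested=0" > /etc/modprobe.d/kvm-intel.conf
# (AMD hosts: "options kvm-amd nested=0" in /etc/modprobe.d/kvm-amd.conf)
modprobe -r kvm_intel && modprobe kvm_intel   # or simply reboot the host
```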
 
I've revisited VBS / Credential Guard on Proxmox guests on our AMD cluster many times over the years. I haven't seen much improvement on our cluster...

When set to the "host" CPU type, Windows 11/2025 will automatically attempt to enable VBS when nested virtualization extensions are available, which causes high idle CPU utilization and laggy interactive performance in the VM. Our Windows domain has VBS/Credential Guard enabled by group policy as well.

I stumbled on this yesterday while researching this...

https://williamlam.com/2023/07/vsph...eneration-2-nested-vm-running-on-amd-cpu.html

I suspect there's a similar problem going on with Proxmox, on certain hardware platforms?
 
Same here, Proxmox 8.3.5, Windows Server 2025 Datacenter with all available Windows updates, latest virtio drivers (266).

I freshly installed Win2025 on an already existing VM (Win2022 was installed before; I wiped the virtual disk), so I can easily see that there is a huge difference between 2022 and 2025.
The peak was the installation phase and reboots; in the afternoon the server was idle.

1743093804654.png
 
This is the cost of the nested virtualization used by VBS.
The easier fix is to switch the vCPU type to x86-64-v2-AES, or v3 if the physical CPU is newer.
What is the physical host CPU model?
The host CPU on this test machine is an i9-13900. I will check on the VBS topic.
 
This is the cost of the nested virtualization used by VBS.
The easier fix is to switch the vCPU type to x86-64-v2-AES, or v3 if the physical CPU is newer.
What is the physical host CPU model?
Unfortunately, this isn't a valid option for modern enterprise environments. VBS and associated technologies are effectively required by contractual obligation for many businesses these days. There are security controls and configuration baselines that require it.

My understanding is that modern CPUs have nested virtualization/paging capabilities that should allow this to work with very little performance compromise. It seems to me like there's just an implementation problem/bug in KVM/QEMU preventing it from working as intended.
 
I went ahead and tried the following:
- Disabled nested virtualization using kvm-intel.conf, rebooted the server
- Set the virtual machine to use x86-64-v4 and x86-64-v2-AES

Unfortunately, that did not change the performance characteristics related to idle CPU interrupts on the Windows Server 2025 virtual machine.
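For anyone retracing these steps, the resulting state can be double-checked from the host after the reboot. A sketch, with <vmid> as a placeholder for the VM ID:

```shell
# Confirm the kvm-intel.conf change took effect after the reboot
cat /sys/module/kvm_intel/parameters/nested   # expect "N" (or "0")

# Confirm which CPU type the VM is actually configured with
qm config <vmid> | grep ^cpu
```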

1743962438194.png

msinfo32
1743962761525.png
 
@orange Did you find a fix yet?

I am having the same issue with a Windows Server 2022 VM upgraded to Windows Server 2025 on an Intel Core i7-12700 host.
It is idling at about 2-3% CPU usage vs. < 1% on WS2022, with 8 cores configured and CPU type "host".

In powertop on the host, Pkg(HW) never enters any C-state when WS2025 is running. With WS2022 running, it spends about 10-20% of its time in C2 (idle).

VBS is disabled and the WS2025 VM itself is quite snappy via RDP.
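In case anyone wants to reproduce the package C-state measurement, a sketch of what can be run on the PVE host (turbostat ships in Debian's linux-cpupower package; the exact column names available vary by CPU generation):

```shell
# Interactive view: powertop's "Idle stats" tab shows Pkg(HW) residency
powertop

# Non-interactive sampling of core/package C-state residency every 5 s
turbostat --quiet --interval 5 --show Busy%,CPU%c1,CPU%c6,Pkg%pc2,Pkg%pc6
```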
 
Have you tried x86-64-v2-aes as the vCPU type?

Yes, I forgot to mention it, as it's been referenced earlier in the thread. I tried x86-64-v2-AES and x86-64-v3. Results were the same.

I did not, however, try to tinker with the spec-ctrl / pcid flags, as it is unclear to me whether I should select minus (-) or plus (+) to have an effect.
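For what it's worth, in Proxmox's CPU flag syntax, + explicitly exposes a flag to the guest and - masks it, overriding the default of the selected CPU type. A sketch of both directions (VM ID 110 is a placeholder, and no claim that these particular flags help with this issue):

```shell
# Expose spec-ctrl and pcid on top of the base CPU type
qm set 110 --cpu "x86-64-v2-AES,flags=+spec-ctrl;+pcid"

# Or mask them instead
qm set 110 --cpu "x86-64-v2-AES,flags=-spec-ctrl;-pcid"
```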
 
Just encountered the same issue after upgrading a Win 10 Pro VM to Win 11 Pro. VBS is off and I followed all the advice from this thread. Still seeing an idle percentage 2 to 3 times higher than before the upgrade. Inside the VM there is virtually no CPU usage, except that more time is spent on interrupts than it used to be. Perhaps a virtio driver issue? Did you find any solution in the meantime?
 
I haven't found a solution yet on my side. I spent quite some time on the issue: I completely disabled Spectre mitigations on the host (PVE) and in the guest VM, disabled power management, made sure VBS is disabled, etc., but none of it made a significant improvement. It looks like the newer kernels handle idle cores differently.

These are my host interrupts with Windows 2022. Note that only the first vCore generates events while the VM is idle.

Usage     Events/s  Description
6.1 ms/s      33.8  [PID 2616656] /usr/bin/kvm -id 116 -name vm-windows2022
7.0 ms/s       0.7  [PID 2616657] /usr/bin/kvm -id 116 -name vm-windows2022
5.9 ms/s       1.1  [PID 2616661] /usr/bin/kvm -id 116 -name vm-windows2022
4.2 ms/s       1.6  [PID 2616659] /usr/bin/kvm -id 116 -name vm-windows2022

These are the interrupts with Windows 2025. It is the same base VM, upgraded from Windows 2022, idle and up to date.

Usage     Events/s  Description
16.8 ms/s    221.1  [PID 2622138] /usr/bin/kvm -id 110 -name vm-windows2025
13.8 ms/s    201.0  [PID 2622140] /usr/bin/kvm -id 110 -name vm-windows2025
11.8 ms/s    180.3  [PID 2622139] /usr/bin/kvm -id 110 -name vm-windows2025
11.2 ms/s    130.5  [PID 2622137] /usr/bin/kvm -id 110 -name vm-windows2025

It seems the newer kernel generates interrupts on each core whether it is idle or not.
I tried increasing the vCore count to 8, and each core also gets about 100-200 events/s.

In the end, it increases power consumption by about 2 W on my host, so I stopped chasing the issue.
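If someone does pick this up again, the exit activity can be broken down by exit reason from the host with perf's KVM support. A sketch (the PID is the example guest thread from the powertop output above):

```shell
# Live per-exit-reason statistics for running guests (run as root on the PVE host)
perf kvm stat live

# Or record a 10-second window for one guest process and summarize it
perf kvm stat record -p 2622138 sleep 10
perf kvm stat report
```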