[SOLVED] Windows Guest Boot Hangs After Enabling Hyper-V in the Guest on 13th Gen Intel CPU

brewdamaster

Member
May 11, 2022
Hi, I have been banging my head against the wall trying to get a working configuration for enabling Hyper-V in a Windows guest on a 13700K Intel host.

I've tried various CPU flags and kernel versions (5.15-6.2), as well as Windows 10/11/Server 2019/Server 2022 as guest OS options, with no joy so far. With every configuration I use, the boot hangs after enabling Hyper-V in the guest.

What are the best practices when trying a setup like this? I used a 10700K as a proof of concept of this idea and was able to enable Hyper-V and utilize GPU-P to split a passed-through NVIDIA GPU across the guest and the nested guest within. Now on 13th gen I do not get the same result with the same VM details and BIOS options.

I only need to pass through the whole GPU via VFIO and don't have issues with Code 43 in the guest, so I am just trying to get nested virtualization to work in a Windows guest with Hyper-V. How can I troubleshoot why the guest hangs on the boot logo in the console?
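
For reference, one basic sanity check (assuming an Intel host like mine) is that the host actually exposes nested virtualization at all:

Code:
# Should print Y (or 1); if not, nested virtualization is off on the host
cat /sys/module/kvm_intel/parameters/nested

# To enable it persistently, then reload the module with all VMs stopped:
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
modprobe -r kvm_intel && modprobe kvm_intel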

Thanks! And please let me know if I can provide any more info!
 
As a follow-up, I was able to get this to work on the 13700K Intel host with the following arguments in the VM conf file:

Code:
args: -cpu SandyBridge,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,+vmx
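
In case it helps anyone else: the args line goes into the VM config under /etc/pve/qemu-server/, or can be set from the CLI. A sketch, with 100 standing in for your actual VM ID:

Code:
# Append custom QEMU arguments to the VM config (100 = example VM ID);
# equivalent to adding the "args: ..." line to /etc/pve/qemu-server/100.conf
qm set 100 --args "-cpu SandyBridge,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,+vmx"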
 
Sorry for the necroposting, but I searched so hard for a solution to this problem (which also affects 12th gen) that I really want to thank @brewdamaster and confirm it's working.

The performance hit is huge though (nearly 45%).
Here is a Geekbench 6 comparison between the host and SandyBridge options:
[Screenshot: Geekbench 6 comparison, host vs. SandyBridge]

Using a more recent CPU model and enabling all the features supported on 12th gen, like below,
Code:
args: -cpu Cascadelake-Server-noTSX,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,+vmx,+pdpe1gb,+md-clear,+mds-no,+taa-no,+tsx-ctrl,+spec-ctrl,+stibp,+ssbd,+pcid

mitigates the performance hit (down to roughly 10%):
[Screenshot: Geekbench 6 results with Cascadelake-Server-noTSX and the extra feature flags]
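
If you want to see which named CPU models your QEMU build knows before experimenting, you can list them on the host (on Proxmox, kvm is the QEMU binary):

Code:
# List all CPU models known to the installed QEMU build:
/usr/bin/kvm -cpu help
# e.g. narrow it down to the Cascade Lake variants:
/usr/bin/kvm -cpu help | grep -i cascadelake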
 
Thanks @fakezeta! I was able to get almost double the single-core score on my 13700K with your flags. The multi-core score didn't improve as much (about 30%), but that's still a nice performance bump!
 
I'm experimenting with other CPU models.
The most recent one I could get working was Icelake-Server-v4, and performance is near "host" with HVCI off:
Code:
args: -cpu Icelake-Server-v4,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,+vmx,+pdpe1gb,+md-clear,+mds-no,+taa-no,+tsx-ctrl,+spec-ctrl,+stibp,+ssbd,+pcid
[Screenshot: Geekbench results with Icelake-Server-v4 and HVCI off]

With HVCI on, performance instead drops by about 30%, but I think we have to live with some performance impact on nested virtualization. :)
[Screenshot: Geekbench results with Icelake-Server-v4 and HVCI on]
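
A quick way to sanity-check which of those extra +flags the host CPU actually advertises (note the kernel spells some with underscores, e.g. md_clear, where QEMU uses md-clear):

Code:
# Show which of the relevant feature flags the host CPU reports:
grep -m1 -owE 'pdpe1gb|md_clear|stibp|ssbd|pcid' /proc/cpuinfo | sort -u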
 
I was searching for days trying to get my Windows 10 Pro desktop VM back online again (while keeping the host CPU type) after enabling Core Isolation (aka Virtualization-based Security) without taking a snapshot in advance. And then I luckily found this link. It turned out that getting nested Hyper-V and WSL running on my Intel i7-13700T Proxmox VE 8.1 host was just a matter of adding
Code:
args: -cpu host,level=30
in the VM config file.

Afterwards the VM instantly booted again, but it still felt a bit slow and its assigned vCPUs constantly ran at high load. After carefully reading the QEMU documentation on Hyper-V Enlightenments for the hv_* flags, I enabled everything that promised some performance gain:
[Screenshot: Geekbench results with the additional hv_* enlightenments enabled]

From this I would recommend using something like the following custom arguments in the VM config for such a processor:
Code:
args: -cpu host,level=30,hv_relaxed,hv_reset,hv_runtime,hv_time,hv_spinlocks=0x1fff,hv_vapic,hv_vpindex,hv_ipi,hv_synic,hv_stimer,hv_apicv,hv_xmm_input,hv_stimer_direct,hv_frequencies,hv_reenlightenment,hv_evmcs,hv_emsr_bitmap,hv_tlbflush,hv_tlbflush_ext,hv_tlbflush_direct
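
To double-check that Proxmox actually picks these up, you can dump the full QEMU command line it generates for the VM (again, 100 = example VM ID):

Code:
# Print the generated QEMU command and pull out the -cpu argument(s):
qm showcmd 100 | tr ' ' '\n' | grep -A1 '^-cpu'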

Please note that the hv_evmcs flag is Intel-only and some of the others might only be supported on AlderLake/RaptorLake processors and newer.

But using hv_passthrough also seems to work fine and is much simpler and less error-prone (though, per the QEMU docs, it enables every enlightenment the host supports and is not migration-safe):
Code:
args: -cpu host,hv_passthrough,level=30

Guess this would also allow using Credential Guard with Windows 10 Enterprise or Windows 11 (sorry for the German screenshots):
[Screenshot: msinfo32 showing virtualization-based security running]

Windows Subsystem for Linux (WSL) also works without any issues:
[Screenshots: WSL session in the guest, plus Geekbench results under WSL]

To get Windows 11 Pro started, it was necessary to add the -waitpkg flag (same link as above) and use the following arguments:
Code:
args: -cpu host,hv_passthrough,level=30,-waitpkg
It was also essential that MSRs are ignored according to the nested docs. Without the two adjustments, the test machine did not boot or ended up with a BSOD.
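
For completeness, the MSR adjustment from the nested docs is a KVM module option on the host; something like:

Code:
# Ignore unhandled MSR reads/writes instead of injecting a fault into the
# guest, and keep dmesg quiet about the ignored accesses:
echo "options kvm ignore_msrs=1 report_ignored_msrs=0" > /etc/modprobe.d/kvm.conf
# then reboot the host (or reload the kvm modules with all VMs stopped)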

For some reason, Windows 11 works much faster than Windows 10 and gives close to bare metal results (also in WSL) with Hyper-V and VBS enabled:
[Screenshots: Geekbench results on Windows 11, and msinfo32 showing Hyper-V and VBS active]

For the benchmarks I was only using 6 P-cores (x2), while the other 2 P-cores were dedicated to the PVE host.
All 8 E-cores were used exclusively for 6 background VMs (nextcloud, omv, dc, ...) that were more or less idling during the tests.

Thanks to @brewdamaster and @fakezeta for this thread for search inspirations!

Hope this is helpful.
 
Wow, I think this just solved my automatic repair boot loop issue:
Code:
args: -cpu host,hv_passthrough,level=30,-waitpkg

Thanks @p.b

Now if only I could work out how to re-enable vGPU passthrough without getting Code 43 on the Iris Xe vGPU; hope I don't need the V2 extended topology feature for that...
 
I think this resolved our issue. We just got 5th-generation Xeon R760 servers, and to my surprise we couldn't boot with the host CPU type, which is required to run WSL/nested virtualization (besides the fact that virtualized CPU models have awful performance in some areas).

After using the suggested -cpu host,hv_pass... arguments we can boot, and "wsl --install -d Ubuntu-20.04" completes after a reboot, whereas before it would error about the BIOS needing virtualization support.
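
If it helps anyone else verify: from inside the Windows guest, systeminfo reports the virtualization prerequisites (once the Hyper-V role is actually installed, it just reports that a hypervisor has been detected):

Code:
# Run inside the Windows guest (PowerShell prompt):
systeminfo | findstr /i /C:"Hyper-V" /C:"Virtualization"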

Thank you

I don't understand why Windows is so weird about drivers.
 
Well, this was a huge find. I ignorantly installed WSL on Win11 yesterday without doing research, rebooted, and found myself in recovery mode. I attempted to search, but there were so many false paths and so much old information. I managed to find this thread this morning, and the "Win11 args" booted me right in!

Thanks to everyone who contributed to this thread.
 
Are there driver updates for the CPU as well? My i7-13700K shows up with a basic Microsoft driver from 4/21/2009, version "10.0.22621.3374", and I can only find drivers online for the iGPU.
 
This is normal. CPUs don't have drivers; my 13th gen shows up with the same 2009 driver in Windows.
 
Just wanted to give a huge thanks to @brewdamaster and the others here. A few months ago, I migrated my Proxmox cluster from some old Xeon servers to more power/noise/size friendly i9 and i7 boxes. Everything was smooth EXCEPT my Proxmox VMs running Hyper-V servers. On my old Xeon servers, I didn't do anything special, just set the VM CPU to host and Hyper-V worked just fine.

I was having a heck of a time trying dozens of different flags and CPU models. Nothing I did was able to solve the issue and I'd nearly given up on running Hyper-V servers on the new cluster.

The Proxmox wiki page on nested virtualization was useless, since none of the instructions there produced a working Hyper-V server.

This thread was the first information I found that actually works. After dozens of searches, it finally got me sorted out. I now have a fully functional Azure Stack HCI cluster running on Intel i9 processors.

Thank you!
 
