On a PVE system with the pve-qemu-kvm 11.0.0-1 package installed, virtual machines with the Hyper-V role added get stuck in a boot loop.

uzumo

Apr 5, 2025
I am currently testing pve-qemu-kvm 11.0.0-1 from the test repository to utilize hardware acceleration via MBEC/GMET on HVCI.


Virtual machines with the Hyper-V role added get stuck in a boot loop with the following combinations:

Code:
pve-qemu-kvm 11.0.0-1 + Linux 7.0.0-3-pve
pve-qemu-kvm 11.0.0-1 + Linux 7.0.2-1-pve

The system boots without issues in the following cases:

Code:
pve-qemu-kvm 11.0.0-1 + Linux 7.0.0-2-pve
pve-qemu-kvm 10.2.1-2 + Linux 7.0.0-3-pve
pve-qemu-kvm 10.2.1-2 + Linux 7.0.2-1-pve

A temporary workaround is to downgrade and pin the previous QEMU package:

Code:
apt install pve-qemu-kvm=10.2.1-2
apt-mark hold pve-qemu-kvm
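Once a fixed package is released, the hold can be lifted again so normal upgrades resume (standard apt commands, run as root on the PVE host):

```shell
# Release the hold so apt considers newer versions again
apt-mark unhold pve-qemu-kvm
# Then upgrade to the latest available pve-qemu-kvm
apt install pve-qemu-kvm
```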

It appears that there is an issue with the combination of a kernel that supports MBEC/GMET and QEMU 11.

*7.0.0-3-pve is the first kernel to which MBEC/GMET v3 was backported.
7.0.2-1-pve is a kernel with MBEC/GMET v5 backported.
Version 7.0.0-2-pve does not include the MBEC/GMET backports, and the issue does not occur with that kernel, nor does it occur with QEMU 10 on any of the kernels tested. We have therefore determined that the issue occurs specifically when a kernel that includes MBEC/GMET is combined with QEMU 11.
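For anyone trying to reproduce this, the running kernel and the installed QEMU package version can be confirmed with standard Debian/PVE commands (the dpkg-query line only works on a host where pve-qemu-kvm is installed):

```shell
# Running kernel release, e.g. 7.0.2-1-pve
uname -r
# Installed pve-qemu-kvm package version
dpkg-query -W -f='${Version}\n' pve-qemu-kvm
```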

Is there anything you can provide to help us investigate this?
 
Hi,
thank you for the report! My colleague @driley also ran into the issue and is looking into it.
 
Hi,
As @fiona pointed out, I ran into the exact same issue and narrowed it down to a couple of new CPU-flag additions in QEMU 11. For me, these two options caused the boot hang:
Code:
cet-ibt,cet-ss

On Kernel 7.0.2-1-pve
I got my Windows Server (with VBS active) to boot using the following setup:
Code:
args: -cpu host,level=30,-cet-ibt,-cet-ss

CPU: Intel(R) Xeon(R) Gold 6426Y
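For reference, the same args line can also be set via the Proxmox CLI instead of editing the VM config file by hand (a sketch; 100 is a placeholder VMID, and `qm set --args` requires root):

```shell
# Disable the CET flags for this guest (VMID 100 is a placeholder)
qm set 100 --args '-cpu host,level=30,-cet-ibt,-cet-ss'
```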

Keep me posted on whether this resolves the issue for you as well.
 
Thank you!!

I have confirmed that nested Hyper-V starts up normally.

I was also able to enable HVCI, and benchmark results do not appear to show any performance degradation.

*We have not observed any decline in CPU performance; however, there is a noticeable drop in performance with GPU passthrough.

It is a bit of a hassle to have to apply this setting, after the update, to every virtual machine that has the Hyper-V role added.

However, since nested Hyper-V itself is not something we would run in a production environment, I can accept having to add this setting.

*As someone who is constantly testing, nested virtualization is an essential feature for me, but as long as it doesn’t stop working entirely, this is fine.
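To take some of the hassle out of the per-VM setting, the override could be scripted across the affected guests (a sketch; the VMIDs below are placeholders for the machines running the Hyper-V role, to be run as root on the PVE host):

```shell
# Apply the CET workaround to each affected VM
# (VMIDs are placeholders; adjust to your environment)
for vmid in 101 102 103; do
    qm set "$vmid" --args '-cpu host,level=30,-cet-ibt,-cet-ss'
done
```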

edit

To be clear, the problem is that simply applying the pve-qemu-kvm=11.0.0-1 update prevents affected guests from booting at all. It might be tolerable if only nested Hyper-V were affected, but enabling RDS or HVCI also prevents the OS from booting.

edit2

Even with pve-qemu-kvm=11.0.0-1, you likely won't encounter any issues when adding the RDS or Hyper-V role on Windows Server 2022 and earlier.
On Windows Server 2025, however, adding the RDS or Hyper-V role will prevent the system from booting.
 