[Virtual Machine Config] Windows 11 Pro Memory Integrity: Does it require nested virtualization?

Sep 1, 2022
My Windows 11 Pro VM wants me to enable Core Isolation for enhanced security.
Source: https://support.microsoft.com/en-us...e57-b1c5-599f-3a4c6a61c5e2#bkmk_coreisolation

Secured-core PC
A Secured-core PC is designed to provide advanced security features right out of the box. These PCs integrate hardware, firmware, and software to offer robust protection against sophisticated threats.

In the Windows Security app on your PC, select Device security > Security details.

For more information, see Windows 11 Secured-core PCs.

Core isolation
Core isolation provides security features designed to protect core processes of Windows from malicious software by isolating them in memory. It does this by running those core processes in a virtualized environment.


Memory integrity
Memory integrity, also known as Hypervisor-protected Code Integrity (HVCI), is a Windows security feature that makes it difficult for malicious programs to use low-level drivers to hijack your PC.

A driver is a piece of software that lets the operating system (Windows in this case) and a device (like a keyboard or a webcam) talk to each other. When the device wants Windows to do something, it uses the driver to send that request.

Memory integrity works by creating an isolated environment using hardware virtualization.

Think of it like a security guard inside a locked booth. This isolated environment (the locked booth in our analogy) prevents the memory integrity feature from being tampered with by an attacker. A program that wants to run a piece of code which may be dangerous has to pass the code to memory integrity inside that virtual booth so that it can be verified. When memory integrity is satisfied that the code is safe, it hands the code back to Windows to run. Typically, this happens very quickly.

Without memory integrity running, the security guard stands right out in the open where it's much easier for an attacker to interfere with or sabotage the guard, making it easier for malicious code to sneak past and cause problems.

Does anyone running a Windows 11 VM know if these security features require Proxmox to have nested virtualization enabled for the VM?

Thanks!
 
I don't think so, but it only works when your VM has the CPU set to "host" and the host CPU is officially supported by Windows 11. Just be aware that enabling Core Isolation and/or virtualization-based security can have a hefty performance impact, depending on your hardware.
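For what it's worth, whether the KVM module on the Proxmox host allows nested guests can be checked via the standard KVM module parameters. A minimal sketch; only one of the two files will exist, depending on your CPU vendor:

```shell
# Print the nested-virtualization setting of whichever KVM module is loaded.
for f in /sys/module/kvm_intel/parameters/nested \
         /sys/module/kvm_amd/parameters/nested; do
    if [ -r "$f" ]; then
        printf '%s: %s\n' "$f" "$(cat "$f")"
    fi
done
# To enable it persistently (Intel shown; AMD uses "options kvm_amd nested=1"):
#   echo "options kvm_intel nested=Y" > /etc/modprobe.d/kvm-nested.conf
```

On current Proxmox kernels nested virtualization is usually on by default, so this is mostly a sanity check.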
 
I don't think so, but it only works when your VM has the CPU set to "host" and the host CPU is officially supported by Windows 11. Just be aware that enabling Core Isolation and/or virtualization-based security can have a hefty performance impact, depending on your hardware.
Thanks!

Any suggestions on where I can read more about the performance implications? I was considering turning it on for a VM I use for Office work, so unless it's going to torpedo Microsoft Office performance, I'm guessing it would be okay.

OTOH, I hate to yolo things. :P
 
There are some threads in this forum about this:
https://forum.proxmox.com/threads/t...-of-windows-when-the-cpu-type-is-host.163114/
(German, Windows Server) https://forum.proxmox.com/threads/proxmox-und-windows-server-2025-vms-auf-dl380-gen10.168153/

It's not clear (at least to me) what settings to use for good performance if you want to use VBS / Core Isolation.

As far as I know, "host" is needed; otherwise you can't activate VBS / Core Isolation, or it gets deactivated on reboot?
Try at least x86-64-v3, or even better the more specific type for your CPU, like EPYC-Genoa-v1 if you have an AMD Epyc Genoa.
Try to use the highest type possible, so your VM can use the newest CPU features available, which may help performance.
On AMD Epyc, someone suggested disabling C-states in the BIOS (PVE host) and removing the TPM (Windows Server 2025) from the VM, but that's not an option for Windows 11 ...

Here is a list of all CPU types supported by QEMU; maybe not all of them are available for use in PVE.

Create a backup of your VM before doing anything; maybe even clone it and test the different CPU types on the clone.
Maybe the performance hit is not that bad and you'll barely notice it?
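As a concrete sketch of the suggestion above: the CPU type is a single line in the VM's config file (the path below uses a hypothetical VM ID 101), so testing different types on a clone is just a matter of editing that line:

```
# /etc/pve/qemu-server/101.conf (101 is a hypothetical VM ID)
# Pick the highest model your host actually supports, e.g.:
cpu: x86-64-v3
# or, on an AMD Epyc Genoa host:
# cpu: EPYC-Genoa-v1
```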
 
Some notes on this:
  1. You don't need the "host" CPU type.
  2. Memory integrity uses Hyper-V to run critical processes inside a virtualized environment. Basically, right now it requires nested virtualization.
  3. As you may know, nested virtualization can be slow, and it also depends on the vendor. For example, AMD support for nested virtualization was added only recently and is still not as mature as Intel's.
  4. To minimize the effect, you can enable special parameters to improve performance. The hv-evmcs option in the Proxmox GUI will help, but only for Intel CPUs. I recommend manually editing the VM's config file under /etc/pve/qemu-server/ and adding this:
    Code:
    args: -cpu Skylake-Client-v4,vmx,hv-passthrough
    This will enable nested virtualization and all available Hyper-V enlightenments.
  5. You could set up huge 1 GB pages on your host and advise the guest to use them with pdpe1gb, but in my tests the difference is negligible.
  6. Microsoft is actively working to improve the situation. In the recently released QEMU 10.2, they added the MSHV accelerator, which basically allows running Hyper-V VMs without nested virtualization. I haven't tested it yet, but it seems we should be able to enable memory integrity without performance penalties.

Eagerly await QEMU 10.2 in the Proxmox pve-test repo.
 
Some notes on this:
  1. You don't need the "host" CPU type.
  2. Memory integrity uses Hyper-V to run critical processes inside a virtualized environment. Basically, right now it requires nested virtualization.
  3. As you may know, nested virtualization can be slow, and it also depends on the vendor. For example, AMD support for nested virtualization was added only recently and is still not as mature as Intel's.
  4. To minimize the effect, you can enable special parameters to improve performance. The hv-evmcs option in the Proxmox GUI will help, but only for Intel CPUs. I recommend manually editing the VM's config file under /etc/pve/qemu-server/ and adding this:
    Code:
    args: -cpu Skylake-Client-v4,vmx,hv-passthrough
    This will enable nested virtualization and all available Hyper-V enlightenments.
  5. You could set up huge 1 GB pages on your host and advise the guest to use them with pdpe1gb, but in my tests the difference is negligible.
  6. Microsoft is actively working to improve the situation. In the recently released QEMU 10.2, they added the MSHV accelerator, which basically allows running Hyper-V VMs without nested virtualization. I haven't tested it yet, but it seems we should be able to enable memory integrity without performance penalties.

Eagerly await QEMU 10.2 in the Proxmox pve-test repo.
It is now in the test repo.

There is a bug that shows slow performance in the web dashboard, but apparently performance is not actually slow, as seen in the thread below.

https://forum.proxmox.com/threads/applying-pve-qemu-kvm-10-2-1-1-may-cause-extremely-high-“i-o-delay”-and-extremely-high-“i-o-pressure-stalls”-patches-in-the-test-repository.182186/page-2
 
Unfortunately, I was wrong in my assumptions about the MSHV accelerator. Currently it's only intended for Hyper-V-based environments, to avoid L2 nested-virtualization overhead for Linux guests using QEMU.
 
Unfortunately, I was wrong in my assumptions about the MSHV accelerator. Currently it's only intended for Hyper-V-based environments, to avoid L2 nested-virtualization overhead for Linux guests using QEMU.
Why do you think it would not be helpful? As I understand it (and I might be wrong), Microsoft has developed a new QEMU accelerator that runs on Linux hosts to accelerate Windows guests. I have installed QEMU 10.2 from pve-test, but I noticed that the mshv kernel module is not compiled into the PVE kernel, so I started looking at how to get it compiled.

https://fosdem.org/2026/events/atta...r-in-qemu/slides/266752/mshv_qemu_2gzauv1.pdf

If I understand this correctly, instead of the KVM accelerator we would need to use the MSHV accelerator when running QEMU. There are still a few limitations, but every kernel release adds more features.

What is your opinion? Or someone else's?
 
As I understand it (and I might be wrong), Microsoft has developed a new QEMU accelerator that runs on Linux hosts to accelerate Windows guests.

Edit: the following will not work on bare metal. It would only work on Microsoft Hyper-V. I was wrong.


This is Microsoft's effort to make QEMU run natively on top of the Microsoft Hypervisor (as used in Azure Linux root partitions), similar to how QEMU uses KVM on standard Linux. It's particularly relevant for running QEMU-based VMs on Azure infrastructure, or on any Linux system where MSHV is the underlying hypervisor instead of (or perhaps alongside?) KVM.

MSHV (/dev/mshv) is not something you install on bare metal the way you install KVM. The Microsoft Hypervisor is a Type 1 hypervisor, and the MSHV kernel driver exposes an IOCTL interface via /dev/mshv, but only when Linux is actually running as the root partition on top of the Microsoft Hypervisor (e.g., on Azure infrastructure, or on a machine where the Microsoft hypervisor is the underlying firmware-level hypervisor).
MSHV only makes practical sense on Azure or on dedicated Hyper-V root-partition hardware.

Though you can build a custom kernel with Hyper-V root-partition support and try to run Proxmox on that kernel (good luck):

Code:
# Install kernel build deps
sudo apt install -y build-essential libncurses-dev bison flex \
    libssl-dev libelf-dev bc dwarves fakeroot

# Clone Microsoft's MSHV kernel tree
git clone https://github.com/microsoft/OHCL-Linux-Kernel.git
cd OHCL-Linux-Kernel

# Use the MSHV-enabled config
cp config-mshv-builtin .config

# Enable the key option
scripts/config --enable CONFIG_HYPERV_ROOT_API

# Build kernel as a Debian package
make -j$(nproc) bindeb-pkg

# Install
sudo dpkg -i ../linux-image-*.deb
sudo reboot

After reboot, check: ls /dev/mshv

Let me know your progress - I have only one use case that would benefit from this, but I doubt it is worth the administrative burden. As long as MSHV is not integrated by default into the Proxmox kernel, I would not use it in production. But as a research project it would be interesting.
 
I don't know how it's supposed to work if you run Proxmox on a bare-metal setup.
You are right - that will not work. I am wrong.

The Microsoft Hypervisor is not a standalone installable component. It exists in two forms:
  1. Embedded in Windows - enabled when you turn on the Hyper-V role in Windows Server or Windows 11. The hypervisor binary loads at boot via UEFI, before Windows itself.
  2. Running internally at Azure - Microsoft runs it on their datacenter hardware as the firmware-level hypervisor, with Linux as the root partition on top.

Researchers who tried to extract the actual hypervisor binary found no way to separate it from a Hyper-V Server Core installation. References to the actual codebase are sparse, and there is no public documentation about how it is connected to the Windows boot process.
 
This is Microsoft's effort to make QEMU run natively on top of the Microsoft Hypervisor (as used in Azure Linux root partitions), similar to how QEMU uses KVM on standard Linux. It's particularly relevant for running QEMU-based VMs on Azure infrastructure, or on any Linux system where MSHV is the underlying hypervisor instead of (or perhaps alongside?) KVM.

MSHV (/dev/mshv) is not something you install on bare metal the way you install KVM. The Microsoft Hypervisor is a Type 1 hypervisor, and the MSHV kernel driver exposes an IOCTL interface via /dev/mshv, but only when Linux is actually running as the root partition on top of the Microsoft Hypervisor (e.g., on Azure infrastructure, or on a machine where the Microsoft hypervisor is the underlying firmware-level hypervisor).
MSHV only makes practical sense on Azure or on dedicated Hyper-V root-partition hardware.

Though you can build a custom kernel with Hyper-V root-partition support and try to run Proxmox on that kernel (good luck):

Code:
# Install kernel build deps
sudo apt install -y build-essential libncurses-dev bison flex \
    libssl-dev libelf-dev bc dwarves fakeroot

# Clone Microsoft's MSHV kernel tree
git clone https://github.com/microsoft/OHCL-Linux-Kernel.git
cd OHCL-Linux-Kernel

# Use the MSHV-enabled config
cp config-mshv-builtin .config

# Enable the key option
scripts/config --enable CONFIG_HYPERV_ROOT_API

# Build kernel as a Debian package
make -j$(nproc) bindeb-pkg

# Install
sudo dpkg -i ../linux-image-*.deb
sudo reboot

After reboot, check: ls /dev/mshv

Let me know your progress - I have only one use case that would benefit from this, but I doubt it is worth the administrative burden. As long as MSHV is not integrated by default into the Proxmox kernel, I would not use it in production. But as a research project it would be interesting.
Thank you very much for the explanation. Now it makes a bit more sense. It seems this is really useful for Microsoft, as they do not want to give up on Hyper-V as the hypervisor in Azure.

In this case, use of MSHV as the accelerator is very limited outside of their "bubble".

I was thinking about how I could resolve the nested-virtualization overhead (VBS) inside Win11 or Win Server 2025, but this does not look to be useful for that.

I noticed that the new Ubuntu LTS release will have mshv built in. It will be released this month.
 
But there are other options, like OpenVMM and, in the future, OpenHCL:

OpenVMM is a Type-2 virtual machine monitor written in Rust. On Linux it runs on top of KVM, so it uses /dev/kvm as its acceleration backend, not MSHV. It's essentially an alternative to QEMU, but using Microsoft's Hyper-V device model (VMBus, storvsp, netvsp) instead of virtio/QEMU devices.
Microsoft itself warns that OpenVMM on the host is not yet ready to run end-user workloads and should be treated as a development platform for implementing new OpenVMM features rather than a ready-to-deploy application.
So on Proxmox, OpenVMM would use KVM underneath (same as QEMU) and run VMs with Hyper-V-compatible devices, but it would have zero integration with Proxmox's UI, storage, or networking stack. (I am a console guy anyway.)
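Since OpenVMM's Linux backend sits on /dev/kvm just like QEMU, a quick sanity check before building it is whether the KVM device exists and is writable. A minimal sketch (the messages are mine):

```shell
# OpenVMM on Linux uses /dev/kvm as its acceleration backend (same as QEMU).
if [ -c /dev/kvm ] && [ -w /dev/kvm ]; then
    echo "KVM device available"
else
    echo "no usable /dev/kvm (virtualization disabled, or missing kvm group membership?)"
fi
```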

That can already be run on Proxmox as a standalone binary, but with caveats:

Code:
# Install Rust (OpenVMM is Rust-only, no prebuilt packages)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env

# Install build dependencies
apt install -y git build-essential protobuf-compiler \
    pkg-config libssl-dev clang

git clone https://github.com/microsoft/openvmm.git
cd openvmm

# Fetch required packages (UEFI firmware, test kernels, protoc, etc.)
cargo xflowey restore-packages

# Build OpenVMM
cargo build --release -p openvmm

Run a VM (example: Alpine Linux). OpenVMM's CLI is quite different from QEMU's:

Code:
# Basic VM with KVM backend, using OpenVMM's bundled UEFI
./target/release/openvmm \
    --backend kvm \
    --memory 1024 \
    --processors 2 \
    --disk path=alpine.img \
    --nic consomme   # built-in NAT networking


The one realistic use case today is running Windows VMs with native Hyper-V devices, without needing virtio drivers.
Since OpenVMM uses the Hyper-V device model natively, Windows guests see it as a Hyper-V environment and need no extra driver installation.
But again, it's not production-grade yet.
The more interesting future direction is OpenHCL: Microsoft's open-source paravisor, which runs OpenVMM inside the guest at a higher privilege level. Microsoft has stated they look forward to developing OpenHCL support for KVM as a host, in collaboration with cloud providers and the Linux/KVM community. That would eventually bring confidential-VM features to KVM-based hypervisors like Proxmox, but that's still future work.
Bottom line: for a Proxmox homelab, there's limited benefit to running OpenVMM today over QEMU/KVM.
It's worth watching for when OpenHCL/KVM support matures.

Anyway, I might give it a shot - long easter weekend is coming ....
 
For me, the main problem with QEMU is not the need for virtio drivers but poor performance with nested virtualization (needed in order to use Hyper-V memory integrity). I don't think OpenVMM solves that problem, and /dev/mshv doesn't automagically solve it either; it needs to run inside a Hyper-V environment where it can communicate with the L0 hypervisor.
 
For me, the main problem with QEMU is not the need for virtio drivers but poor performance with nested virtualization
Same here - I need Hyper-V enabled for my private GitHub runners. I will give it a try and compare performance. There IS a chance that it performs differently, because in that case Windows doesn't need virtio drivers but sees the underlying Hyper-V infrastructure.
 
I put together a manual on how to run OpenVMM on Proxmox:

https://github.com/bitranox/proxmox_openvmm

What was achieved so far:
- compile OpenVMM
- create an Alpine VM using OpenVMM instead of QEMU
- on a sparse ZFS image
- with network access via SSH
- console access via tmux
- finally, you get a Linux VM on Proxmox which uses OpenVMM on top of KVM instead of QEMU, providing Hyper-V-compatible devices
Windows 11 is running fine, without virtio drivers. It sees Hyper-V devices!

Happy hacking!
 