Virtualization based security in Windows guests

dorseymet
May 10, 2024
Good morning,
I'm trying to get VBS (virtualization-based security) working in Windows guests. The feature works on our VMware vSphere hosts, but I'm running into some issues with Proxmox. I'm able to enable it in a Windows guest if I configure the VM with the vIOMMU option set to VirtIO. However, that leads to really high CPU usage, and an unrecognized PCI device shows up in Device Manager with no driver available for it. The hardware ID is PCI\VEN_1AF4&DEV_1057&SUBSYS_11001AF4&REV_01; the vendor ID comes up as Red Hat, but the device ID is a mystery.
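For what it's worth, that device ID can be decoded by hand. Per the virtio specification, modern virtio-pci devices use PCI device ID 0x1040 plus the virtio device type, and device type 23 is the virtio-iommu device, which matches DEV_1057 exactly (so the "mystery" device is the virtual IOMMU itself, for which Windows ships no inbox driver). A quick sketch, using the hardware-ID string from above:

```shell
# Pull the vendor/device IDs out of the Windows hardware-ID string
hwid='PCI\VEN_1AF4&DEV_1057&SUBSYS_11001AF4&REV_01'
ven=${hwid#*VEN_}; ven=${ven%%&*}   # 1AF4 = Red Hat (virtio vendor ID)
dev=${hwid#*DEV_}; dev=${dev%%&*}   # 1057
echo "vendor=$ven device=$dev"

# Modern virtio-pci device IDs are 0x1040 + virtio device type;
# device type 23 is virtio-iommu:
printf '0x1040 + 23 = 0x%04X\n' $((0x1040 + 23))   # prints: 0x1040 + 23 = 0x1057
```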

If vIOMMU is set to Intel, the mysterious device disappears and CPU usage goes back to normal, but none of the virtualization-based security features can be enabled. Going into Device Security -> Core Isolation and enabling Memory Integrity prompts for a reboot, but after the reboot Memory Integrity is back in the off state. All of the features listed below report as disabled:

Feature | Scope | Expected state | Met | Severity
Hypervisor Code Integrity | OS | Hypervisor Code Integrity: enabled | no | High
Virtual Secure Mode (VSM) | OS | Virtual Secure Mode: available | no | High
Input–output Memory Management Unit (IOMMU) is in use | OS | Input–output Memory Management Unit: in use | no | Moderate
System Management Mode (SMM) Protections | OS | System Management Mode Protections: available | no | High
Credential Guard | OS | Credential Guard: enabled | no | Moderate
Secure Kernel | OS | Secure Kernel: running | no | High
Memory Access Protection | OS | Memory Access Protection: enabled | no | Moderate
Mode-based execution control (MBEC) | OS | Mode-based execution control: available | no | Low
Memory Overwrite Request Control | OS | Memory Overwrite Request Control: enabled | no | Low
Hypervisor Code Integrity (Strict Mode) | OS | Hypervisor Code Integrity (Strict Mode): enabled | no | Low

I'm guessing the high CPU usage might be due to the missing driver; I don't think nested virtualization is the problem here, but perhaps I'm wrong. On ESXi these security features work just fine without much performance impact. This and Veeam are currently the only issues holding us back from ditching VMware.

Any ideas on how to solve this, or on how to get it working with the vIOMMU Intel option?

The hardware in question is a Dell PowerEdge R730xd with an Intel Xeon E5-2680 v4, running the latest Proxmox. ESXi is running on a Dell PowerEdge R720xd with an Intel Xeon E5-2670, and the production servers run ESXi on Dell PowerEdge R7515s with AMD EPYC 7313P CPUs. Perhaps I might have more luck on the production servers, but we're not ready to commit until we get these issues resolved.
 
Did you test with vIOMMU -> VirtIO?
For Intel (AMD compatible) you need to add the kernel parameters on the host.
 
Yes, with the VirtIO option CPU usage spikes and that unknown PCI device shows up in Device Manager. What kernel parameters do I need to add with the Intel option, and how would you do that for Windows guests?
 

from: https://forum.proxmox.com/threads/where-to-add-intel_iommu-on-and-iommu-pt.131592/post-578296

Enabling IOMMU

  • Access the Proxmox VE console via an external monitor or through the Shell in the web management interface
  • Type and enter: nano /etc/default/grub
  • Add intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT="quiet" (see the screenshot in the linked post)
  • Write out the settings and exit
  • Run the command update-grub to finalize the changes
  • Reboot the host
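The edit in the steps above can be dry-run on a scratch copy before touching the real file. A minimal sketch, assuming the stock GRUB_CMDLINE_LINUX_DEFAULT="quiet" line of a default install (/tmp/grub-default is just a throwaway path; iommu=pt is an optional companion flag):

```shell
# Start from the stock default line (scratch file, not the real config)
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"' > /tmp/grub-default

# Append intel_iommu=on (and optionally iommu=pt) to the kernel command line
sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="quiet"/GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"/' /tmp/grub-default

cat /tmp/grub-default
# prints: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
```

Once the same edit is applied to the real /etc/default/grub, run update-grub and reboot for it to take effect.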
 

These instructions are for the PVE host, and I have that set up already. Shouldn't this be set up in the Windows guest instead (and how?), since VBS is basically nested Hyper-V? You would need to expose an IOMMU from the hypervisor to the Windows guest.
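For reference, on the host side the virtual IOMMU is a sub-option of the machine type in the VM config (available since PVE 8.1; the VM ID and exact values below are illustrative, so treat this as a sketch rather than a verified recipe):

```shell
# /etc/pve/qemu-server/<vmid>.conf -- q35 machine type is required for a vIOMMU
machine: q35,viommu=intel     # emulated Intel IOMMU; no extra guest driver needed
#machine: q35,viommu=virtio   # virtio-iommu; this is the device behind VEN_1AF4&DEV_1057

# or equivalently via the CLI:
# qm set <vmid> --machine q35,viommu=intel
```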
 
"The hypervisor is not protecting DMA because IOMMU is not present or not enabled in BIOS"

I get this message with vIOMMU set to VirtIO. My money is on that missing driver again.
 
Hello dorseymet,
on a test setup I see the same issue: vIOMMU set to VirtIO and the unknown device (VEN_1AF4:DEV_1057) shows up in the Windows Device Manager. In my case, however, I do not see the high CPU usage.
Did you have any success in resolving this issue, i.e. finding the appropriate driver? Or can we simply disable this device in Windows, and what happens then at the OS level?
 
Hello dorseymet,
Did you have any success in resolving this issue, i.e. finding the appropriate driver?
No.

Or can we simply disable this device in Windows, and what happens then at the OS level?
Nothing good. I had Windows servers stuck at boot with a spinning circle. But I'm pushing VBS with group policy; if you set it manually, my guess would be that Windows disables VBS.
 
Is there any news? I have the same issue on my system (Asus W680 ACE with an Intel Core i5-14600).

Thanks!
 
Nope. Unless Red Hat develops a driver for that mystery device, I don't see how this is going to move forward. Perhaps someone who has a support contract with Red Hat could press them for a solution.
 
@dorseymet I was led to your thread by Google while researching the same issue in my homelab. I did finally get VBS working (i.e. both Credential Guard and HVCI while using the Intel vIOMMU) through an unexpected side effect of running Microsoft's DG-Readiness PowerShell script. Let me know if you are still experiencing the issue in your OP and I will walk you through the process that got everything working for me.
 
@wbedard

I would be interested in your solution.

I'm currently trying to enable VBS in a Windows guest, and it's not working for me.
Secure Boot is enabled and working.
These are my CPU Flags:
Code:
cpu: host,flags=+pcid;+ssbd;+spec-ctrl;+pdpe1gb;+hv-passthrough;+hv-vpindex;+hv-synic;+hv-stimer;+hv-relaxed;+hypervisor

I enabled VBS through gpedit.msc with the following options:
- Turn On Virtualization Based Security: Enable
- Select Platform Security Level: Secure Boot and DMA Protection
- Virtualization Based Protection of Code Integrity: Enabled without lock
- Require UEFI Memory Attributes Table: Check
- Credential Guard Configuration: Enabled without lock
- Secure Launch Configuration: Enabled

Virtualization based security status in msinfo32 is showing as "enabled but not running".
 
Hi @richii,

No problem! As I mentioned, the breakthrough for me came when I went back to the tool/script Microsoft developed to help organizations adopt features like Device Guard and Credential Guard (the DG-Readiness Tool). Although my professional work is where I originally used this script, more recently I have relied on Group Policy alone to enable/configure VBS on Windows devices. However, while reviewing the log created by the DG-Readiness script, it was obvious that it sets a few extra registry keys that the VBS group policy doesn't (see the attached log from my run). After rebooting my VM (as directed by the script), sure enough, VBS was enabled (see the attached summary from msinfo32).

As far as how my setup compares to yours, I am using a Ryzen-based host CPU (Cezanne 5000 series, Zen 3) and am just passing cpu: host,flags=+aes. My VBS group policy is configured similarly to yours, as follows:
- Turn On Virtualization Based Security: Enable
- Select Platform Security Level: Secure Boot and DMA Protection
- Virtualization Based Protection of Code Integrity: Enabled without lock
- Require UEFI Memory Attributes Table: Check
- Credential Guard Configuration: Enabled without lock
- Secure Launch Configuration: Disabled

I wasn't able to get any additional VBS features working and, based on my research, that's just a limitation of the features available in the OVMF/UEFI firmware. Hopefully this gives you enough info to make some progress with your setup. I encourage you to read the README and help text associated with the DG-Readiness script for some important usage/workflow guidance. Of course, if you encounter any issues, feel free to post them here and I will get back to you. Good luck!
 

Attachments

  • DeviceGuardCheckLog.txt (210.7 KB)
  • msinfo32_summary.txt (1.9 KB)
