Good morning,
I'm trying to get VBS (Virtualization-Based Security) working in Windows guests. This feature works on our VMware vSphere hosts, but I'm running into some issues with Proxmox. I'm able to enable it in a Windows guest if I set the vIOMMU option to VirtIO. However, that leads to really high CPU usage and an unrecognized PCI device showing up in Device Manager, with no driver available for it. The hardware ID is PCI\VEN_1AF4&DEV_1057&SUBSYS_11001AF4&REV_01; the vendor ID comes up as Red Hat, but the device ID is a mystery.
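For reference, the vIOMMU setting ends up in the VM config (/etc/pve/qemu-server/<vmid>.conf) roughly like this; exact lines may differ on your setup, and I'm assuming q35, OVMF and the host CPU type here, since the vIOMMU option requires the q35 machine type and VBS wants UEFI with Secure Boot plus a vTPM:

bios: ovmf
cpu: host
machine: q35,viommu=virtio

Switching the GUI option to Intel just changes that machine line to viommu=intel.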
If vIOMMU is set to Intel, the mysterious device disappears and CPU usage goes back to normal, but none of the virtualization-based security features can be enabled. Going into Device Security -> Core Isolation and enabling Memory Integrity prompts for a reboot, but after the reboot Memory Integrity is back in the off state. All of the features listed below report as disabled:
Hypervisor Code Integrity | OS | Hypervisor Code Integrity: enabled | no | High |
Virtual Secure Mode (VSM) | OS | Virtual Secure Mode: available | no | High |
Input–output Memory Management Unit (IOMMU) is in use | OS | Input–output Memory Management Unit: in use | no | Moderate |
System Management Mode (SMM) Protections | OS | System Management Mode Protections: available | no | High |
Credential Guard | OS | Credential Guard: enabled | no | Moderate |
Secure Kernel | OS | Secure Kernel: running | no | High |
Memory Access Protection | OS | Memory Access Protection: enabled | no | Moderate |
Mode based execution control (MBEC) | OS | Mode-based execution control: available | no | Low |
Memory Overwrite Request Control | OS | Memory Overwrite Request Control: enabled | no | Low |
Hypervisor Code Integrity (Strict Mode) | OS | Hypervisor Code Integrity (Strict Mode): enabled | no | Low |
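In case it points someone in the right direction, a quick sanity check (just a sketch, not a fix) is to dump the generated QEMU command line on the Proxmox host and see what the Intel option actually hands to QEMU:

qm showcmd <vmid> --pretty | grep -iE 'iommu'

With viommu=intel I'd expect an intel-iommu device to show up there, and with viommu=virtio a virtio-iommu device; if the Intel one were missing, that would at least explain Windows refusing to enable the features above.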
I'm guessing the high CPU usage might be due to the missing driver. I don't think nested virtualization is the problem here, but perhaps I'm wrong. On ESXi these security features work just fine without much performance impact. This and Veeam are currently the only issues holding us back from ditching VMware.
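To rule out the nested virtualization angle, these are the host-side checks I'd look at (Intel host assumed, hence kvm_intel; <vmid> is a placeholder):

# on the Proxmox host: Y (or 1 on older kernels) means nested virtualization is enabled
cat /sys/module/kvm_intel/parameters/nested

# the guest needs a CPU type that passes VT-x through, e.g. cpu: host
qm config <vmid> | grep ^cpu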
Any ideas on how to solve this, or how to get it working with the vIOMMU set to Intel?
The hardware in question is a Dell PowerEdge R730xd with an Intel Xeon E5-2680 v4, running the latest Proxmox. ESXi runs on a Dell PowerEdge R720xd with an Intel Xeon E5-2670, and the production servers run ESXi on Dell PowerEdge R7515s with AMD EPYC 7313P CPUs. Perhaps I'd have more luck on the production servers, but we're not ready to commit until we get these issues resolved.