Intel vIOMMU on QEMU/KVM is not detected by Windows Server 2025 guest

regulad

New Member
Nov 23, 2025
Hello all. Recently, I've been trying to enable GPU-PV on nested Hyper-V VMs. My setup is as follows:

Proxmox VE 8.4.14 <- Hyper-V Host (Windows Server 2025) <- Hyper-V Guest (Windows 11)
AMD Radeon W5500 via PCIe Passthrough

The Windows Server VM has the Intel vIOMMU enabled on the q35 machine type, and the OS type is set to Windows. I checked the QEMU/KVM command dispatched by PVE and confirmed that it contained the vIOMMU device and the Hyper-V enlightenments that the Windows OS type provides.
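For reference, this is roughly what I'd expect `viommu=intel` to add to the generated command line. This is a sketch from my understanding of QEMU's requirements, not a verbatim copy of PVE's output, and exact arguments may differ by version:

```
# The machine line needs the split irqchip, since intel-iommu with
# interrupt remapping (intremap=on) requires kernel-irqchip=split:
-machine type=pc-q35-9.2+pve1,kernel-irqchip=split
# The emulated Intel IOMMU device itself:
-device intel-iommu,intremap=on,caching-mode=on
```

Both pieces were present in the command PVE generated for my VM.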

However, when checking Device Manager and Windows' DMAR reporting with a few PowerShell commands, I noticed that the Windows Server VM was not detecting the vIOMMU device at all. This prevented it from exposing GPU resources to its nested VMs via GPU-PV.
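For anyone who wants to reproduce the check, these are the kinds of PowerShell probes I mean (a sketch; cmdlet availability varies by Windows SKU and role):

```powershell
# Win32_DeviceGuard reports which security properties the OS sees.
# A value of 3 in AvailableSecurityProperties means DMA protection
# (i.e. an IOMMU) is available; on this guest it was absent.
(Get-CimInstance -Namespace root\Microsoft\Windows\DeviceGuard `
    -ClassName Win32_DeviceGuard).AvailableSecurityProperties

# On Windows Server with the Hyper-V role, list GPUs the host considers
# partitionable for GPU-PV; an empty result is consistent with the
# missing vIOMMU.
Get-VMHostPartitionableGpu
```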

I did some troubleshooting and isolated the problem to Windows' handling of the device: a Linux guest with kernel 6.8 was able to detect and use the Intel vIOMMU with no issues. I also attempted to use the virtio vIOMMU, but that didn't work either. (I'm not sure why; the virtio drivers for Windows appear to include DMA-via-vIOMMU support: https://github.com/virtio-win/kvm-g...b25c8534472373e1825c4f5/VirtIO/WDF/Dma.c#L407)
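For completeness, this is how I confirmed detection in the Linux guest, using standard kernel interfaces (to be run inside the guest, not on the PVE host):

```shell
# DMAR/IOMMU initialization messages appear in the boot log when the
# guest kernel finds the DMAR ACPI table and brings up the vIOMMU:
sudo dmesg | grep -i -e DMAR -e IOMMU

# Once the driver binds, the IOMMU is also exposed under sysfs:
ls /sys/class/iommu/
```

Both checks succeeded in the 6.8 guest on the same VM settings, which is why I believe the QEMU side is fine.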

After troubleshooting for a couple of days, I'm at my wits' end and not sure what else to try. If there is any information, logs, or configuration I can provide that would help diagnose this issue, please ask and I will provide it ASAP. Thank you.
 
For posterity, I'd like to note that the actual passthrough of the W5500 from the Proxmox host to the Windows Server VM is working totally fine. The CPU type of the VM is "host." I've attached the full configuration below.

Code:
agent: 1
bios: ovmf
boot: order=sata0
cores: 16
cpu: host,flags=+pdpe1gb;+aes
efidisk0: <redacted>:vm-110-disk-3,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:0c:00,pcie=1,x-vga=1
ide2: local:iso/virtio-win.iso,media=cdrom,size=709474K
machine: pc-q35-9.2+pve1,viommu=intel
memory: 16384
meta: creation-qemu=9.2.0,ctime=1756744264
name: winserver
net0: virtio=<redacted>,bridge=vmbr0
numa: 0
onboot: 1
ostype: win11
sata0: <redacted>:vm-110-disk-2,discard=on,size=200G,ssd=1
scsihw: virtio-scsi-single
 
Bumping this thread. I'm still experiencing the issue. I have a workflow that relies on GPU-PV in Hyper-V while maintaining minimal downtime, and nested virtualization is the best solution for me.