Hello all. Recently, I've been trying to enable GPU-PV on nested Hyper-V VMs. My setup is as follows:
Proxmox VE 8.4.14 <- Hyper-V Host (Windows Server 2025) <- Hyper-V Guest (Windows 11)
AMD Radeon W5500 via PCIe passthrough
The Windows Server VM has the Intel vIOMMU enabled on the q35 machine type, and the OS type is set to Windows. I checked the QEMU/KVM command line that PVE generates and confirmed that it contains the vIOMMU device as well as the Hyper-V enlightenments that the Windows OS type adds.
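For reference, the relevant part of the Windows Server VM's config in /etc/pve/qemu-server/<vmid>.conf looks roughly like this (a sketch rather than my full config; the GPU address is a placeholder):

# q35 machine with the Intel vIOMMU exposed to the guest
machine: q35,viommu=intel
# Windows OS type, which adds the Hyper-V enlightenments
ostype: win11
# passed-through Radeon W5500 (address is a placeholder)
hostpci0: <GPU PCI address>,pcie=1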
However, when checking Device Manager and querying Windows' DMAR/DMA-protection state with a few PowerShell commands, I noticed that the Windows Server VM wasn't able to detect the vIOMMU device at all. Because of this, it refuses to expose GPU resources to its nested VMs via GPU-PV.
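To be specific about the guest-side check, something along these lines is what I mean (a rough sketch, not necessarily my exact commands; the Device Guard class reports a 3 in AvailableSecurityProperties when DMA protection, i.e. a usable IOMMU, is detected):

# Device Guard readiness info; a "3" in AvailableSecurityProperties means DMA protection (IOMMU) was detected
(Get-CimInstance -Namespace root\Microsoft\Windows\DeviceGuard -ClassName Win32_DeviceGuard).AvailableSecurityProperties

# Lists the GPUs that the Hyper-V host is willing to partition for GPU-PV
Get-VMHostPartitionableGpu

In my case, nothing on the Windows Server VM indicates that DMA protection is available, which matches the vIOMMU not showing up in Device Manager.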
I did some troubleshooting and narrowed the problem down to Windows' handling of the device: a Linux guest with kernel 6.8 on the same setup was able to detect and use the Intel vIOMMU with no issues. I also tried the virtio vIOMMU instead, but that didn't work either. (I'm not sure why; the virtio drivers for Windows do appear to include DMA-remapping/vIOMMU support: https://github.com/virtio-win/kvm-g...b25c8534472373e1825c4f5/VirtIO/WDF/Dma.c#L407)
After troubleshooting for a couple of days, I'm at my wits' end and not sure what more I can try. If there is any useful information, logs, or configuration files I can provide that would help diagnose this issue, please ask and I will provide it ASAP. Thank you.