Hi everyone,
I am encountering issues with GPU passthrough in a Windows 11 VM.
My build:
- Ryzen 9 9950X (the integrated graphics is assigned to Proxmox)
- Gigabyte X870 Gaming WiFi 6
- NVIDIA RTX 5070 Ti
UEFI settings:
- Resizable BAR: disabled
- Above 4G decoding: disabled
- CSM: disabled
- IOMMU: enabled
GRUB config:
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset initcall_blacklist=sysfb_init pcie_acs_override=downstream"
For anyone suggesting that I am missing
amd_iommu=on iommu=pt
please keep in mind that these parameters have been deprecated and are no longer valid (but invalid parameters are ignored, so it doesn't matter).
/etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:2c05,10de:22e9 disable_vga=1
/etc/modprobe.d/pve-blacklist.conf
blacklist nouveau
blacklist nvidia
blacklist nvidiafb
blacklist snd_hda_intel
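For reference, this is roughly how I verified the host side before starting the VM (a sketch from my setup; the 01:00 PCI address and the IDs above are specific to my card):
cat /proc/cmdline                          # confirm the GRUB parameters actually reached the kernel
lspci -nnk -s 01:00                        # both GPU functions should show "Kernel driver in use: vfio-pci"
find /sys/kernel/iommu_groups/ -type l     # list IOMMU groups; the GPU should sit in its own group
Everything there looked normal to me, as far as I can tell.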
VM config:
args: -cpu host,-hypervisor,kvm=off, -smbios type=0,vendor="American Megatrends International",version=F5,date="03/12/2025"
bios: ovmf
boot: order=scsi0;net0
cores: 16
cpu: host,hidden=1
efidisk0: vms:vm-110-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
hostpci0: 0000:01:00,pcie=1,x-vga=1
machine: pc-q35-9.2+pve1
memory: 32768
meta: creation-qemu=9.2.0,ctime=1745758477
name: Win11-gaming
net0: virtio=BC:24:11:B93:58,bridge=vmbr0,firewall=1
ostype: win11
scsi0: vms:vm-110-disk-1,iothread=1,size=150G
scsihw: virtio-scsi-single
smbios1: uuid=67dbbf21-678b-45dd-a5d4-94f1d3ac6487
sockets: 1
tpmstate0: vms:vm-110-disk-2,size=4M,version=v2.0
vga: virtio
vmgenid: c9eef372-e262-4aa4-83bf-b78268cda3a8
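In case the args line is relevant: this is roughly how I inspect the full QEMU command that Proxmox actually generates (a sketch; 110 is my VMID):
qm showcmd 110 --pretty                    # prints the generated QEMU invocation, including the -cpu/-smbios overrides from args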
Basically, I set up IOMMU correctly and I can pass the GPU to the VM. Then I configured the basic stuff (RDP, VirtIO drivers for the NIC, ...).
The issues arise when I install the NVIDIA drivers: at first the GPU seems to be recognized correctly (no Code 43), but when I reboot, the VM gets stuck on the loading screen (see image) and sometimes Windows goes into recovery.
Further tests revealed that the issues start as soon as the drivers finish installing. I discovered this because instead of connecting through RDP I hooked the VM up to a physical monitor over HDMI: there I could (sometimes) get video output until the drivers finished installing, after which I could only see a black screen.
I already tried multiple "solutions" which did not work: changing the GRUB config, recreating the VM, and removing the drivers with DDU and then reinstalling them (even using NVCleanstall to check the MSI flag), but still no luck.
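On the host side I also checked, roughly like this (a sketch; the PCI address is from my setup), that the card at least exposes MSI/MSI-X capabilities:
lspci -vvs 01:00.0 | grep -i msi           # should list the MSI/MSI-X capability lines for the GPU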
No errors in dmesg either; the only weird thing there (or at least I think it is) is that whenever I start the VM, it writes:
vfio-pci 0000:01:00.0: Enabling HDA controller
vfio-pci 0000:01:00.0: Enabling HDA controller
I find it weird that it is enabling it twice.
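If it helps, this is roughly how I watch the host logs while the VM boots and the driver installs (a sketch):
dmesg --follow | grep -i -e vfio -e 01:00  # follow kernel messages related to the passed-through device
Apart from the duplicated line above, nothing else stands out.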
Moreover, right after installing the driver (even before rebooting), Windows in general becomes highly unstable, taking a long time to load apps and not registering clicks.
Plus, if I disconnect from RDP after the driver is installed and try to reconnect, it prompts me for credentials but then fails to connect after a minute of loading.
This is why I think this isn't inherently a GPU passthrough problem, but rather a driver/compatibility one.
I still haven't tried passing through another card; however, I have a 2060 Super lying around from another computer that I could install in my current build to see if I can get that to work, but first I wanted to rule out every possible configuration problem.
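If I do end up swapping in the 2060 Super, my understanding (a sketch, not tested yet) is that I would mainly need to point vfio at the new device IDs and rebuild the initramfs:
lspci -nn | grep -i nvidia                 # get the vendor:device IDs of the 2060 Super
# then update the ids= line in /etc/modprobe.d/vfio.conf accordingly
update-initramfs -u -k all
and then reboot the host before re-adding the hostpci0 entry.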
Can someone help me?