GPU Passthrough success but monitor unable to detect GPU

Hello,

I successfully completed the GPU passthrough guide, set up Windows 10, and can connect to it remotely without issue. However, when I connect my Nvidia Quadro K5200 to three monitors, none of them detect the card. Could anyone point me in the right direction for getting the monitors to detect the GPU?

My motherboard is a Gigabyte Z390 Pro WiFi and the CPU is an Intel i7-8086K.

Extracted from the configuration file:

agent: 1
args: -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=NV43FIX,kvm=off'
bios: ovmf
boot: cdn
bootdisk: scsi0
cores: 4
cpu: host,hidden=1,flags=+pcid
efidisk0: local-zfs:vm-100-disk-1,size=128K
hostpci0: 01:00,pcie=1
ide2: none,media=cdrom
machine: q35
memory: 15000
name: Win10A
net0: e1000=B6:63:B0:AA:86:BA,bridge=vmbr0,firewall=1
numa: 1
ostype: win10
scsi0: local-zfs:vm-100-disk-0,cache=writeback,iothread=1,replicate=0,size=250G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=9ea625cf-4c9a-4285-a8e2-da54157cc0df
sockets: 1
usb0: host=1-13.1
usb1: host=1-13.4
usb2: host=2-9,usb3=1
usb3: host=2-10,usb3=1
vga: none
vmgenid: e2611f78-0182-4b20-9cec-ec9eafed2fd0
 
Does the device show up in Device Manager in the guest VM? Have you installed the correct NVIDIA driver?
 
The guide you posted seems fine after a quick look. If you go to the details tab for your GPU in Device Manager, does it show any errors?

Also, does the card work the way you want if you boot Windows bare-metal?
 
I checked Device Manager and indeed there is an error for the GPU; the error code is 43. The GPU worked fine when I ran Windows 10 bare-metal. Do you have any suggestions as to which files I need to modify to get this working?
 
Despite what many people claim online, Error 43 is just a generic error meaning the driver doesn't know what went wrong. There are many different things that could have gone wrong, and NVIDIA's driver isn't very verbose.

As a first step, try reinstalling the driver, maybe also try slightly older versions. Also, play around with the "PCIe", "Primary GPU" and "ROM BAR" options on the hostpci device in the GUI.

You can also try dumping your GPU ROM in a known good/clean state and passing it as a romfile= parameter in your <vmid>.conf; see here for more on that and some general troubleshooting tips.
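
Roughly, dumping the ROM on the host and wiring it up looks like this (a sketch only; adjust the PCI address 0000:01:00.0, the VM ID 100, and the filename for your card, and do it while the card is not in use by a VM):

cd /sys/bus/pci/devices/0000:01:00.0/
echo 1 > rom                        # make the ROM readable
cat rom > /usr/share/kvm/k5200.rom  # filename is just an example
echo 0 > rom                        # lock it again

# then in /etc/pve/qemu-server/100.conf:
hostpci0: 01:00,pcie=1,romfile=k5200.rom

Proxmox looks for the romfile relative to /usr/share/kvm/.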
 
I fiddled with the GPU settings and produced nil results, so I purchased an AMD Sapphire Pulse RX 580 and installed it in the first slot on the motherboard. I created a fresh VM with a Windows 10 installation, added the AMD GPU as a PCI device (All Functions and PCI-Express ticked), and set the display to none. I booted the VM and the monitor picked up the signal successfully. I had read somewhere on the forum that two different GPUs are required for this GPU passthrough setup to be functional, which I seem to have ignored.
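
For reference, those GUI options end up as lines like these in /etc/pve/qemu-server/<vmid>.conf (the 01:00 address is only my guess for a card in the first slot; check yours with lspci):

hostpci0: 01:00,pcie=1
vga: none

"All Functions" is why the address has no .0 function suffix, and "PCI-Express" is the pcie=1 flag.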

Secondly, I created another VM with Windows 10 installed and assigned the Nvidia Quadro K5200, and the second display picked up the signal, so that is another success. I installed the drivers for each GPU without problems and checked Device Manager, with no errors to be seen.

I am now at a crossroads figuring out why one VM can start but not both. The error indicates that the device or resource is busy, so I have copied the error output below for anyone who could help me pinpoint how to start two VMs at the same time. Let me know if I need to provide further information to assist the problem-solving process.

kvm: -device vfio-pci,host=0000:01:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,rombar=0,multifunction=on: vfio 0000:01:00.0: failed to open /dev/vfio/1: Device or resource busy
TASK ERROR: start failed: command '/usr/bin/kvm -id 100 -name GamingWin10 -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/100.pid -daemonize -smbios 'type=1,uuid=3eeb2071-d890-4d4d-adb7-63a18d6330a8' -drive 'if=pflash,unit=0,format=raw,readonly,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' -drive 'if=pflash,unit=1,format=raw,id=drive-efidisk0,file=/dev/zvol/rpool/data/vm-100-disk-1' -smp '4,sockets=1,cores=4,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga none -nographic -no-hpet -cpu 'host,+pcid,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,hv_synic,hv_stimer,hv_ipi,kvm=off' -m 15000 -object 'iothread,id=iothread-virtioscsi0' -device 'vmgenid,guid=837a0cac-fb90-4ffd-9046-6b65298c22ea' -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg -device 'nec-usb-xhci,id=xhci,bus=pci.1,addr=0x1b' -device 'usb-tablet,id=tablet,bus=ehci.0,port=1' -device 'vfio-pci,host=0000:01:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,rombar=0,multifunction=on' -device 'vfio-pci,host=0000:01:00.1,id=hostpci0.1,bus=ich9-pcie-port-1,addr=0x0.1' -device 'usb-host,bus=xhci.0,hostbus=1,hostport=13.1,id=usb0' -device 'usb-host,bus=xhci.0,hostbus=1,hostport=13.4,id=usb1' -device 'usb-host,bus=xhci.0,hostbus=1,hostport=13.3,id=usb2' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:8b8d69156ff' -drive 'if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -device 'virtio-scsi-pci,id=virtioscsi0,bus=pci.3,addr=0x1,iothread=iothread-virtioscsi0' -drive 'file=/dev/zvol/rpool/data/vm-100-disk-0,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,rotation_rate=1,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown' -device 'e1000,mac=22:77:4C:D9:9E:49,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -rtc 'driftfix=slew,base=localtime' -machine 'type=q35+pve1' -global 'kvm-pit.lost_tick_policy=discard'' failed: exit code 1
 
This is not something you can create or influence; it depends on how the mainboard splits the PCIe slots into IOMMU groups.... (sometimes there is a BIOS update or setting that helps)
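
You can check how the slots are grouped on the host; for both VMs to start at the same time, the two GPUs (and their audio functions) must end up in separate groups, with nothing else sharing them:

find /sys/kernel/iommu_groups/ -type l

Each line of output shows a PCI device and the IOMMU group it belongs to.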
 
Some time ago I stumbled over a guide showing group separation for IOMMU, but I didn't manage to make it work: https://www.reddit.com/r/homelab/comments/b5xpua/the_ultimate_beginners_guide_to_gpu_passthrough/

According to that guide the GRUB line would look something like this:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset video=vesafb:eek:ff,efifb:eek:ff"

As a solution to my NVIDIA GPU passthrough problem I installed Arch on a second SSD in my machine, so the NVIDIA proprietary drivers run on the hardware via a second boot :-| And as my main PVE disk is encrypted, the proprietary drivers installed on the second-boot distro can't read any of my main disk's data :)
 
Hi Mike, I've looked at the linked guide and modified the GRUB file with nil success. It could be the CPU that prevents both VMs from starting, or something else; I will do some further thinking and online searching.
 
That's because it was written wrong in the tutorial. You should type this:

nano /etc/default/grub

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset video=vesafb:off video=efifb:off

I did the same as you and had to dig to find out why my VM was not working, and why it sometimes wasn't booting when I enabled the option.

I changed the code in /etc/default/grub from

video=vesafb:off,efifb:off

to

video=vesafb:off video=efifb:off

I am going to edit the Reddit post, as I only stumbled upon this myself after hours of testing and reinstalling. It could put off people who aren't very familiar with Proxmox or the server space and leave them not wanting to use a free alternative to Unraid.
 
