GPU Passthrough - Odd Behavior

Apr 17, 2023
Hello folks. I'm stumped, and this feels "wave a rubber chicken at your PC" odd.

I recently changed from VMware ESXi to Proxmox. I had GPU passthrough working on VMware, and after following the Ultimate Beginner's Guide to Proxmox GPU Passthrough, I do have GPU passthrough working with Proxmox... with some weirdness!

To get GPU passthrough to work, I have to configure my VM with the Display set to Default (so I can use the noVNC console) and the "Primary GPU" checkbox unchecked in the PCIe passthrough configuration. Then I power on the VM, switch to the console in the Proxmox WebUI, and let the VM boot. As soon as the VM gets past the UEFI POST and hands off to Windows, the display connected to the GPU switches from the Proxmox console to Windows. If I don't do that, Windows boots fine, but the only access to the VM is via RDP.
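For anyone trying to reproduce this, here's roughly what that maps to in the VM config file. This is a sketch with illustrative values: VMID 100 and PCI address 01:00.0 are placeholders, and as far as I can tell the "Primary GPU" checkbox corresponds to the x-vga flag on the hostpci line:

```
# /etc/pve/qemu-server/100.conf (excerpt, illustrative values)
bios: ovmf
machine: q35
vga: std                        # Display left at Default/std so noVNC works
hostpci0: 0000:01:00.0,pcie=1   # GPU passed through; x-vga=1 ("Primary GPU") NOT set
```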

I've tried changing the host server's BIOS to use the onboard GPU instead of the PCIe GPU, but in that configuration, the Radeon GPU isn't available for passthrough. Ideally I'd get the Proxmox console via the onboard VGA output and my Windows VM via the GPU, but accessing Proxmox only via SSH or the WebUI would be just fine. The goal is GPU passthrough that works reliably without manual intervention.

But honestly, I'll accept the manual intervention for the GPU passthrough VM in exchange for the nice boost in storage performance I'm getting from switching from hardware RAID to HBA mode + ZFS. (The TS440's controller performs much better with the HBA firmware than with the RAID firmware.)

Hardware:
Lenovo ThinkServer TS440
GPU: AMD Radeon R7 200 series.
Processor: Intel(R) Xeon(R) CPU E3-1245 v3

Yes, old hardware, but it's hanging in there well for what I use it for. (Lots of storage for Plex, and a Windows VM with GPU passthrough that can play Rocket League.)

Thank you all!
 
Hi Andrew,

First of all, in addition to all the tutorials floating around online, we also have a section dedicated to PCI passthrough in the docs [1], as well as a wiki article on PCI passthrough [2]. These should get a bigger overhaul in the next couple of days. There might be additional info for you there.

To get GPU passthrough to work, I have to configure my VM with the Display set to Default (so I can use the noVNC console) and the "Primary GPU" checkbox unchecked in the PCIe passthrough configuration. Then I power on the VM, switch to the console in the Proxmox WebUI, and let the VM boot. As soon as the VM gets past the UEFI POST and hands off to Windows, the display connected to the GPU switches from the Proxmox console to Windows. If I don't do that, Windows boots fine, but the only access to the VM is via RDP.
This is to be expected if you are using the GPU for the host as well as for your VM: once a PCI device is passed through, it is no longer available to the host. Are you unable to access the web interface after passing the GPU through? You should still be able to...
I've tried changing the host server's BIOS to use the onboard GPU instead of the PCIe GPU, but in that configuration, the Radeon GPU isn't available for passthrough.
When you say it isn't available, what do you mean? It's no longer a selectable option in the web interface? Is the GPU listed when you run lspci?
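For example, on the host something like the following should list both GPUs. The device strings below are illustrative (mocked with a here-doc so the filter itself is runnable anywhere); your actual models and IDs will differ:

```shell
# On the real host you would run:
#   lspci -nn | grep -iE 'vga|display'
# Here the lspci output is mocked for illustration; only display-class
# devices should survive the filter.
cat <<'EOF' | grep -iE 'vga|display'
00:02.0 VGA compatible controller [0300]: Intel Corporation Xeon E3-1200 v3 Processor Integrated Graphics Controller [8086:041a]
00:1f.3 SMBus [0c05]: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller [8086:8c22]
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Oland PRO [Radeon R7 240/340] [1002:6613]
EOF
```

If the Radeon line disappears entirely after switching the BIOS to the onboard GPU, the firmware may be powering the slot down rather than just deprioritizing it.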

I am guessing your setup looks something like this: you have a VGA cable from the VGA connector on your motherboard I/O (which connects to your onboard GPU) to a monitor, and an HDMI or DisplayPort cable from your discrete GPU to the same monitor. What happens when you change inputs on the monitor? You don't get Windows on one input and the Proxmox TTY on the other? If not, have you checked whether the onboard GPU and the discrete GPU are in the same IOMMU group (see [2])?
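The group layout can be read straight out of sysfs. A sketch of the listing logic (on the Proxmox host, point root at /sys/kernel/iommu_groups; here a throwaway mock tree stands in so the loop is runnable anywhere):

```shell
# On the real host: root=/sys/kernel/iommu_groups
# Mock tree with two devices sharing group 1, for illustration only.
root=$(mktemp -d)
mkdir -p "$root/1/devices/0000:00:02.0" "$root/1/devices/0000:01:00.0"

# Print "IOMMU group <n>: <pci-address>" for every device.
for dev in "$root"/*/devices/*; do
    group=$(basename "$(dirname "$(dirname "$dev")")")
    echo "IOMMU group $group: $(basename "$dev")"
done
```

If both GPUs print the same group number, they cannot be split between host and guest without moving the card to another slot or using an ACS override.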

Also, could you please share the config of your VM? You can get it by running qm config <vm-id>.

Best of luck :)

[1]: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_pci_passthrough
[2]: https://pve.proxmox.com/wiki/PCI_Passthrough
 