hostpci pcie mode question

keeka

Well-Known Member
Dec 8, 2019
I have an ASRock H170 motherboard and an RX 580 8GB GPU. I have tried several times over the past year or two to get GPU passthrough working reliably with this setup, using QEMU directly and libvirt on Ubuntu 16.04 and 18.04.
Whilst I had some success, I usually had to reboot the host when restarting a guest, and even that behaviour was not consistent.

I recently installed Proxmox 6 and have been having more success.

The setup is as follows:

Host:
I disable the iGPU, as the motherboard's boot behaviour is weird when multi-GPU is enabled. Host CSM is turned off.
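
For completeness, the host-side VFIO bits are the standard ones, roughly along these lines (illustrative; 1002:67df / 1002:aaf0 are the usual RX 580 GPU/HDMI-audio IDs, verify yours with lspci -nn):

# /etc/default/grub -- enable the IOMMU, then run update-grub and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf -- bind GPU + audio function to vfio-pci, then update-initramfs -u
options vfio-pci ids=1002:67df,1002:aaf0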

I have three VMs: Manjaro/Arch (100), Ubuntu 18.04 (101), and Windows 10 (102).
I use a vBIOS ROM dump of my actual GPU (dump commands below).
The GPU and the sole USB controller are passed through to each guest.
Both the Ubuntu and Win10 VMs boot pre-existing, bare-metal installations, each on separate storage (one an NVMe drive, the other an M.2 SSD). This works without issue for Ubuntu, and also for Windows after loading the VirtIO network driver.
I have run the Unigine benchmark on both these installations and the results are almost the same bare metal vs. virtualised.
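
For reference, the usual way to get such a ROM dump on the host, sketched here with the GPU at 0000:01:00.0 as above (Proxmox resolves relative romfile paths against /usr/share/kvm/):

# allow reading the ROM, copy it out, then lock it again
echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom
cat /sys/bus/pci/devices/0000:01:00.0/rom > /usr/share/kvm/vbios.bin
echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom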

The Manjaro install uses virtual storage (the default LVM-backed raw image).

I can successfully stop and restart guests in various sequences without a host reboot, with one exception: if I ever boot the Manjaro install, I cannot then successfully boot the Windows VM. It appears to run, but the display does not initialise. I assume this is some incarnation of the AMD reset bug, but I am only seeing it when starting Win10 after running the Manjaro VM. This may be coincidence, though, as I have not repeated it many times.

The pertinent VM similarities/differences are, I think:

Manjaro
hostpci0: 01:00.0;01:00.1,romfile=vbios.bin
hostpci1: 00:14 (USB3 controller)

Ubuntu18.04
hostpci0: 01:00.0;01:00.1,pcie=1,romfile=vbios.bin
hostpci1: 00:14

Win10
hostpci0: 01:00.0;01:00.1,pcie=1,romfile=vbios.bin
hostpci1: 00:14
hostpci2: 05:00.0 (NVME)

The standout difference is that the Manjaro VM does not use the pcie hostpci option. However, I cannot get it to display successfully when I do set pcie for that VM.

How might I get the Manjaro guest to run successfully with pcie=1? I think this would circumvent the reset bug for me.
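
For concreteness: my understanding is that pcie=1 is only honoured with the q35 machine type, so I expect the Manjaro config (/etc/pve/qemu-server/100.conf) would need something like the following. The machine and bios lines are my assumption, and switching away from the default i440fx/SeaBIOS may itself be what breaks the display:

bios: ovmf
machine: q35
hostpci0: 01:00.0;01:00.1,pcie=1,romfile=vbios.bin
hostpci1: 00:14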

Second, a more minor issue: whilst running, say, the GPU benchmark in the Ubuntu VM, the mouse becomes laggy. This does not happen in the Windows VM.

Once I have things running nicely, I aim to rearrange storage and reinstall the guest OSes on virtual storage.

Any suggestions gratefully received!

Thanks.
 
However, I cannot get it to display successfully when I do set pcie for that VM.
Any logs for that case? (i.e. connect via SSH and check syslog/journal/dmesg?)
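
For example, something along these lines (illustrative commands, adjust names/addresses):

# inside the guest, over SSH, while the display is blank
journalctl -b --no-pager | grep -iE 'amdgpu|drm'
# on the host
dmesg | grep -i vfio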
 
I see a similar backtrace to the one posted in the Reddit thread linked above (beginning at 93.694 in the Xorg log excerpt).

If I boot the VM without the hostpci pcie=1 option, there is no such error and I get to X on the passed-through display, but then I hit the subsequent GPU reset issues, and the guest logs amdgpu errors about missing PCIe lanes. Sorry, I don't have access to the VM right now to quote those exactly.
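
For reference, the guest-side link state those errors refer to should show up with something like this (assuming the GPU appears at 01:00.0 inside the guest; check with plain lspci first):

sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'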
 
