I have an ASRock H170 motherboard and an RX 580 8GB GPU. I have tried several times over the past year or two to get GPU passthrough working reliably with this setup, using QEMU directly and via libvirt, on Ubuntu 16.04 and 18.04.
Whilst I had some success, I usually had to reboot the host when restarting a guest, though the behaviour was not consistent.
I recently installed Proxmox 6 and have been having more success.
Setup is as follows:
Host:
I disable the iGPU, as the motherboard's boot behaviour is weird when multi-GPU is enabled. Host CSM is turned off.
I have three VMs: Manjaro/Arch (100), Ubuntu 18.04 (101) and Windows 10 (102).
I use a vBIOS ROM dump of my actual GPU (obtained roughly as sketched after this list).
The GPU and the sole USB controller are passed through to each guest.
Both the Ubuntu and Win10 VMs boot pre-existing bare-metal installations, each on separate storage (one an NVMe drive and the other an M.2 SSD). This works without issue for Ubuntu, and for Windows after loading the VirtIO network driver.
I have run the Unigine benchmark on both these installations and the results are almost identical, bare metal vs. virtualised.
The Manjaro install uses virtual storage (the default LVM-backed raw image).
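For reference, this is roughly how I dumped the ROM on the host (a sketch: 0000:01:00.0 is my GPU's PCI address and will differ on other systems, the card must not be in active use when reading, and I'm assuming Proxmox's convention of resolving romfile paths under /usr/share/kvm/):

    # Dump the GPU vBIOS via sysfs (run as root on the host)
    echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom    # enable ROM reads
    cat /sys/bus/pci/devices/0000:01:00.0/rom > /usr/share/kvm/vbios.bin
    echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom    # disable again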
I can successfully stop and restart guests in various sequences without a host reboot, with one exception: if I ever boot the Manjaro install, I cannot then successfully boot the Windows VM. It appears to run, but the display does not initialise. I assume this is some incarnation of the AMD reset bug, but I am only seeing it when starting Win10 after running the Manjaro VM. This may be coincidence, though, as I have not repeated it many times.
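A workaround I have seen suggested for this state is removing the device and rescanning the bus rather than rebooting the host (a sketch, assuming 0000:01:00.0/0000:01:00.1 are the GPU and its HDMI audio function, with all guests using the card shut down; no guarantee it recovers a wedged Polaris card):

    # Attempt to recover the GPU without rebooting the host (run as root)
    echo 1 > /sys/bus/pci/devices/0000:01:00.1/remove   # HDMI audio function
    echo 1 > /sys/bus/pci/devices/0000:01:00.0/remove   # GPU
    echo 1 > /sys/bus/pci/rescan                        # re-enumerate the bus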
The pertinent VM similarities/differences are, I think:
Manjaro
hostpci0: 01:00.0;01:00.1,romfile=vbios.bin
hostpci1: 00:14 (USB3 controller)
Ubuntu 18.04
hostpci0: 01:00.0;01:00.1,pcie=1,romfile=vbios.bin
hostpci1: 00:14
Win10
hostpci0: 01:00.0;01:00.1,pcie=1,romfile=vbios.bin
hostpci1: 00:14
hostpci2: 05:00.0 (NVME)
The standout difference is that the Manjaro VM does not use the pcie hostpci option. However, I cannot get it to display successfully when I do set pcie=1 for that VM.
How might I get the Manjaro guest to run successfully with pcie=1? I think this would circumvent the reset bug for me.
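My understanding is that pcie=1 is only valid with the q35 machine type (and is usually paired with OVMF rather than SeaBIOS), so I suspect the Manjaro VM's config needs something like the following. This is a sketch of /etc/pve/qemu-server/100.conf fragments, not a tested config, and switching machine type or firmware may require changes inside the guest (e.g. an EFI boot entry):

    machine: q35
    bios: ovmf
    hostpci0: 01:00.0;01:00.1,pcie=1,romfile=vbios.bin
    hostpci1: 00:14,pcie=1

As far as I know, bios: ovmf also wants an EFI vars disk (efidisk0) added via the GUI or qm. Does that sound right, or is there more to it?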
Second, more minor issue: whilst running, say, the GPU benchmark in the Ubuntu VM, the mouse becomes laggy. This does not happen in the Windows VM.
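A mitigation I have seen suggested for this kind of lag is pinning the VM's QEMU process to specific host cores so the benchmark load doesn't starve the emulator threads; a rough sketch (cores 2-5 are just an example for my host, and the PID file path is where Proxmox keeps the running VM's PID):

    # Pin VM 101's QEMU process to host cores 2-5 (example values)
    taskset -cp 2-5 $(cat /var/run/qemu-server/101.pid)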
Once I have things running nicely, I aim to rearrange storage and reinstall the guest OSes on virtual storage.
Any suggestions gratefully received!
Thanks.