[SOLVED] GPU passthrough issue, 6.1 & Radeon, Blank screen

Veeh

Dear Proxmox community,

I'm back with another GPU passthrough issue.

I have a setup with two Radeon R9 290 cards, Proxmox 6.1.3, and an Intel CPU.

Both graphics cards have their own IOMMU group:

Code:
[    0.896540] pci 0000:01:00.0: Adding to iommu group 1
[    0.896545] pci 0000:01:00.1: Adding to iommu group 1
[    0.896598] pci 0000:02:00.0: Adding to iommu group 16
[    0.896647] pci 0000:03:00.0: Adding to iommu group 17
[    0.896699] pci 0000:04:00.0: Adding to iommu group 18
[    0.896741] pci 0000:05:01.0: Adding to iommu group 19
[    0.896787] pci 0000:05:03.0: Adding to iommu group 20
[    0.896827] pci 0000:05:05.0: Adding to iommu group 21
[    0.896876] pci 0000:05:07.0: Adding to iommu group 22
[    0.896886] pci 0000:07:00.0: Adding to iommu group 20
[    0.896897] pci 0000:08:00.0: Adding to iommu group 21
[    0.896914] pci 0000:09:03.0: Adding to iommu group 21
[    0.896927] pci 0000:09:07.0: Adding to iommu group 21
[    0.896942] pci 0000:0a:00.0: Adding to iommu group 21
[    0.896956] pci 0000:0b:00.0: Adding to iommu group 21
[    0.896966] pci 0000:0c:00.0: Adding to iommu group 22
[    0.897029] pci 0000:0d:00.0: Adding to iommu group 23
[    0.897058] pci 0000:0d:00.1: Adding to iommu group 23
[    0.897103] pci 0000:0e:00.0: Adding to iommu group 24
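
For reference, the full grouping can be listed at any time with a generic sysfs loop like this (nothing here is specific to my box):
Code:
#!/bin/bash
# List every IOMMU group and the devices it contains
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done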

The graphics cards are 01:00 and 0d:00:
Code:
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Hawaii PRO [Radeon R9 290/390] [1002:67b1]
        Subsystem: PC Partner Limited / Sapphire Technology Hawaii PRO [Radeon R9 290/390] [174b:e283]
        Kernel driver in use: vfio-pci
        Kernel modules: radeon, amdgpu
01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Hawaii HDMI Audio [Radeon R9 290/290X / 390/390X] [1002:aac8]
        Subsystem: PC Partner Limited / Sapphire Technology Hawaii HDMI Audio [Radeon R9 290/290X / 390/390X] [174b:aac8]
        Kernel driver in use: vfio-pci
        Kernel modules: snd_hda_intel
0d:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Hawaii PRO [Radeon R9 290/390] [1002:67b1]
        Subsystem: PC Partner Limited / Sapphire Technology Hawaii PRO [Radeon R9 290/390] [174b:e283]
        Kernel driver in use: vfio-pci
        Kernel modules: radeon, amdgpu
0d:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Hawaii HDMI Audio [Radeon R9 290/290X / 390/390X] [1002:aac8]
        Subsystem: PC Partner Limited / Sapphire Technology Hawaii HDMI Audio [Radeon R9 290/290X / 390/390X] [174b:aac8]
        Kernel driver in use: vfio-pci
        Kernel modules: snd_hda_intel
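
The output above comes from lspci; something like this reproduces it per card:
Code:
lspci -nnk -s 01:00
lspci -nnk -s 0d:00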

Each card is assigned to its own VM.

VM 102
Code:
bios: ovmf
bootdisk: sata0
cores: 4
cpu: host
hostpci0: 01:00,pcie=1
hotplug: disk
lock: backup
machine: q35
memory: 8192
name: Win10-C
net0: e1000=ea:15:c4:f3:61:80,bridge=vmbr2
numa: 0
ostype: win10
sata0: M2:102/vm-102-disk-0.qcow2,cache=writethrough,size=300G
sata1: SAN:102/vm-102-disk-1.qcow2,size=500G,backup=no
scsihw: virtio-scsi-pci
smbios1: uuid=2af7a7f2-f166-4096-b0f9-b6aa1cb9576d
sockets: 1
startup: order=3
vga: virtio

VM 103
Code:
bios: ovmf
bootdisk: sata0
cores: 4
cpu: host
hostpci0: 0d:00,pcie=1
hotplug: disk
machine: q35
memory: 8192
name: Win10-D
net0: e1000=da:78:41:60:ea:74,bridge=vmbr3
numa: 0
ostype: win10
sata0: M2:103/vm-103-disk-0.qcow2,cache=writethrough,size=300G
sata1: SAN:103/vm-103-disk-1.qcow2,size=500G,backup=no
scsihw: virtio-scsi-pci
smbios1: uuid=6b6c753c-2cde-414d-ae0d-47ba63dda751
snaptime: 1587577147
sockets: 1
startup: order=1
usb0: host=3-9.4,usb3=1
vga: virtio
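
For reference, both confs are the contents of /etc/pve/qemu-server/<vmid>.conf and can also be printed with:
Code:
qm config 102
qm config 103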

Both VMs boot, and I was able to install the latest Radeon driver (Adrenalin 2020 Edition 20.2.2) on both of them.
I can reach them over RDP and start a game without issue.

Now I would like to plug a screen into the graphics cards and stop using RDP.
I was able to make this work with Proxmox 5, and I don't recall needing anything else back then.

Sadly, with the current setup, the screen turns on but stays blank.
I tried to tweak the VM conf a little, but the error 43 comes back.
I found this log when I start the VM:
Code:
[ 5162.639316] vfio-pci 0000:0d:00.0: vfio_ecap_init: hiding ecap 0x19@0x270
[ 5162.639326] vfio-pci 0000:0d:00.0: vfio_ecap_init: hiding ecap 0x1b@0x2d0

I'm a bit lost.
I would greatly appreciate your guidance if you have ever run into this issue.

Thanks

Veeh.
 
Hello,

I finally managed to get the VM to boot with the primary GPU option on.

This is the VM config that works:
Code:
bios: ovmf
bootdisk: sata0
cores: 4
cpu: host,hidden=1,flags=+pcid
efidisk0: M2:103/vm-103-disk-2.qcow2,size=128K
hostpci0: 0d:00.0,pcie=1,x-vga=1
hotplug: disk
machine: q35
memory: 8192
name: Win10-D
net0: e1000=da:78:41:60:ea:74,bridge=vmbr3
numa: 0
ostype: win10
parent: maj_driver
sata0: M2:103/vm-103-disk-0.qcow2,cache=writethrough,size=300G
sata1: SAN:103/vm-103-disk-1.qcow2,size=500G,backup=no
scsihw: virtio-scsi-pci
smbios1: uuid=6b6c753c-2cde-414d-ae0d-47ba63dda751
sockets: 1
usb0: host=3-9.4,usb3=1

So this time I don't have the VNC interface anymore. I can still RDP into my VM, but there is no progress on the real monitor.
It stays blank.

Do you know any other log that might be helpful to troubleshoot this issue?
Thanks.

Veeh
 
Config looks OK so far. Aside from that, I would personally pass through the whole card, not only the video function, i.e.
Code:
hostpci0: 0d:00,pcie=1,x-vga=1

Are you sure that your video card's BIOS supports EFI?
Also, does the Windows log say anything? (Event Viewer)
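
One way to check: dump the card's ROM via sysfs (while no driver is actively using the card) and feed it to rom-parser. A sketch using the second card's address from this thread; as far as I know, an EFI-capable ROM shows an additional type 3 (EFI) image in rom-parser's output:
Code:
cd /sys/bus/pci/devices/0000:0d:00.0/
echo 1 > rom             # make the ROM readable
cat rom > /tmp/vbios.bin
echo 0 > rom             # disable reading again
/tmp/rom-parser/rom-parser /tmp/vbios.bin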
 
Hello,

Yes, you're right. I posted that VM conf, but right after, I changed it to pass the whole card through.

I was able to make it work.
It might have been the addition of the efidisk (I also added the CPU flag).
I think my issue is really similar to what happened to scrapiron here: https://forum.proxmox.com/threads/proxmox-6-and-nvidia-gpu-pass-through-issue.68814/#post-308542


VM 102 works:
Code:
bios: ovmf
bootdisk: sata0
cores: 4
cpu: host,hidden=1,flags=+pcid
efidisk0: M2:102/vm-102-disk-2.qcow2,size=128K
hostpci0: 01:00,pcie=1
hotplug: disk
machine: q35
memory: 8192
name: Win10-C
net0: e1000=ea:15:c4:f3:61:80,bridge=vmbr2
numa: 0
ostype: win10
sata0: M2:102/vm-102-disk-0.qcow2,cache=writethrough,size=300G
sata1: SAN:102/vm-102-disk-1.qcow2,size=500G,backup=no
scsihw: virtio-scsi-pci
smbios1: uuid=2af7a7f2-f166-4096-b0f9-b6aa1cb9576d
sockets: 1
startup: order=3

On this VM, when I boot up, I have a display in both VNC and on the real monitor.
I configured Win10 to use only the real monitor, and now I have a black screen in VNC.
I don't know if that's the intended behaviour, but it's working. And it's not really a problem if VNC stays black; I don't plan to use it.

The second VM, VM 103:
I basically reproduced the same VM conf. I have a display in VNC but no output on the real monitor.

Since I'm using two identical graphics cards (same device ID), does this mean I need to put two lines in /etc/modprobe.d/vfio.conf?
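
For context, both cards report identical IDs in lspci (1002:67b1 for the VGA function, 1002:aac8 for the audio function), and as far as I understand, vfio-pci's ids= option matches by vendor:device ID, so a single line should already cover both cards:
Code:
# /etc/modprobe.d/vfio.conf (sketch)
options vfio-pci ids=1002:67b1,1002:aac8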

Both cards support an EFI BIOS (type 0):
Code:
root@pxmx:/tmp/rom-parser# ./rom-parser r9vbios2.bin 
Valid ROM signature found @0h, PCIR offset 244h
        PCIR: type 0 (x86 PC-AT), vendor: 1002, device: 67b1, class: 030000
        PCIR: revision 0, vendor revision: f2c
        Last image
I'll check the Windows Event Viewer.
 
OK, so everything is settled now... I guess I was trying to solve the issue on both VMs at the same time, and I mixed myself up in the process.

Both VMs are OK and I have a display on the physical monitor for both.

Starting from my initial conf (initial post):
I added additional options to GRUB:
Code:
GRUB_DISTRIBUTOR="Proxmox Virtual Environment"
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream,multifunction video=efifb:off"
GRUB_CMDLINE_LINUX=""
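
For completeness: changes to /etc/default/grub only take effect after regenerating the GRUB config and rebooting the host:
Code:
update-grub
reboot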

I added the EFI disk and the CPU flags on each VM:
Code:
cpu: host,hidden=1,flags=+pcid
efidisk0: M2:102/vm-102-disk-2.qcow2,size=128K
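
The EFI disk can be added from the GUI or the CLI. A sketch of the CLI variant, with M2 being my storage name (Proxmox picks the size of the EFI vars volume itself):
Code:
qm set 102 --efidisk0 M2:1,format=qcow2
qm set 103 --efidisk0 M2:1,format=qcow2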

I added driver blacklist entries to pve-blacklist.conf.
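
A sketch of what that can look like (radeon and amdgpu are the host kernel modules lspci reported for these cards); afterwards, run update-initramfs -u and reboot:
Code:
# /etc/modprobe.d/pve-blacklist.conf
blacklist radeon
blacklist amdgpu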

I set the VGA display to "default" and unselected "Primary GPU" (x-vga).

And that's it.
I have a display on both VNC (web interface) and the physical monitor for each VM.

Thanks dcsapak for your help.

Cheers
Veeh
 