Can I use one of two GPUs for passthrough and the other for VirGL?

karypid
Mar 7, 2021
Hello,

I have two Radeon GPUs on my system: one 6700XT and one 6800. I have been using both with passthrough for a dual-seat configuration.

I now want to be able to use VirGL with one of the two. As I understand it, in order to do this the host must load the amdgpu drivers.

I have tried changing my modprobe configuration:

Code:
# BOTH        - options vfio-pci ids=1002:ab28,1462:3982,1849:5203 disable_idle_d3=1 disable_vga=1
# 6700XT only - options vfio-pci ids=1002:ab28,1462:3982 disable_idle_d3=1 disable_vga=1

Of the two lines above, I previously used the "BOTH" line, which worked fine for passing both cards through to two separate VMs. I have now switched to the "6700XT only" line, but I can't get it to work properly.
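For reference, this is how I apply a change like this (a sketch; my options live in a file under /etc/modprobe.d/, and the vfio-pci options only take effect after the initramfs is rebuilt and the host rebooted):

```shell
# Rebuild the initramfs so the edited vfio-pci ids= list is picked up at boot
update-initramfs -u -k all
reboot

# After reboot, check which kernel driver actually claimed each GPU function
lspci -knn | grep -A 3 -i 'vga\|audio'
```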

Is what I want possible?
 
Sure, but it's not clear to me what is not working for you. Please show the output of lspci -knn for both GPUs, and make sure the libraries for VirGL are installed.

What does not work: the passthrough of the 6700XT, or using VirGL (which only supports 512 MB of graphics memory per VM)?
Hello leesteken,

First of all here is the output:
Code:
0a:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 21 [Radeon RX 6800/6800 XT / 6900 XT] [1002:73bf] (rev c3)
        Subsystem: ASRock Incorporation Navi 21 [Radeon RX 6800/6800 XT / 6900 XT] [1849:5203]
        Kernel driver in use: vfio-pci
        Kernel modules: amdgpu
0a:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 21/23 HDMI/DP Audio Controller [1002:ab28]
        Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Navi 21/23 HDMI/DP Audio Controller [1002:ab28]
        Kernel driver in use: vfio-pci
        Kernel modules: snd_hda_intel

0d:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 22 [Radeon RX 6700/6700 XT/6750 XT / 6800M/6850M XT] [1002:73df] (rev c5)
        Subsystem: Micro-Star International Co., Ltd. [MSI] Navi 22 [Radeon RX 6700/6700 XT/6750 XT / 6800M/6850M XT] [1462:3982]
        Kernel driver in use: vfio-pci
        Kernel modules: amdgpu
0d:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 21/23 HDMI/DP Audio Controller [1002:ab28]
        Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Navi 21/23 HDMI/DP Audio Controller [1002:ab28]
        Kernel driver in use: vfio-pci
        Kernel modules: snd_hda_intel

By removing the 1849:5203 part, I am trying to allow the 6800 to be used by the host. When I reboot the host, I expect only 1462:3982 (the 6700XT) to be isolated (and available for passthrough), while the 6800 should be available to the host.

On the host system I have:

Code:
root@pve:~# lsmod | grep amdgpu
root@pve:~# lsmod | grep drm
drm                   729088  0

When I configure a VM to use VirtIO-VirGL as the display adapter, attempts to start it fail with: no DRM render node detected (/dev/dri/renderD*), no GPU? - needed for 'virtio-gl' display. Indeed:

Code:
root@pve:~# ls /dev/dri
ls: cannot access '/dev/dri': No such file or directory

I guess the reason is that the host has not loaded the driver for the 6800, so no render node is created?
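To test that theory, a few quick checks (a sketch; file names under /etc/modprobe.d/ vary between setups) show whether the host has loaded amdgpu and whether anything is preventing it:

```shell
# Is the amdgpu module loaded on the host?
lsmod | grep '^amdgpu' || echo "amdgpu not loaded"

# Is anything blacklisting it in the modprobe configuration?
grep -r amdgpu /etc/modprobe.d/

# Does the host have a DRM render node for VirGL to use?
ls -l /dev/dri/ 2>/dev/null || echo "no /dev/dri yet"
```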

I do see the 6700XT (which still works, the VM with it starts fine and uses it) report the following:

Code:
root@pve:~# dmesg | grep vfio
[    6.704696] vfio_pci: add [1002:ab28[ffffffff:ffffffff]] class 0x000000/00000000
[    6.704707] vfio_pci: add [1462:3982[ffffffff:ffffffff]] class 0x000000/00000000
[   32.141808] vfio-pci 0000:0d:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=none
[   32.142248] vfio-pci 0000:0d:00.0: vgaarb: changed VGA decodes: olddecodes=none,decodes=io+mem:owns=none
[   32.142318] vfio-pci 0000:0d:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=none
[   36.602665] vfio-pci 0000:0d:00.0: enabling device (0002 -> 0003)
[   36.605029] vfio-pci 0000:0d:00.1: enabling device (0000 -> 0002)
 
I have made some progress:
  1. Running modprobe amdgpu got this to work. After loading the driver there is now a /dev/dri directory, and I am now able to start VMs with VirGL as the display.
  2. The reason the driver was not loaded was that I had blacklisted amdgpu.
I have removed the blacklist entry from /etc/modprobe.d/blacklist.conf and things now work.
 
