[SOLVED] Proxmox GPU passthrough not working

dreamworks

Member
Aug 16, 2023
Hey!

Specs:

ASRock DeskMeet B660
i3-12100F
32 GB RAM
Latest BIOS (just flashed it)
MSI RX 6500 XT graphics card

I have been spending hours watching various guides etc. on YouTube, but I can't get this darn thing to work.

I now have the graphics card showing up in the Windows 11 Device Manager, but the driver will not install; it has a yellow exclamation mark.

The GPU is also not showing in Task Manager under the Performance tab.

Any help or assistance would be appreciated. I'm not sure if I need to post my config.
 
I now have the graphics card showing up in the Windows 11 Device Manager, but the driver will not install; it has a yellow exclamation mark.
This means PCIe passthrough is working in principle.

Can you boot your system with another GPU and see if the 6500XT works inside the VM? Did you try this work-around for passthrough of the boot (or single) GPU? Some 6500XTs appear to work with passthrough, but some don't.

Can you boot the same VM with an Ubuntu Live installer ISO (don't install, just boot from it) to see if you get output on the physical display (to rule out Windows 11 driver issues)?

What is the output of cat /proc/cmdline?
What is the output of lspci -nnk after a Proxmox reboot before starting the VM?
What is the VM configuration file (qm config followed by the VM ID number)?
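For example, on the Proxmox host (using 100 as a placeholder for your actual VM ID):

Code:
cat /proc/cmdline
lspci -nnk
qm config 100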
 
BOOT_IMAGE=/boot/vmlinuz-6.2.16-3-pve root=/dev/mapper/pve-root ro quiet intel_iommu=on

root@proxmox:~# lspci -nnk
00:00.0 Host bridge [0600]: Intel Corporation Device [8086:4630] (rev 05)
Subsystem: ASRock Incorporation Device [1849:4630]
00:01.0 PCI bridge [0604]: Intel Corporation 12th Gen Core Processor PCI Express x16 Controller #1 [8086:460d] (rev 05)
Subsystem: ASRock Incorporation 12th Gen Core Processor PCI Express x16 Controller [1849:460d]
Kernel driver in use: pcieport
00:06.0 PCI bridge [0604]: Intel Corporation 12th Gen Core Processor PCI Express x4 Controller #0 [8086:464d] (rev 05)
Kernel driver in use: pcieport
00:14.0 USB controller [0c03]: Intel Corporation Alder Lake-S PCH USB 3.2 Gen 2x2 XHCI Controller [8086:7ae0] (rev 11)
Subsystem: ASRock Incorporation Alder Lake-S PCH USB 3.2 Gen 2x2 XHCI Controller [1849:7ae0]
Kernel driver in use: xhci_hcd
Kernel modules: xhci_pci
00:14.2 RAM memory [0500]: Intel Corporation Alder Lake-S PCH Shared SRAM [8086:7aa7] (rev 11)
00:15.0 Serial bus controller [0c80]: Intel Corporation Alder Lake-S PCH Serial IO I2C Controller #0 [8086:7acc] (rev 11)
Subsystem: ASRock Incorporation Alder Lake-S PCH Serial IO I2C Controller [1849:7acc]
Kernel driver in use: intel-lpss
Kernel modules: intel_lpss_pci
00:16.0 Communication controller [0780]: Intel Corporation Alder Lake-S PCH HECI Controller #1 [8086:7ae8] (rev 11)
Subsystem: ASRock Incorporation Alder Lake-S PCH HECI Controller [1849:7ae8]
Kernel driver in use: mei_me
Kernel modules: mei_me
00:17.0 SATA controller [0106]: Intel Corporation Alder Lake-S PCH SATA Controller [AHCI Mode] [8086:7ae2] (rev 11)
Subsystem: ASRock Incorporation Alder Lake-S PCH SATA Controller [AHCI Mode] [1849:7ae2]
Kernel driver in use: ahci
Kernel modules: ahci
00:1a.0 PCI bridge [0604]: Intel Corporation Alder Lake-S PCH PCI Express Root Port #25 [8086:7ac8] (rev 11)
Subsystem: ASRock Incorporation Alder Lake-S PCH PCI Express Root Port [1849:7ac8]
Kernel driver in use: pcieport
00:1c.0 PCI bridge [0604]: Intel Corporation Alder Lake-S PCH PCI Express Root Port #1 [8086:7ab8] (rev 11)
Subsystem: ASRock Incorporation Alder Lake-S PCH PCI Express Root Port [1849:7ab8]
Kernel driver in use: pcieport
00:1f.0 ISA bridge [0601]: Intel Corporation Device [8086:7a86] (rev 11)
Subsystem: ASRock Incorporation Device [1849:7a86]
00:1f.3 Audio device [0403]: Intel Corporation Alder Lake-S HD Audio Controller [8086:7ad0] (rev 11)
Subsystem: ASRock Incorporation Alder Lake-S HD Audio Controller [1849:4899]
Kernel driver in use: snd_hda_intel
Kernel modules: snd_hda_intel, snd_sof_pci_intel_tgl
00:1f.4 SMBus [0c05]: Intel Corporation Alder Lake-S PCH SMBus Controller [8086:7aa3] (rev 11)
Subsystem: ASRock Incorporation Alder Lake-S PCH SMBus Controller [1849:7aa3]
Kernel driver in use: i801_smbus
Kernel modules: i2c_i801
00:1f.5 Serial bus controller [0c80]: Intel Corporation Alder Lake-S PCH SPI Controller [8086:7aa4] (rev 11)
Subsystem: ASRock Incorporation Alder Lake-S PCH SPI Controller [1849:7aa4]
Kernel driver in use: intel-spi
Kernel modules: spi_intel_pci
00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (17) I219-V [8086:1a1d] (rev 11)
Subsystem: ASRock Incorporation Ethernet Connection (17) I219-V [1849:1a1d]
Kernel driver in use: e1000e
Kernel modules: e1000e
01:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 XL Upstream Port of PCI Express Switch [1002:1478] (rev c1)
Kernel driver in use: pcieport
02:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 XL Downstream Port of PCI Express Switch [1002:1479]
Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 XL Downstream Port of PCI Express Switch [1002:1479]
Kernel driver in use: pcieport
03:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 24 [Radeon RX 6400/6500 XT/6500M] [1002:743f] (rev c1)
Subsystem: Micro-Star International Co., Ltd. [MSI] Navi 24 [Radeon RX 6400/6500 XT/6500M] [1462:5080]
Kernel driver in use: vfio-pci
Kernel modules: amdgpu
03:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 21/23 HDMI/DP Audio Controller [1002:ab28]
Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Navi 21/23 HDMI/DP Audio Controller [1002:ab28]
Kernel driver in use: vfio-pci
Kernel modules: snd_hda_intel
04:00.0 Non-Volatile memory controller [0108]: Phison Electronics Corporation E12 NVMe Controller [1987:5012] (rev 01)
Subsystem: Phison Electronics Corporation E12 NVMe Controller [1987:5012]
Kernel driver in use: nvme
Kernel modules: nvme
05:00.0 Non-Volatile memory controller [0108]: Micron/Crucial Technology P2 NVMe PCIe SSD [c0a9:540a] (rev 01)
Subsystem: Micron/Crucial Technology P2 NVMe PCIe SSD [c0a9:5021]
Kernel driver in use: nvme
Kernel modules: nvme


will try workaround now
 
args: -cpu 'host,+svm,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=NV43FIX,kvm=off,hypervisor=off'
bios: ovmf
boot: order=ide0
cores: 4
cpu: host,hidden=1,flags=+pcid
efidisk0: local-lvm:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:03:00,pcie=1
ide0: local-lvm:vm-100-disk-1,size=100G
ide2: none,media=cdrom
machine: pc-q35-6.2
memory: 8152
meta: creation-qemu=8.0.2,ctime=1692196148
name: Windows11
net0: e1000=76:2F:8D:BE:53:AA,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsihw: virtio-scsi-single
smbios1: uuid=17334024-3e51-4e51-9404-81fe880fabdc
sockets: 2
tpmstate0: local-lvm:vm-100-disk-2,size=4M,version=v2.0
usb0: host=0c45:7403,usb3=1
vmgenid: 92fb16bf-6c52-4950-8927-f6ea19b009fa
 
args: -cpu 'host,+svm,+kvm_pv_unhalt,+kvm_pv_eoi,hv_vendor_id=NV43FIX,kvm=off,hypervisor=off'
cpu: host,hidden=1,flags=+pcid
You don't need those (old) NVIDIA work-arounds, which might even interfere with AMD drivers. cpu: host is fine.
machine: pc-q35-6.2
Why such an old QEMU machine version? AMD drivers work with 7.2 currently (on Windows 10, I don't have 11).
memory: 8152
Why not a normal power of 2 like 8192? (Not related to passthrough.)
numa: 0
sockets: 2
Why two sockets when not using NUMA and on a single-socket system? (Not related to passthrough.)
will try workaround now
Does that mean you have only one GPU? Can you try booting with another GPU? Is your 6500XT known to work with passthrough by other people?
 
Only one GPU, yes; my 4070 Ti won't fit in the small DeskMeet case.
 
I created a brand-new VM to test its config.



bios: ovmf
boot: order=ide0;ide2;net0
cores: 8
cpu: x86-64-v2-AES
efidisk0: local-lvm:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:03:00,pcie=1
ide0: local-lvm:vm-100-disk-1,size=100G
ide2: local:iso/Windows_11_22H2__x64__16in1__-_Office_2021_by_Eagle123__01.2023_.iso,media=cdrom,size=4519920K
machine: pc-q35-8.0
memory: 8192
meta: creation-qemu=8.0.2,ctime=1692218056
name: Windows11
net0: e1000=22:9B:C6:A1:84:AC,bridge=vmbr0,firewall=1
numa: 1
ostype: win11
scsihw: virtio-scsi-pci
smbios1: uuid=5954e187-93a1-4d4b-952d-1011448ec2bc
sockets: 1
tpmstate0: local-lvm:vm-100-disk-2,size=4M,version=v2.0
usb0: host=0c45:7403,usb3=1
vmgenid: a2ccb109-df76-45b7-9182-ea5e4ecca619
 
Upon doing a new VM install, the driver installed and looked like it was working, but when I restarted it reverted back to a yellow exclamation mark.

MSI Radeon RX 6500 XT MECH 2X 4G OC Gaming Graphics Card - 4GB GDDR6, 2825 MHz, PCI Express 4 x4, 64-bit, 1x DP 1.4a, HDMI 2.1

 
SOLVED BY DISABLING CAM IN BIOS, also this config:

/etc/modules:

Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

/etc/modprobe.d/pve-blacklist.conf

Code:
blacklist nvidiafb
blacklist amdgpu
blacklist radeon
blacklist ati

/etc/modprobe.d/vfio.conf

Code:
options kvm ignore_msrs=1
options vfio-pci ids=1002:743f,1002:ab28 disable_vga=1


/etc/pve/qemu-server/100.conf

Code:
balloon: 0
bios: ovmf
boot: order=ide0;ide2;net0
cores: 8
cpu: host,hidden=1,flags=+aes
efidisk0: local-lvm:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:03:00,pcie=1
ide0: local-lvm:vm-100-disk-1,size=100G
ide2: none,media=cdrom
machine: pc-q35-8.0
memory: 8192
meta: creation-qemu=8.0.2,ctime=1692218056
name: Windows11
net0: e1000=22:9B:C6:A1:84:AC,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsihw: virtio-scsi-single
smbios1: uuid=5954e187-93a1-4d4b-952d-1011448ec2bc
sockets: 1
tpmstate0: local-lvm:vm-100-disk-2,size=4M,version=v2.0
usb0: host=0c45:7403,usb3=1
vga: std
vmgenid: a2ccb109-df76-45b7-9182-ea5e4ecca619



/etc/default/grub
Code:
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt vfio_iommu_type1 init>
GRUB_CMDLINE_LINUX=""

and then
update-initramfs -k all -u && update-grub && pve-efiboot-tool refresh && update-grub2
 
SOLVED BY DISABLING CAM IN BIOS, also this config:
Is CAM the ASRock name for Resizable BAR or Smart Access Memory? That's indeed not supported by QEMU/Proxmox.
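As a rough check, the Resizable BAR capability (and its current size) usually shows up in the host's lspci output; something like this, assuming the GPU is still at 03:00.0 (may need root):

Code:
lspci -vvs 03:00.0 | grep -i -A 4 'Resizable BAR'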
/etc/modules:

Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
Note that vfio_virqfd no longer exists in Proxmox 8.
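On Proxmox 8 the same file would therefore only need something like:

Code:
vfio
vfio_iommu_type1
vfio_pci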
/etc/modprobe.d/pve-blacklist.conf

blacklist nvidiafb
blacklist amdgpu
blacklist radeon
blacklist ati
No need to blacklist drivers that you do not use: radeon, ati, nvidiafb.
/etc/modprobe.d/vfio.conf

Code:
options kvm ignore_msrs=1
options vfio-pci ids=1002:743f,1002:ab28 disable_vga=1
You could add a softdep amdgpu pre: vfio-pci to make sure vfio-pci is loaded before amdgpu. Then you don't need to blacklist amdgpu.
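A sketch of what /etc/modprobe.d/vfio.conf could then look like (same device IDs as in your lspci output; the amdgpu blacklist is no longer needed):

Code:
# make sure vfio-pci binds the GPU before amdgpu can claim it
softdep amdgpu pre: vfio-pci
options vfio-pci ids=1002:743f,1002:ab28 disable_vga=1
options kvm ignore_msrs=1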
/etc/pve/qemu-server/100.conf

balloon: 0
bios: ovmf
boot: order=ide0;ide2;net0
cores: 8
cpu: host,hidden=1,flags=+aes
hidden is not necessary.
efidisk0: local-lvm:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:03:00,pcie=1
ide0: local-lvm:vm-100-disk-1,size=100G
ide2: none,media=cdrom
machine: pc-q35-8.0
memory: 8192
meta: creation-qemu=8.0.2,ctime=1692218056
name: Windows11
net0: e1000=22:9B:C6:A1:84:AC,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsihw: virtio-scsi-single
smbios1: uuid=5954e187-93a1-4d4b-952d-1011448ec2bc
sockets: 1
tpmstate0: local-lvm:vm-100-disk-2,size=4M,version=v2.0
usb0: host=0c45:7403,usb3=1
vga: std
I would expect vga: none instead.
vmgenid: a2ccb109-df76-45b7-9182-ea5e4ecca619
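A minimal sketch of applying that suggestion from the CLI, assuming the VM ID is 100:

Code:
qm set 100 --vga none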

/etc/default/grub
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt vfio_iommu_type1 init>
GRUB_CMDLINE_LINUX=""
Why the vfio_iommu_type1 init...? I don't think that's a valid kernel parameter.
and then
update-initramfs -k all -u && update-grub && pve-efiboot-tool refresh && update-grub2
What old version of Proxmox are you using? pve-efiboot-tool is named proxmox-boot-tool nowadays. Why run update-grub twice? I would expect update-initramfs to already do this for you.
Be careful with -k all, as it affects previous kernels; if there is a mistake, you cannot boot an earlier kernel to fix it.
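For reference, on a current Proxmox release the refresh would look something like this (a sketch; adapt to whether your host boots via GRUB or systemd-boot):

Code:
update-initramfs -u -k all
proxmox-boot-tool refresh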

Glad you got it working. Please edit the first post and select Solved to let others know there is a solution.
 
Code:
Is CAM the ASRock name for Resizable BAR or Smart Access Memory? That's indeed not supported by QEMU/Proxmox.

Yes it is; I will update my config given your feedback. Thanks!

Code:
I would expect vga: none instead.

I have actually set it to vga: memory=128 so that I can use VNC and GPU passthrough at the same time, then set the VNC monitor as the main display so I can use it over VNC when I'm not at home.
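For reference, a sketch of setting that via the CLI (assuming VM ID 100; memory is in MiB):

Code:
qm set 100 --vga std,memory=128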

Code:
Why the vfio_iommu_type1 init...? I don't think that's a valid kernel parameter.

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt vfio_iommu_type1 initcall_blacklist=sysfb_init"
 
I can confirm as well: disabling Resizable BAR makes passthrough fully work!

Actually, I think this depends on the configuration. I have one very similar to yours, Alder Lake + RDNA2: i5-12400 + Radeon RX 6400 low profile.

With my old server (i3-9100), using the very same video card (Radeon RX 6400), I was able to use it in the passthrough VM even with Resizable BAR enabled. But that was on Linux with KVM, QEMU, and virt-manager, not Proxmox.
 
I can also confirm that passthrough is fully working after disabling Resizable BAR for my AMD Radeon 6650M in my Minisforum HX99G gaming PC.
 
