[SOLVED] GPU passthrough issues with Windows 11 VM

gProxiA

Hello,

I'm facing the following problem with GPU passthrough and IOMMU groups under Proxmox 7.4-3:

I am trying to pass through an NVIDIA Quadro P2200 to a Windows 11 VM.
First, the GPU and the NVMe RAID controller end up in the same IOMMU group. The NVMe RAID controller itself is not used, but the fact that the two devices share a group is probably why the passthrough does not work.
Furthermore, the PCI ID of the GPU has a 5-digit prefix (10005:01:00.0). After adding the device to the VM, Proxmox shows me the message "invalid value".
Starting the VM fails, and afterwards the GPU is no longer visible to the host (see the kernel log below, where it is removed from its IOMMU group).

Here is my Configuration:
/etc/default/grub
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction"
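
To apply and double-check the change (standard steps on a GRUB-booted install, nothing specific to this box):

update-grub
reboot
# after the reboot, confirm the parameters were picked up and the IOMMU is active
cat /proc/cmdline
dmesg | grep -e DMAR -e IOMMU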

/etc/modprobe.d/blacklist.conf
blacklist nvidia
blacklist amdgpu
blacklist radeon
blacklist nouveau
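
Not shown above, but the usual companion step is binding the card to vfio-pci early via its vendor:device IDs; the IDs here are taken from my lspci output further down (10de:1c31 for the GPU, 10de:10f1 for its HDMI audio). This is just a sketch of the common approach, not something I am claiming fixes the problem in this thread:

/etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:1c31,10de:10f1

# then refresh the initramfs and reboot
update-initramfs -u -k all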


Output of: for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nns "${d##*/}"; done
IOMMU group 150 0000:d7:05.5 RAID bus controller [0104]: Intel Corporation Volume Management Device NVMe RAID Controller [8086:201d] (rev 07)
IOMMU group 150 10005:00:00.0 PCI bridge [0604]: Intel Corporation Sky Lake-E PCI Express Root Port A [8086:2030] (rev 07)
IOMMU group 150 10005:01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP106GL [Quadro P2200] [10de:1c31] (rev a1)
IOMMU group 150 10005:01:00.1 Audio device [0403]: NVIDIA Corporation GP106 High Definition Audio Controller [10de:10f1] (rev a1)
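
As far as I understand it now, the 10005 prefix is a separate PCI domain that the VMD controller creates for everything behind it (the rest of the host sits in the usual 0000 domain). The device can be queried with its full address to see which driver is currently bound, e.g.:

lspci -nnks 10005:01:00.0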

Config of VM:
agent: 1
balloon: 0
bios: ovmf
boot: order=scsi0;ide2;net0;ide0
cores: 16
efidisk0: local-lvm:vm-106-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 10005:01:00.0,pcie=1
ide0: local:iso/virtiowin.iso,media=cdrom,size=522284K
ide2: local:iso/win11.iso,media=cdrom,size=5426116K
machine: q35
memory: 100000
meta: creation-qemu=7.2.0,ctime=1682540371
name: Windows11.2
net0: virtio=1A:96:3B:91:8D:21,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-106-disk-1,iothread=1,size=100G
scsi1: raid:vm-106-disk-0,backup=0,iothread=1,size=3000G
scsihw: virtio-scsi-single
smbios1: uuid=2c5f08ee-d1ca-4db6-982a-d32fd56cc5b0
sockets: 2
tpmstate0: local-lvm:vm-106-disk-2,size=4M,version=v2.0
vga: none
vmgenid: 05fe0e83-874c-457e-b0c8-103002a86088
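
For reference, the same hostpci entry can also be set on the CLI; the bus address below is only a placeholder, the real one depends on the board. Leaving off the ".0" passes all functions of the card (GPU + HDMI audio) together:

# placeholder address, check lspci for the real one
qm set 106 -hostpci0 0000:01:00,pcie=1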

Error Message after Start:
May 23 11:34:37 pve pvedaemon[1662670]: <root@pam> starting task UPID:pve:00317F4D:0731339A:646C88AD:qmstart:106:root@pam:
May 23 11:34:37 pve pvedaemon[3243853]: start VM 106: UPID:pve:00317F4D:0731339A:646C88AD:qmstart:106:root@pam:
May 23 11:34:37 pve kernel: [1206650.264800] vfio-pci 10005:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
May 23 11:34:38 pve kernel: [1206650.284875] pci 10005:01:00.0: Removing from iommu group 150
May 23 11:34:38 pve kernel: [1206650.284974] pci 10005:01:00.1: Removing from iommu group 150
May 23 11:34:38 pve kernel: [1206650.284990] pci_bus 10005:01: busn_res: [bus 01] is released
May 23 11:34:38 pve kernel: [1206650.285126] pci 10005:00:00.0: Removing from iommu group 150
May 23 11:34:38 pve kernel: [1206650.285132] pci_bus 10005:00: busn_res: [bus 00-1f] is released
May 23 11:34:38 pve pvedaemon[3243853]: Use of uninitialized value $name in concatenation (.) or string at /usr/share/perl5/PVE/SysFSTools.pm line 283.
May 23 11:34:38 pve pvedaemon[3243853]: no PCI device found for '10005:01:00.0'
May 23 11:34:38 pve pvedaemon[3243853]: can't reset PCI device '10005:01:00.0'
May 23 11:34:38 pve pvedaemon[1662670]: <root@pam> end task UPID:pve:00317F4D:0731339A:646C88AD:qmstart:106:root@pam: can't reset PCI device '10005:01:00.0'

I have been through all the threads, but unfortunately nothing has led to a solution yet.

Thanks
g
 
I finally solved the problem.

The BIOS settings for NVMe RAID (Intel VMD) were to blame: VMD was enabled for each IOU, which prevented the GPU from getting its own IOMMU group. Disabling VMD for each IOU stack fixed it.
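
After disabling VMD and rebooting, the card should show up in the normal 0000 domain and in its own IOMMU group (together with its audio function). That can be checked with lspci -D and the same one-liner as above (the bus address will of course differ on other boards):

lspci -D -nn | grep -i nvidia
for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nns "${d##*/}"; done | grep -i nvidia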
 
