Help with pass-through PCIE for J5005 iGPU

mrfogal

New Member
Jun 26, 2022
I am using a Dell Wyse 5070 thin client with an Intel Pentium Silver J5005 and want to pass through its iGPU (an Intel UHD Graphics 605) for transcoding in Plex. Hoping someone can help.

I have tried following the tutorial below to pass the GPU through to one of my VMs, but I am having difficulty and got a bit lost.

Followed this:
https://pve.proxmox.com/wiki/PCI(e)_Passthrough

At first I wanted to follow the vGPU section at the end, but I am worried the J5005 is not supported, as I do not seem to get any results for

Code:
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types

This just returns nothing; the directory appears to be empty.
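
As far as I know, GVT-g style vGPU is only implemented for certain Core-series iGPUs, so an empty mdev_supported_types directory on a Gemini Lake part may simply be expected. A small sketch to make the check explicit (the helper name is made up, and it is parameterised so it can be tested against a fake sysfs tree):

```shell
# check_mdev BASE DEV: list the mediated-device (mdev) types DEV exposes
# under BASE (normally /sys/bus/pci/devices), or report that there are none.
check_mdev() {
    mdev_dir="$1/$2/mdev_supported_types"
    if [ -d "$mdev_dir" ] && [ -n "$(ls -A "$mdev_dir" 2>/dev/null)" ]; then
        ls "$mdev_dir"
    else
        echo "no mdev types for $2 (vGPU/GVT-g not available)"
    fi
}

# On the host:
check_mdev /sys/bus/pci/devices 0000:00:02.0
```

If this prints the "no mdev types" line, only whole-device passthrough is an option.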

dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
Code:
[    0.011390] ACPI: DMAR 0x0000000077F0A190 0000A8 (v01 INTEL  GLK-SOC  00000003 BRXT 0100000D)
[    0.011463] ACPI: Reserving DMAR table memory at [mem 0x77f0a190-0x77f0a237]
[    0.079655] DMAR: Disable GFX device mapping
[    0.079657] DMAR: IOMMU enabled
[    0.239120] DMAR: Host address width 39
[    0.239123] DMAR: DRHD base: 0x000000fed64000 flags: 0x0
[    0.239134] DMAR: dmar0: reg_base_addr fed64000 ver 1:0 cap 1c0000c40660462 ecap 9e2ff0505e
[    0.239139] DMAR: DRHD base: 0x000000fed65000 flags: 0x1
[    0.239148] DMAR: dmar1: reg_base_addr fed65000 ver 1:0 cap d2008c40660462 ecap f050da
[    0.239154] DMAR: RMRR base: 0x00000077e73000 end: 0x00000077e92fff
[    0.239158] DMAR: RMRR base: 0x0000007b800000 end: 0x0000007fffffff
[    0.239163] DMAR-IR: IOAPIC id 1 under DRHD base  0xfed65000 IOMMU 1
[    0.239166] DMAR-IR: HPET id 0 under DRHD base 0xfed65000
[    0.239169] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.241492] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    0.744496] DMAR: No ATSR found
[    0.744498] DMAR: No SATC found
[    0.744503] DMAR: dmar1: Using Queued invalidation
[    0.746719] DMAR: Intel(R) Virtualization Technology for Directed I/O

As per the guide, that shows the IOMMU is enabled. I was unsure whether Directed I/O and interrupt remapping are enabled too, but the last lines ("Enabled IRQ remapping in x2apic mode" and the Directed I/O line) appear to confirm both.

lspci -nnk
Code:
00:02.0 VGA compatible controller [0300]: Intel Corporation GeminiLake [UHD Graphics 605] [8086:3184] (rev 03)
        Subsystem: Dell UHD Graphics 605 [1028:080c]
        Kernel driver in use: i915
        Kernel modules: i915

Depending on the blacklist step, I have had this show either i915 or vfio-pci as the driver in use. I originally tried the vGPU route, figuring I needed i915 loaded so I could add the PCI device, select an mdev type, and boot the VM, but the mdev type list never populated. I am now wondering whether I can simply pass the whole GPU through to a single VM with the host device passthrough method. My GRUB config ("/etc/default/grub") is:

Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

and I added i915 to the blacklist, so lspci -nnk now returns:

Code:
00:02.0 VGA compatible controller [0300]: Intel Corporation GeminiLake [UHD Graphics 605] [8086:3184] (rev 03)
        Subsystem: Dell UHD Graphics 605 [1028:080c]
        Kernel driver in use: vfio-pci
        Kernel modules: i915

So I attached it to the VM with:
Code:
qm set 102 -hostpci0 00:02.0

However, starting the VM just returns this:

Code:
kvm: -device vfio-pci,host=0000:00:02.0,id=hostpci0,bus=pci.0,addr=0x10: vfio 0000:00:02.0: error getting device from group 1: Invalid argument
Verify all devices in group 1 are bound to vfio-<bus> or pci-stub and not already in use
TASK ERROR: start failed: QEMU exited with code 1
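The usual first check for that error is whether anything else shares IOMMU group 1 and is still bound to its normal driver. A small sketch that lists every group member with its bound driver (the helper name is mine; the BASE parameter exists so the function can be exercised against a fake tree):

```shell
# list_group_members DEV [BASE]: print every PCI device in the same IOMMU
# group as DEV along with its currently bound driver. For passthrough, all
# of them must show vfio-pci (or pci-stub).
list_group_members() {
    base="${2:-/sys/bus/pci/devices}"
    for d in "$base/$1/iommu_group/devices/"*; do
        [ -e "$d" ] || continue
        dev=$(basename "$d")
        drv="(none)"
        [ -e "$base/$dev/driver" ] && drv=$(basename "$(readlink -f "$base/$dev/driver")")
        echo "$dev $drv"
    done
}

list_group_members 0000:00:02.0
```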

I have now read about ten of these threads without much luck, and I have probably made a bit of a mess changing random things because I do not really know what I am doing, but I am hoping someone can suggest how to fix this error.

This bit of the guide I did not understand:

It is also important that the device(s) you want to pass through are in a separate IOMMU group. This can be checked with:

# find /sys/kernel/iommu_groups/ -type l
Code:
/sys/kernel/iommu_groups/7/devices/0000:00:14.1
/sys/kernel/iommu_groups/5/devices/0000:00:13.0
/sys/kernel/iommu_groups/13/devices/0000:02:00.0
/sys/kernel/iommu_groups/3/devices/0000:00:0f.0
/sys/kernel/iommu_groups/11/devices/0000:00:1f.0
/sys/kernel/iommu_groups/11/devices/0000:00:1f.1
/sys/kernel/iommu_groups/1/devices/0000:00:02.0
/sys/kernel/iommu_groups/8/devices/0000:00:15.0
/sys/kernel/iommu_groups/6/devices/0000:00:14.0
/sys/kernel/iommu_groups/14/devices/0000:03:00.0
/sys/kernel/iommu_groups/4/devices/0000:00:12.0
/sys/kernel/iommu_groups/12/devices/0000:01:00.0
/sys/kernel/iommu_groups/2/devices/0000:00:0e.0
/sys/kernel/iommu_groups/10/devices/0000:00:1c.0
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/0/devices/0000:00:00.3
/sys/kernel/iommu_groups/9/devices/0000:00:17.0
/sys/kernel/iommu_groups/9/devices/0000:00:17.3

I was unsure from that output whether the device is in a separate group, but since 0000:00:02.0 is the only device listed in group 1, I believe it is.
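That raw find output is easier to read folded into one line per group; in this case group 1 should then show only 0000:00:02.0. A small sketch that does the folding (pipe the find output into it; the function name is made up):

```shell
# group_summary: read "find /sys/kernel/iommu_groups/ -type l" output on
# stdin and print one line per IOMMU group listing its member devices.
group_summary() {
    awk -F/ '{ grp[$5] = grp[$5] " " $7 }
             END { for (g in grp) printf "group %s:%s\n", g, grp[g] }' \
        | sort -k2 -n
}

find /sys/kernel/iommu_groups/ -type l 2>/dev/null | group_summary
```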

UPDATE

OK, so now I have changed my GRUB config to look like this:
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on,igfx_off pcie_acs_override=downstream,multifunction video=efifb:off video=vesa:off vfio-pci.ids=8086:3184 vfio_iommu_type1.allow_unsafe_interrupts=1 kvm.ignore_msrs=1"
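
As an aside, GRUB edits only take effect after running update-grub and rebooting, so it is worth confirming against /proc/cmdline that the new flags are actually live. A tiny sketch (the helper name is made up):

```shell
# has_flag CMDLINE FLAG: succeed iff FLAG appears as a whole word in CMDLINE.
# Typical use after a reboot:
#   has_flag "$(cat /proc/cmdline)" "vfio-pci.ids=8086:3184" && echo live
has_flag() {
    case " $1 " in
        *" $2 "*) return 0 ;;
        *)        return 1 ;;
    esac
}
```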

dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
Code:
[    0.000000] Warning: PCIe ACS overrides enabled; This may allow non-IOMMU protected peer-to-peer DMA
[    0.011743] ACPI: DMAR 0x0000000077F0A190 0000A8 (v01 INTEL  GLK-SOC  00000003 BRXT 0100000D)
[    0.011814] ACPI: Reserving DMAR table memory at [mem 0x77f0a190-0x77f0a237]
[    0.080031] DMAR: IOMMU enabled
[    0.080033] DMAR: Disable GFX device mapping
[    0.239778] DMAR: Host address width 39
[    0.239780] DMAR: DRHD base: 0x000000fed64000 flags: 0x0
[    0.239792] DMAR: dmar0: reg_base_addr fed64000 ver 1:0 cap 1c0000c40660462 ecap 9e2ff0505e
[    0.239797] DMAR: DRHD base: 0x000000fed65000 flags: 0x1
[    0.239806] DMAR: dmar1: reg_base_addr fed65000 ver 1:0 cap d2008c40660462 ecap f050da
[    0.239822] DMAR: RMRR base: 0x00000077e73000 end: 0x00000077e92fff
[    0.239826] DMAR: RMRR base: 0x0000007b800000 end: 0x0000007fffffff
[    0.239831] DMAR-IR: IOAPIC id 1 under DRHD base  0xfed65000 IOMMU 1
[    0.239834] DMAR-IR: HPET id 0 under DRHD base 0xfed65000
[    0.239837] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.242155] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    0.745337] DMAR: No ATSR found
[    0.745339] DMAR: No SATC found
[    0.745343] DMAR: dmar1: Using Queued invalidation
[    0.747597] DMAR: Intel(R) Virtualization Technology for Directed I/O

However, another error now appears when starting the VM:

TASK ERROR: Cannot open iommu_group: No such file or directory

Not sure if this is better or worse, but the IOMMU looks enabled, and I am pretty sure everything looks fine in the BIOS too, with Intel Virtualization enabled and VT for Direct I/O enabled; I cannot see any other related settings in the BIOS. I am not sure about the VT-x bit, but I am hoping that just means Intel Virtualization in my BIOS.
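
The "Cannot open iommu_group" error means the device no longer has an iommu_group entry in sysfs at all. I suspect this ties in with the "DMAR: Disable GFX device mapping" line in dmesg: if the kernel stops applying DMA remapping to the iGPU, it drops out of any IOMMU group and cannot be passed through. A quick sketch to check (the helper name is mine; BASE is a parameter so it can be tested against a fake tree):

```shell
# has_iommu_group BASE DEV: succeed iff DEV has an iommu_group entry under
# BASE (normally /sys/bus/pci/devices). The "Cannot open iommu_group" start
# error corresponds to this check failing for the passed-through device.
has_iommu_group() {
    [ -e "$1/$2/iommu_group" ]
}

if has_iommu_group /sys/bus/pci/devices 0000:00:02.0; then
    echo "iGPU is in an IOMMU group"
else
    echo "iGPU has no IOMMU group - the IOMMU is not covering it"
fi
```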
 
Hello Moaya,

Below is my VM config:
Code:
agent: 1
boot: order=scsi0;net0
cores: 4
memory: 5000
meta: creation-qemu=6.2.0,ctime=1655842234
name: landahoy22
net0: virtio=A6:AC:35:18:07:E2,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: NVME:vm-102-disk-0,iothread=1,size=256G,ssd=1
scsi1: hdd-image:102/vm-102-disk-0.qcow2,iothread=1,size=6000G
scsihw: virtio-scsi-single
smbios1: uuid=3d757f14-1b32-4e3a-aa04-a46238235f42
sockets: 1
vmgenid: 3a7aa933-caa5-487d-b7d2-103fcd1c75d9
hostpci0: 0000:00:02.0

And my blacklist looks like below:
Code:
# This file contains a list of modules which are not supported by Proxmox VE

# nidiafb see bugreport https://bugzilla.proxmox.com/show_bug.cgi?id=701
blacklist nvidiafb
blacklist snd_hda_intel
blacklist snd_hda_codec_hdmi
blacklist i915
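
A gentler alternative to blacklisting i915 host-wide is to claim just this one device ID with vfio-pci via a modprobe option; i915 then stays usable for anything else. A sketch of what such a file could look like (the file name /etc/modprobe.d/vfio.conf is just a common convention, and you need to run update-initramfs -u -k all and reboot afterwards):

```
# /etc/modprobe.d/vfio.conf
# Claim the UHD 605 (8086:3184) with vfio-pci before i915 can bind it.
options vfio-pci ids=8086:3184
softdep i915 pre: vfio-pci
```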
 
The VM using the GPU runs Kodi. I don't think it would work as an LXC container because of the peripherals and the mounting setup. But the real problem is that this configuration runs without any problems up to kernel 5.11.22-7; something changed after that kernel version that breaks it.
 
Hi,

Did you find a solution for this?
I’m running against the exact same problem.

Regards,
 
So far I have found no solution. My Proxmox configuration still has kernel version 5.11.22-7 pinned.
 
Thanks for this, I have found myself in the same boat, so I pinned the older kernel as follows.

Install the old kernel:
Code:
apt install pve-kernel-5.11.22-7-pve
Test by booting the old kernel on the next boot only:
Code:
proxmox-boot-tool kernel pin 5.11.22-7-pve --next-boot
It worked for me, so the command below pins it permanently:
Code:
proxmox-boot-tool kernel pin 5.11.22-7-pve
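
After rebooting it is worth double-checking which kernel you actually ended up on; a tiny sketch (the helper name is made up):

```shell
# running_matches EXPECTED: succeed iff the running kernel release equals
# EXPECTED, e.g. after "proxmox-boot-tool kernel pin 5.11.22-7-pve".
running_matches() {
    [ "$(uname -r)" = "$1" ]
}

if running_matches 5.11.22-7-pve; then
    echo "pinned kernel is active"
else
    echo "running $(uname -r) instead"
fi
```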

I hope there is a proper fix soon; I saw that there were some beta kernels, but I have not tested these myself.
 
Could you describe step by step how you got iGPU (Intel UHD 605) passthrough working on the Dell Wyse 5070 thin client? Unfortunately, I think I messed something up and it doesn't work for me. I am using Proxmox 7 with kernel 5.11.22-1.
 
