Trouble doing iGPU passthrough on my Ubuntu VM

LECOQQ

Hello guys,

I've been trying to set up iGPU passthrough for a few days on my homelab to get HW transcoding in Jellyfin. I feel like I've learned a lot, but I still have issues that I hope someone can resolve, or at least help with. I've followed every Proxmox resource I could find and scoured this forum, but didn't manage to find a working solution.

I'm trying with two separate set-ups, doing the same things and running into the same issues.
Let's focus on one set-up: I did a fresh install of Proxmox VE 8.3.3 on an HP ProDesk G400 mini (i5-7500T / 12 GB DDR4 / 256 GB SSD).
I've enabled both Intel virtualization features (VT-x and VT-d) in the BIOS.

My bootloader is GRUB. To enable iGPU passthrough, I first checked what devices I had:
Bash:
lspci -nn

Which resulted in

Code:
00:00.0 Host bridge [0600]: Intel Corporation Xeon E3-1200 v6/7th Gen Core Processor Host Bridge/DRAM Registers [8086:591f] (rev 05)
00:02.0 VGA compatible controller [0300]: Intel Corporation HD Graphics 630 [8086:5912] (rev 04)
00:14.0 USB controller [0c03]: Intel Corporation 200 Series/Z370 Chipset Family USB 3.0 xHCI Controller [8086:a2af]
00:14.2 Signal processing controller [1180]: Intel Corporation 200 Series PCH Thermal Subsystem [8086:a2b1]
00:16.0 Communication controller [0780]: Intel Corporation 200 Series PCH CSME HECI #1 [8086:a2ba]
00:17.0 SATA controller [0106]: Intel Corporation 200 Series PCH SATA controller [AHCI mode] [8086:a282]
00:1c.0 PCI bridge [0604]: Intel Corporation 200 Series PCH PCI Express Root Port #5 [8086:a294] (rev f0)
00:1f.0 ISA bridge [0601]: Intel Corporation 200 Series PCH LPC Controller (H270) [8086:a2c4]
00:1f.2 Memory controller [0580]: Intel Corporation 200 Series/Z370 Chipset Family Power Management Controller [8086:a2a1]
00:1f.3 Audio device [0403]: Intel Corporation 200 Series PCH HD Audio [8086:a2f0]
00:1f.4 SMBus [0c05]: Intel Corporation 200 Series/Z370 Chipset Family SMBus Controller [8086:a2a3]
01:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 15)

I then set the following line in /etc/default/grub:
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt textonly vfio_iommu_type1.allow_unsafe_interrupts=1 nofb nomodeset vfio-pci.ids=8086:5912 video=vesafb:off video=efifb:off video=simplefb:off"
And then did the following:
Bash:
update-initramfs -u
update-grub
reboot
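For what it's worth, after the reboot I believe the new command line can be double-checked with something like this (I'm not pasting the output here):
Bash:
# the output should contain intel_iommu=on and the vfio-pci.ids entry from above
cat /proc/cmdline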

After the reboot, I added the following lines to /etc/modules:
Code:
vfio
vfio_iommu_type1
vfio_pci
then in /etc/modprobe.d/vfio.conf:
Code:
options vfio-pci ids=8086:5912 disable_vga=1
and finally in /etc/modprobe.d/blacklist.conf, to ensure the iGPU will not be used by the node itself:
Code:
blacklist i915
I then rebooted. This is the moment I lost the VGA output to my portable screen, so from then on I used the noVNC console in the web UI to reach my node.
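(Side note: I believe one can also confirm at this point that vfio-pci, and not i915, has claimed the iGPU; something along these lines, though I didn't save that output:)
Bash:
# -k shows the kernel driver bound to the device;
# the last lines should report "Kernel driver in use: vfio-pci"
lspci -nnk -s 00:02.0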

To verify everything was great, I did:
Bash:
lsmod | grep vfio
Which resulted in:
Code:
vfio_pci               16384  0
vfio_pci_core          86016  1 vfio_pci
irqbypass              12288  2 vfio_pci_core,kvm
vfio_iommu_type1       49152  0
vfio                   65536  4 vfio_pci_core,vfio_iommu_type1,vfio_pci
iommufd                94208  1 vfio
Which confirms that the modules are loaded. Then:
Bash:
dmesg | grep -e DMAR
Which resulted in:
Code:
[    0.010838] ACPI: DMAR 0x00000000C9FC2000 0000A8 (v01 INTEL  KBL      00000001 INTL 00000001)
[    0.010883] ACPI: Reserving DMAR table memory at [mem 0xc9fc2000-0xc9fc20a7]
[    0.029317] DMAR: IOMMU enabled
[    0.083997] DMAR: Host address width 39
[    0.083998] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.084009] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
[    0.084012] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.084016] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[    0.084018] DMAR: RMRR base: 0x000000c9c7a000 end: 0x000000c9c99fff
[    0.084021] DMAR: RMRR base: 0x000000cc000000 end: 0x000000ce7fffff
[    0.084023] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.084025] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.084026] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.085642] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    0.319389] DMAR: No ATSR found
[    0.319390] DMAR: No SATC found
[    0.319391] DMAR: IOMMU feature fl1gp_support inconsistent
[    0.319393] DMAR: IOMMU feature pgsel_inv inconsistent
[    0.319394] DMAR: IOMMU feature nwfs inconsistent
[    0.319395] DMAR: IOMMU feature pasid inconsistent
[    0.319396] DMAR: IOMMU feature eafs inconsistent
[    0.319397] DMAR: IOMMU feature prs inconsistent
[    0.319398] DMAR: IOMMU feature nest inconsistent
[    0.319399] DMAR: IOMMU feature mts inconsistent
[    0.319400] DMAR: IOMMU feature sc_support inconsistent
[    0.319401] DMAR: IOMMU feature dev_iotlb_support inconsistent
[    0.319403] DMAR: dmar0: Using Queued invalidation
[    0.319406] DMAR: dmar1: Using Queued invalidation
[    0.319847] DMAR: Intel(R) Virtualization Technology for Directed I/O
Which means that the IOMMU is indeed on and that interrupt remapping is enabled as well.
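(I also assume the iGPU should sit in its own IOMMU group for passthrough; if that is relevant here, I believe the groups can be listed with a loop along these lines:)
Bash:
# print each PCI device together with the IOMMU group it belongs to
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
    printf 'group %s: ' "$g"
    lspci -nns "${d##*/}"
done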

With everything seemingly good, I created a new VM, nothing fancy: an Ubuntu Server ISO, a q35 machine with OVMF, and a host CPU. Here is the conf file:
Code:
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 4
cpu: host
efidisk0: local-lvm:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
ide2: local:iso/ubuntu-24.04-live-server-amd64.iso,media=cdrom,size=2690412K
machine: q35
memory: 8096
meta: creation-qemu=9.0.2,ctime=1737544844
name: lodi
net0: virtio=BC:24:11:A5:43:33,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-100-disk-1,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=f3edd4aa-2564-4cd3-933e-3f49c70fa349
sockets: 1
vmgenid: 575b2781-0007-425b-a9a3-e3bd12d06b71
I installed Ubuntu, logged into the VM, and everything went fine.
After this, I shut down the VM and added the iGPU PCI device via the web GUI: <vm-id> -> Hardware -> Add -> PCI Device -> Raw Device -> ID 00:02.0 - HD Graphics 630.
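(For completeness, my understanding is that this GUI step just writes a hostpci entry into /etc/pve/qemu-server/100.conf, so the conf above should gain a line roughly like the one below; the exact options may differ since I added it through the GUI:)
Code:
hostpci0: 0000:00:02.0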
When I start the VM, the web UI tells me the status is OK. However, when using noVNC to get into the console, I'm not greeted by a login prompt but by the boot sequence, specifically stuck on:
Code:
snd_hda_intel: no codecs found!
After trying many times, nothing else happened. I could still connect to the VM over SSH. Is it normal that noVNC is stuck on the boot sequence?
I removed the PCI device, then started the VM again, and this time it booted flawlessly. I added the following lines to /etc/modprobe.d/blacklist:
Code:
blacklist snd_hda_codec_hdmi
blacklist snd_hda_intel
Then I shut down the VM, added the PCI device back, and again got stuck on the boot sequence in noVNC, though this time I didn't get the snd_hda_intel: no codecs found! message.
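(I mainly judged by the missing message, but I assume the blacklist could also be confirmed from inside the guest with something like this, expecting no output:)
Bash:
# no output means the snd_hda modules were not loaded at boot
lsmod | grep snd_hda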
When connecting to the VM over SSH, I tried to check whether the iGPU passthrough worked:
Bash:
lspci | grep VGA
And indeed, I had the Intel HD Graphics 630! I went to /dev/dri and saw the following:
Bash:
cd /dev/dri
ls
Code:
card0 renderD128
I felt like I was finally through. But when I went back to the Proxmox VE shell and typed:
Bash:
dmesg | grep -e DMAR
I got the following:
Code:
DMAR: [DMA READ NO_PASID] Request device [00:02.0] fault addr 0xsomeaddr [fault reason 0x06] PTE Read access is not set
And this message keeps repeating roughly every minute.
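(I haven't tested actual transcoding yet; my understanding is that the real check from inside the VM would be a VA-API probe roughly like the following, assuming the vainfo package is installed in the guest and an Intel VA driver is available:)
Bash:
# install the VA-API probing tool in the guest (Ubuntu)
sudo apt install -y vainfo
# probe the render node that Jellyfin would use for HW transcoding
vainfo --display drm --device /dev/dri/renderD128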

So I'm wondering: what am I doing wrong?
Is it normal that noVNC isn't working when booting a VM with the iGPU attached?
Is the iGPU working and usable in my VM despite this DMAR error? What does it even mean in the first place?
How do I fix this DMAR error?

I hope some of you have run into this issue and can guide me through it...
Have a great day ;)

Edit: formatting
 
