12700H iGPU Passthrough

subduplicate

Member
Dec 12, 2020
Hey all, I've been trying to pass the iGPU for this CPU to a VM (not CT) specifically for Jellyfin hardware encoding. I've tried every permutation of options I've stumbled across through the depths of page 10 search results and come up with nothing, so this is my plea for help.

The issue:
I recently acquired an Intel NUC 12 Enthusiast NUC12SNKi72 and after a doozy of a time getting Proxmox installed I'm finally ready to start moving some VMs from my older server, starting with my media stack.

The system:
  • i7-12700H
  • M45201-501 motherboard
  • 64GB DDR4
The install:
  • PVE 8.1.4
  • Kernels 6.5.13-1-pve and 6.5.11-8-pve
  • systemd-boot (ZFS)
Things I've tried:

It may actually be easier to list the things I haven't tried, but I'll stick with the closest to success I've gotten so far. Currently I have a Fedora 39 VM set up with the iGPU added as a PCI device with All Functions, ROM-Bar, and PCI-Express turned on. Inside the VM I can see `/dev/dri/card0` but not `/dev/dri/renderD128`. If the VM isn't running I see both of them on the Proxmox host (along with card1 and renderD129). I'd prefer to get this running under Debian, but even on an updated Bookworm install with the same PCI device added, `/dev/dri` doesn't exist at all. I've tried installing the non-free driver in Debian to no avail, and on Fedora I've tried RPM Fusion, but that didn't change anything notable.
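For anyone following along, here's a quick diagnostic sketch for checking what the guest actually received. It just probes for a render node and assumes the standard kernel naming scheme under `/dev/dri` (`card0`, `renderD128`, etc.):

```shell
#!/bin/sh
# Diagnostic sketch: does this guest have a usable render node?
# Assumes the standard kernel naming scheme under /dev/dri.
has_render_node() {
  dir=${1:-/dev/dri}
  for n in "$dir"/renderD*; do
    [ -e "$n" ] && return 0
  done
  return 1
}

if has_render_node /dev/dri; then
  echo "render node present -- VA-API/QSV encoding should be possible"
else
  echo "no render node -- the guest driver has not fully bound the GPU"
fi
```

A card0 without a matching renderD128 usually means the guest's i915 driver only partially initialised the device, which matches what you're describing.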

Like I said, I've gone pretty deep in search results and haven't found anything that's worked to get this fully set up in Debian or Fedora. I'm not really interested in the LXC route at this time either, which is the vast majority of what came up.

The only blacklisted modules I have are:
```
:~# grep -ir blacklist /etc/modprobe.d/
/etc/modprobe.d/pve-blacklist.conf:blacklist nvidiafb
/etc/modprobe.d/intel-microcode-blacklist.conf:blacklist microcode
```
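For full passthrough of the iGPU to a single VM, the host usually has to be stopped from claiming the device before the guest can bind it cleanly. A sketch of the typical modprobe config; the device ID below is my guess for the Iris Xe in a 12700H, so confirm yours with `lspci -nn | grep VGA` before using it:

```
# /etc/modprobe.d/vfio.conf -- example only; 8086:46a6 is a guess at the
# Iris Xe device ID, confirm with `lspci -nn | grep VGA` first
options vfio-pci ids=8086:46a6
softdep i915 pre: vfio-pci
```

Be aware this stops the host itself from using the iGPU for console output, and it's the opposite approach to SR-IOV splitting, so don't mix the two.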

At this point I'm hoping I missed something silly and someone can straighten me out pretty quick. Any and all help would be super appreciated!
 
I tried to do this and I’m getting stuck just before the Secure Boot MOK Configuration step. When I reboot the system hangs at:

```
i915 0000:00:02.0: 7 VFs could be associated with this PF
i915 0000:03:00.0: [drm] VT-d active for gfx access
```

And I don’t even get the option to complete the MOK step. It’s happened to me on two kernels, 6.5.11-8-pve and 6.5.13-1-pve. I end up having to hold the power button to turn it off; when it boots again it does so normally, but I don’t get video out past

```
EFI stub: Loaded initrd from LINUX_EFI_INITRD_MEDIA_GUID device path
EFI stub: Measured initrd data into PCR 9
```

but I can access the system via SSH again. I’m wondering if the video is just dying but the MOK screen is still waiting for input? I tried pressing enter twice then my root password to see if it would progress but nothing happened.

What are the exact key presses necessary from that part up to entering root password, and then what key presses are necessary to submit and complete the process?
 
> What are the exact key presses necessary from that part up to entering root password, and then what key presses are necessary to submit and complete the process?

root, Enter, password, Enter.

In the Proxmox web GUI, does it show Boot Mode = EFI (Secure Boot)?

On my mini PC, secure boot is disabled.
 
> root, Enter, password, Enter.
>
> In the Proxmox web GUI, does it show Boot Mode = EFI (Secure Boot)?
>
> On my mini PC, secure boot is disabled.
I tried that sequence and no dice.

No reference to secure boot in the GUI, and I've confirmed many times over that it's disabled in the BIOS.
 
Hey! I think I have a similar issue. Intel i7-12700T running proxmox, and trying to pass through vGPU to an Ubuntu VM.

I followed Derek Seaman's guide above (to the letter), and can see the GPU in the VM:

00:01.0 VGA compatible controller: Device 1234:1111 (rev 02)
06:1b.0 VGA compatible controller: Intel Corporation AlderLake-S GT1 (rev 0c)

...but Frigate inside the VM doesn't seem to recognise it. Also, like you, there is no renderD128 device.
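One thing that might be worth ruling out (just a guess on my part) is plain permissions: an app like Frigate can fail to use a node the kernel did create if its user can't open it. A small sketch to tell the two failure modes apart, assuming the usual Debian/Ubuntu render-group setup:

```shell
#!/bin/sh
# Sketch: distinguish "node missing" (driver/passthrough problem) from
# "node present but unreadable" (group/permission problem).
check_node() {
  node=$1
  if [ ! -e "$node" ]; then
    echo "missing: $node -- driver/passthrough problem, not permissions"
  elif [ -r "$node" ] && [ -w "$node" ]; then
    echo "ok: $node is readable and writable"
  else
    echo "fix: no access to $node -- add the user to the render group"
  fi
}

check_node /dev/dri/renderD128
```

In your case (and the OP's) there's no renderD128 at all, so it would report "missing", which points back at the driver/passthrough layer rather than permissions.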

I'd hoped that the 12th Gen CPUs were now old enough for these issues to have been resolved by smarter people than me before now but it seems not....
 
I don't think using the term "vGPU" for iGPU passthrough is appropriate, since it will confuse some people. vGPU (Nvidia) is a different enough concept from iGPU passthrough, and mixing the terms (especially in web searches) might unintentionally lead a searcher to mix methods and steps between the two.

There is a way to "split" the iGPU that resembles Nvidia's vGPU concept:

https://3os.org/infrastructure/proxmox/gpu-passthrough/igpu-split-passthrough/

I managed to get igpu split passthrough to work the one time I played with it. And if I recall correctly, it even worked in an lxc container.
 
From the link you posted: "iGPU GVT-g Split Passthrough is supported only on Intel's 5th generation to 10th generation CPUs!". We're running 12th gen CPUs, so AFAIK splitting the GPU via GVT-g won't work. On the newer CPUs the iGPU is instead supposed to support SR-IOV virtualisation, allowing it to be shared by multiple VMs. The tutorial referred to above (https://www.derekseaman.com/2023/06...passthrough-with-intel-alder-lake-legacy.html) uses the term vGPU, which is why I used it here.
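For what it's worth, if SR-IOV is the route, the VFs get created through the kernel's standard PCI sysfs interface once an SR-IOV-capable i915 (e.g. the out-of-tree i915-sriov-dkms module) is loaded. A sketch; the 0000:00:02.0 address is the usual iGPU slot, but verify it with `lspci` on your own host:

```shell
#!/bin/sh
# Sketch: create i915 VFs via the standard PCI SR-IOV sysfs interface.
# Assumes an SR-IOV-capable i915 (e.g. i915-sriov-dkms) is loaded;
# 0000:00:02.0 is the usual iGPU address, check with lspci first.
PF=/sys/bus/pci/devices/0000:00:02.0

clamp_vfs() {
  # never request more VFs than the PF advertises
  req=$1 max=$2
  if [ "$req" -gt "$max" ]; then echo "$max"; else echo "$req"; fi
}

if [ -r "$PF/sriov_totalvfs" ]; then
  want=$(clamp_vfs 7 "$(cat "$PF/sriov_totalvfs")")
  echo "$want" > "$PF/sriov_numvfs"
else
  echo "no SR-IOV capability exposed at $PF"
fi
```

The "7 VFs could be associated with this PF" line in the earlier hang suggests the module was at least advertising VFs, so the sysfs path should exist once the boot issue is sorted.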
 
@bawjaws I get it on the vGPU (you referenced it from that link). I wasn't saying you were the one mixing the term, just that it does get mixed up and you were following along. When I was searching for actual Nvidia vGPU processes, I came across that same link, saw Intel, and got genuinely confused myself until I realized the vGPU term was used wrongly in that context.

Also, I am a bit tired so I glossed over the 12th gen cpu. I remembered seeing this https://www.reddit.com/r/unRAID/comments/182b79m/intel_12th_gen_i512400_windows_10_vm_igpu_sriov/ a while back because I do have a 12th gen cpu (i7-12700) on one of my boxes but I haven't needed to play with the igpu on that one.
 
