Alder Lake GVT-d integrated graphics passthrough

Coming back to report the success I had last night. I let this sit in my bucket of "wait for developers to fix" for quite some time. Last night I mustered up the strength to want to try again. It seems as though I have got it to work.

Proxmox Host Setup:

I am not sure how much of this is necessary, but I am not going to go back and remove stuff to see if it still works; this is what it is, and it's staying that way :)

Updated to 7.3-4

Grub command (/etc/default/grub):
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt video=vesafb:off video=efifb:off initcall_blacklist=sysfb_init"

my "/etc/modprob.d/blacklist.conf" file has this in it:
blacklist igb blacklist i915

my "/etc/modprob.d/vfio.conf" file has this in it:
options vfio-pci ids=8086:150e,8086:4680 disable_vga=1
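Changes under /etc/modprobe.d/ get baked into the initramfs, so they need a rebuild and a reboot before they apply. A quick sketch of checking the result afterwards; 00:02.0 is only the usual address for the iGPU, so substitute whatever lspci shows on your host:
Code:
# pick up the blacklist and vfio-pci options
update-initramfs -u -k all
reboot

# after the reboot, the iGPU should report "Kernel driver in use: vfio-pci"
lspci -nnk -s 00:02.0
# and i915 should not be loaded on the host at all
lsmod | grep i915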

Running kernel 5.15.83-1-pve

Ubuntu 22.04 LTS Guest:
Kernel:
5.15.0-56-generic

Grub command (/etc/default/grub):
GRUB_CMDLINE_LINUX_DEFAULT="i915.force_probe=4680 i915.enable_gvt=1"

Built and installed these three repos, but they might not be necessary:
Another interesting thread: https://github.com/intel/media-driver/issues/1371

-----------------------------------------------------------------------------------------------------
After doing that, I transferred my plexmediaserver directory from the old Plex machine to the new one, enabled hardware encoding in the settings, and started multiple streams for an extended period of time. No crashes, kernel panics, or other instability; just very little CPU usage and minimal power consumption:

3x Transcoding +1 remote transcode (Medium).png

and 1 transcode on my phone at the same time (the weird clouds at the bottom are the PIP transcode intro to a movie), also showing the server's total power consumption (it runs more than just Plex):
phone transcode with power consumption (Medium).PNG
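One thing worth checking before turning on hardware transcoding is that the Plex service user can actually open the render node; on some Ubuntu installs it is not in the render group by default. A hedged sketch (the service user is normally called plex and the unit plexmediaserver, but verify both on your own system):
Code:
# who owns the render node, and which groups is the Plex user in?
ls -l /dev/dri
id plex

# if "render" is missing, add it and restart Plex
sudo usermod -aG render plex
sudo systemctl restart plexmediaserver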

Edit: Updated guest OS kernel and version.
 
Hello, could you please share your VM config?
Also, is there any chance to test this on a Windows guest?
 
The only special configuration options were passing through the correct PCIe device, OVMF BIOS, Q35 machine type, and setting the display to none.
Hardware Setup.png
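For anyone who prefers text over a screenshot, a rough sketch of setting the same options from the Proxmox shell with qm; the VM ID (100) and PCI address (0000:00:02.0) are placeholders rather than values copied from my VM:
Code:
# OVMF firmware, Q35 machine type, no emulated display
qm set 100 --bios ovmf --machine q35 --vga none
# pass the iGPU through as a PCIe device
qm set 100 --hostpci0 0000:00:02.0,pcie=1
When using OVMF the VM also needs an EFI disk; the web UI offers to add one when you select OVMF as the BIOS.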

Edit: Follow up, this virtual machine has been running without a reboot since I made the post about it working. It is very stable and hardware transcoding is working very well.

Edit: updated to note the importance of setting the display to none.
 

It also works for me on a Linux guest; however, it is highly unstable. E.g. after some time, or when jumping around in videos, dmesg will report ecode failures (GPU hangs). Even before that, when running memtester (a userspace memory test utility) inside the VM, it reports memory errors as long as the PCI device is attached.

In fact, it only looks like it works at first glance.
 
It’s really not working for you.

I just checked the uptime on my Plex server and it has been running for 18 days with the setup I described above and hardware transcoding has been functional. Before my updates, with hardware transcoding enabled, the server might run for 15 minutes before completely locking up the host and taking down my entire network (I virtualize pfSense on the same host). I skip around videos and the kids skip around videos, and we have yet to run into an issue. I have 2 external users who have been binge watching 2 different shows, and I haven't heard any complaints since the update. I regularly see their streams hardware transcoding.

Did you apply all of the settings and build and install all of the projects I mentioned above? Are you running the same kernel version? What about your OS version? And your grub options? What are your decode failure messages?
 
I just checked the uptime on my Plex server and it has been running for 18 days with the setup I described above and hardware transcoding has been functional.

Is your dmesg inside the VM empty of error messages? That is really the important thing. I also had it running for ~half a month of uptime without noticing anything. Error messages look like this: [drm] GPU HANG: ecode XXX

Actually, when you google that, there are a lot of reports of problems with Alder Lake/Raptor Lake:
https://quinncasey.com/raptor-lake-13th-gen-issues/
https://forum.proxmox.com/threads/igpu-passthrough-cause-memory-failure-in-linux-vm.117948/


Did you apply all of the settings and build and install all of the projects I mentioned above?

Settings are basically the same: UEFI, Q35, PCIe passthrough.

Are you running the same kernel version? What about your OS version?

I basically tried it with a lot of kernel versions, both on the hypervisor and in the guest.
I even self-compiled and installed the latest drm-tip kernel (the kernel with the most recent Intel drivers) and the intel-media-driver userspace driver, all leading to the same results.

However, what confuses me is your grub config in the VM: enable_gvt is used for GPU virtualization (GVT-g), not full passthrough (GVT-d).
 
Actually, when you google that, there are a lot of reports of problems with Alder Lake/Raptor Lake:
https://quinncasey.com/raptor-lake-13th-gen-issues/
https://forum.proxmox.com/threads/igpu-passthrough-cause-memory-failure-in-linux-vm.117948/
Both of those setups are different from mine: one is a 13th-gen Raptor Lake, and the other is using a bleeding-edge kernel version (5.19.x).

Is your dmesg inside the VM empty of error messages?
No error messages on my VM; running
Code:
sudo dmesg | grep HANG
yields no results. I also manually searched for any error messages at all and only came across an error communicating with one of my SMB shares, which apparently didn't matter because the VM can still communicate with that share.
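Beyond grepping for HANG specifically, a slightly broader sweep of the guest kernel log tends to catch the related reset messages as well; a small sketch (-T just adds human-readable timestamps):
Code:
sudo dmesg -T | grep -iE 'i915|hang|reset'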

If you have messed around that much with kernel versions and seen no results, maybe you should look at compiling the newest versions of the libraries I mentioned:

———————————————
However, what confuses me is your grub config in the VM: enable_gvt is used for GPU virtualization (GVT-g), not full passthrough (GVT-d).
I might consider removing this parameter to test whether it's necessary, but only if I need to restart the server. Until then it's working exceptionally well and I'm happy. Don't fix what isn't broken!
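For reference, the intel-media-driver project behind the GitHub issue linked earlier builds from source roughly like this; this is an illustrative sketch only (it assumes libva-dev, libigdgmm-dev, cmake, and build-essential are already installed, and it is not necessarily the same set of repos mentioned above):
Code:
git clone https://github.com/intel/media-driver.git
mkdir build_media && cd build_media
cmake ../media-driver
make -j"$(nproc)"
sudo make install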
 

This helped get me working. I haven't tested Plex yet, but this is the first time I have got the Q35, UEFI, and PCI Express settings to work and have the VM boot. I also set the display to None, which I hadn't done before. The CPU is an Intel(R) Pentium(R) Gold 7505.

Same PVE revision and kernel; not sure what the VM kernel is, but it's Ubuntu 22.04.

Currently I have Frigate running in a Docker container using QSV hardware acceleration, and it's showing 1% usage on the iGPU.
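For watching iGPU utilization from inside the guest, intel_gpu_top from the intel-gpu-tools package works well; a quick sketch:
Code:
sudo apt install intel-gpu-tools
# live per-engine utilization (Render/3D, Video, VideoEnhance)
sudo intel_gpu_top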

Hopefully I'll get Plex installed in the next few days and see if it works. Last time I tried, with the default BIOS and i440fx, it would work but crash after a couple of streams started, and the VM would need a reboot to function again. Fingers crossed it works this time and I can move Plex off my old server, which is doing CPU transcoding.
 
That’s great to hear. Let me know how the Plex transcoding works for you. Mine is still going great!
 
I'm also getting GPU hangs when using HW transcoding with an i7-1260P (12th gen NUC) passed through to an Ubuntu 22.04 VM.

I've tried the 5.15 and 6.1 kernels on the hypervisor, and both have the same issue. Here is /var/log/syslog on my 22.04 VM running Plex:

Feb 5 22:49:22 plex kernel: [ 156.582690] i915 0000:00:10.0: [drm] GPU HANG: ecode 12:0:00000000
Feb 5 22:49:22 plex kernel: [ 156.635002] i915 0000:00:10.0: [drm] GPU HANG: ecode 12:0:00000000
Feb 5 22:49:22 plex kernel: [ 157.277287] i915 0000:00:10.0: [drm] GPU HANG: ecode 12:0:00000000
Feb 5 22:49:24 plex kernel: [ 158.758832] i915 0000:00:10.0: [drm] GPU HANG: ecode 12:4:00000000, in Plex Transcoder [1987]
Feb 5 22:49:24 plex kernel: [ 158.758861] i915 0000:00:10.0: [drm] Plex Transcoder[1987] context reset due to GPU hang
Feb 5 22:49:27 plex kernel: [ 161.560506] i915 0000:00:10.0: [drm] GPU HANG: ecode 12:0:00000000
Feb 5 22:49:29 plex kernel: [ 163.545443] i915 0000:00:10.0: [drm] GPU HANG: ecode 12:4:00000000, in Plex Transcoder [1987]
Feb 5 22:49:29 plex kernel: [ 163.545459] i915 0000:00:10.0: [drm] Plex Transcoder[1987] context reset due to GPU hang
Feb 5 22:49:29 plex kernel: [ 163.578668] i915 0000:00:10.0: [drm] GPU HANG: ecode 12:0:00000000
Feb 5 22:49:31 plex kernel: [ 165.549118] i915 0000:00:10.0: [drm] GPU HANG: ecode 12:0:00000000
Feb 5 22:49:32 plex kernel: [ 166.953092] i915 0000:00:10.0: [drm] GPU HANG: ecode 12:0:00000000
Feb 5 22:49:32 plex kernel: [ 166.953298] i915 0000:00:10.0: [drm] Resetting chip for stopped heartbeat on rcs0
Feb 5 22:49:32 plex kernel: [ 167.261183] i915 0000:00:10.0: [drm] GuC firmware i915/adlp_guc_70.1.1.bin version 70.1
Feb 5 22:49:32 plex kernel: [ 167.261189] i915 0000:00:10.0: [drm] HuC firmware i915/tgl_huc_7.9.3.bin version 7.9
Feb 5 22:49:32 plex kernel: [ 167.276134] i915 0000:00:10.0: [drm] HuC authenticated
Feb 5 22:49:32 plex kernel: [ 167.277129] i915 0000:00:10.0: [drm] GuC submission enabled
Feb 5 22:49:32 plex kernel: [ 167.277131] i915 0000:00:10.0: [drm] GuC SLPC enabled

Here is my /etc/kernel/cmdline from pve:

root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on i915.enable_gvt=1 pcie_aspm=off iommu=pt video=vesafb:off video=efifb:off initcall_blacklist=sysfb_init
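With a ZFS root like this, /etc/kernel/cmdline is typically consumed by proxmox-boot-tool rather than grub, so edits only take effect after a refresh and reboot; a minimal sketch:
Code:
proxmox-boot-tool refresh
reboot
# confirm the running cmdline afterwards
cat /proc/cmdline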

Here is /etc/modules
# Modules required for PCI passthrough
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

# Modules required for Intel GVT
kvmgt
xengt
vfio-mdev

I have a blacklist file in /etc/modprobe.d/blacklist.conf:

blacklist igb
blacklist i915
 
It looks like you have the i915 driver enabled on your host via your kernel cmdline arguments. The host cannot have this driver loaded, otherwise you will see issues. There are two grub configurations: one for your host and one for your guest. Make sure you have them set up according to my post here: https://forum.proxmox.com/threads/a...rated-graphics-passthrough.105983/post-521634

Also make sure your guest VM has the display set to none and has the PCIe device passed through to it. You will not be able to use the VNC console from the PVE host; you will have to SSH into the VM from here on out.
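If losing the noVNC console is a concern, a serial console is a workable fallback alongside SSH. A sketch, assuming VM ID 100; the guest also needs a getty listening on ttyS0 (recent Ubuntu releases can enable it with systemctl enable --now serial-getty@ttyS0.service):
Code:
# on the PVE host: add a serial port to the VM, then (re)start it
qm set 100 --serial0 socket
# attach to the serial console
qm terminal 100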
 
I swapped back to my previous 8th-gen NUC, which works without issue. I'll reinstall Proxmox standalone on the 12th-gen NUC and test with a non-prod Plex install so my users can still use it.
 
I redid my Proxmox install following your setup to a T. I'm getting the very same GPU hangs I had earlier. Something is clearly wrong with either the i915 driver, VA-API, or IOMMU on Intel 12th gen. Looks like I'll be promptly sending this turd of a NUC back.

I suspect you aren't stressing your server enough to run into this issue, or you aren't decoding/encoding the particular codec that is causing grief. My stress test involves 6 streams with a mix of 1080p (H264, HEVC) and 4K (HEVC) transcoded down to 720p-1080p (H264, HEVC).

Feb 7 05:17:02 plex-test kernel: [ 1127.613373] i915 0000:01:00.0: [drm] GPU HANG: ecode 12:0:00000000
Feb 7 05:17:08 plex-test kernel: [ 1133.757653] i915 0000:01:00.0: [drm] GPU HANG: ecode 12:0:00000000
Feb 7 05:17:10 plex-test kernel: [ 1135.932613] i915 0000:01:00.0: [drm] GPU HANG: ecode 12:0:00000000
Feb 7 05:17:10 plex-test kernel: [ 1135.932680] i915 0000:01:00.0: [drm] Resetting chip for stopped heartbeat on rcs0
Feb 7 05:17:10 plex-test kernel: [ 1135.936341] i915 0000:01:00.0: [drm] GuC firmware i915/adlp_guc_62.0.3.bin version 62.0 submission:enabled
Feb 7 05:17:10 plex-test kernel: [ 1135.936344] i915 0000:01:00.0: [drm] GuC SLPC: enabled
Feb 7 05:17:10 plex-test kernel: [ 1135.936344] i915 0000:01:00.0: [drm] HuC firmware i915/tgl_huc_7.9.3.bin version 7.9 authenticated:yes
Feb 7 05:17:17 plex-test kernel: [ 1142.718548] i915 0000:01:00.0: [drm] GPU HANG: ecode 12:0:00000000
Feb 7 05:17:23 plex-test kernel: [ 1148.608516] i915 0000:01:00.0: [drm] GPU HANG: ecode 12:0:00000000
Feb 7 05:17:25 plex-test kernel: [ 1151.036806] i915 0000:01:00.0: [drm] GPU HANG: ecode 12:0:00000000
Feb 7 05:17:25 plex-test kernel: [ 1151.036862] i915 0000:01:00.0: [drm] Resetting chip for stopped heartbeat on rcs0
Feb 7 05:17:25 plex-test kernel: [ 1151.040280] i915 0000:01:00.0: [drm] GuC firmware i915/adlp_guc_62.0.3.bin version 62.0 submission:enabled
Feb 7 05:17:25 plex-test kernel: [ 1151.040283] i915 0000:01:00.0: [drm] GuC SLPC: enabled
Feb 7 05:17:25 plex-test kernel: [ 1151.040284] i915 0000:01:00.0: [drm] HuC firmware i915/tgl_huc_7.9.3.bin version 7.9 authenticated:yes
Feb 7 05:17:32 plex-test kernel: [ 1157.564777] i915 0000:01:00.0: [drm] GPU HANG: ecode 12:0:00000000
Feb 7 05:17:38 plex-test kernel: [ 1163.708794] i915 0000:01:00.0: [drm] GPU HANG: ecode 12:0:00000000
Feb 7 05:17:40 plex-test kernel: [ 1166.140155] i915 0000:01:00.0: [drm] GPU HANG: ecode 12:0:00000000
Feb 7 05:17:40 plex-test kernel: [ 1166.140212] i915 0000:01:00.0: [drm] Resetting chip for stopped heartbeat on rcs0
Feb 7 05:17:40 plex-test kernel: [ 1166.144242] i915 0000:01:00.0: [drm] GuC firmware i915/adlp_guc_62.0.3.bin version 62.0 submission:enabled
Feb 7 05:17:40 plex-test kernel: [ 1166.144244] i915 0000:01:00.0: [drm] GuC SLPC: enabled
Feb 7 05:17:40 plex-test kernel: [ 1166.144245] i915 0000:01:00.0: [drm] HuC firmware i915/tgl_huc_7.9.3.bin version 7.9 authenticated:yes
 
If you can return the NUC, get your money back.

I’ll do your stress test to see what’s going on from my end.

Did you compile and install the three drivers I mentioned? They happen to relate to exactly what you think is wrong; VA-API is one of them.
 
I switched to using an LXC container for Plex, bypassing the entire IOMMU/passthrough process altogether. It works perfectly now with much less configuration. I really do suspect IOMMU is to blame, as that's the only notable difference in my setup.
 
I just started 6 streams locally while some other users were already streaming: a few 4K HEVC streams transcoded down to 720p and 1080p, and one 1080p H264 stream transcoded down to 720p. The remote streams were direct playing at the time. I let these play for 10-20 minutes and saw no issues. I also attached grep output from dmesg so you can see I didn't have any GPU hang issues. Did you follow the guides about setting up IOMMU on the official Proxmox GPU passthrough page?
IYyD2Lr.png
Grep output.png
 
I believe I had, yes. An interesting difference between your setup and mine is that you have GuC submission disabled and mine had it enabled.
 
I'm also getting GPU hangs when using HW transcoding with an i7-1260P (12th gen NUC) passed through to an Ubuntu 22.04 VM.
I also did some more very extensive tests.

I noticed that it in fact also happens when not using passthrough at all, but when transcoding directly on the host.
I also talked with other owners of 12th-gen and 13th-gen iGPUs, and it happens for others too. It seems that the i915 driver is still just unstable.

The problem with this is that it only looks like it works at first glance. Sometimes the hangs only show up when doing very extensive testing with multiple streams. Sometimes it seems it can even run for months without issues when only some transcoding is done. So it's hard to tell whether the driver has problems for everyone, or whether specific configurations (i.e. BIOS or mainboard) are responsible.
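For anyone trying to reproduce this more deterministically than by clicking around in Plex, a hedged sketch of hammering the decode/scale/encode engines directly with ffmpeg and VA-API inside the guest; input.mkv and the 720p target are placeholders, and running several of these in parallel approximates a multi-stream transcode load:
Code:
# one transcode worker: VAAPI decode + scale + h264_vaapi encode, output discarded
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi \
  -i input.mkv -vf 'scale_vaapi=w=1280:h=720' -c:v h264_vaapi -an -f null -

# in another shell, watch for hangs while it runs
sudo dmesg -wT | grep -iE 'i915|hang'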
 
