Intel NPU Driver / Passthrough

gardeningevenings
Feb 18, 2025
Hi,

I have an application that can make use of the NPU in the new Core Ultra series CPU in my PVE host. The app will be running in Ubuntu, and Intel has drivers ready to install in the VM.

I am not sure how to pass it through to the VM; it does not appear in the list of devices on the PCI mapping screen.

I think I might have to install the drivers on the host first? It seems that you can get the apt packages from Intel if you sign up to their SDK programme: https://amrdocs.intel.com/docs/2.2/gsg_robot/install-npu-driver.html

Is there anyone else who has done this or would have any advice please?

Thanks

Oli
 
any advice
I never did passthrough with Intel, but it could be possible that with these newer-gen GPUs you have to provide a driver on the host that can split the GPU into 2 or 4 "mediated devices". You would then pick one of the mediated devices for passthrough. This gives you acceleration inside the VM, but no VGA output.

If you want VGA output from the VM, you have to pass through the whole device, and no, for that you don't need a driver on the host.
does not appear in the list of devices in the PCI mapping screen.
Then you need to activate IOMMU, i.e. VT-d (Intel) or AMD-Vi (AMD), in the BIOS.
https://pve.proxmox.com/wiki/PCI(e)_Passthrough
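For reference, per that wiki page the flag goes on the kernel command line; a minimal sketch, assuming a GRUB-booted Intel host:

Code:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# apply and reboot, then verify:
update-grub
dmesg | grep -e DMAR -e IOMMU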
 
Thank you.

I think it's worth saying that although the concepts are similar, the NPU is not really a type of GPU (at least, to the best of my knowledge), but it is on the chip, so I suppose a similar approach applies. I do not need the VMs to have VGA output etc.; this is all for backend processing.

So when I look at mapping a PCI device by adding it, I do see a list of items in the drop-down (raw device) list, including TB4 controllers etc., but I do not see anything that relates to the NPU.

I checked before installing, and IOMMU etc. were all enabled in the BIOS. I think the fact that I can see some devices to map suggests passthrough is working to some extent.
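In case it helps anyone following along, this is roughly how I checked whether the NPU is visible on the host at all (a sketch; the exact description string may vary by platform):

Code:
lspci -nn | grep -i -e npu -e 'processing accelerators'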
 
the NPU is not really a type of GPU?
It doesn't really make a difference; a GPU, with or without a physical VGA connector inside the VM, is just some accelerator... OpenCL, Vulkan, CUDA, TensorFlow, PyTorch...

I don't have experience with NPUs yet; it is possible they're bound to the GPU, even with split IOMMU groups.

This is lspci from a Ryzen 8600G, which should have an NPU:
Code:
00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14e8
00:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Device 14e9
00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14ea
00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ed
00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ed
00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14ea
00:02.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ee
00:02.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14ee
00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14ea
00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14ea
00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14ea
00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14eb
00:08.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14eb
00:08.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 14eb
00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 71)
00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14f0
00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14f1
00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14f2
00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14f3
00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14f4
00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14f5
00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14f6
00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 14f7
01:00.0 VGA compatible controller: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] (rev a1)
01:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)
02:00.0 Non-Volatile memory controller: Micron Technology Inc 7450 PRO NVMe SSD (rev 01)
03:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset PCIe Switch Upstream Port (rev 01)
04:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset PCIe Switch Downstream Port (rev 01)
04:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset PCIe Switch Downstream Port (rev 01)
04:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset PCIe Switch Downstream Port (rev 01)
04:0a.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset PCIe Switch Downstream Port (rev 01)
04:0c.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset PCIe Switch Downstream Port (rev 01)
04:0d.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset PCIe Switch Downstream Port (rev 01)
05:00.0 Ethernet controller: Solarflare Communications SFC9120 10G Ethernet Controller (rev 01)
05:00.1 Ethernet controller: Solarflare Communications SFC9120 10G Ethernet Controller (rev 01)
08:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 05)
09:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset USB 3.2 Controller (rev 01)
0a:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] 600 Series Chipset SATA Controller (rev 01)
0b:00.0 Non-Volatile memory controller: Seagate Technology PLC FireCuda 530 SSD (rev 01)
0c:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Phoenix1 (rev 05)
0c:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Rembrandt Radeon High Definition Audio Controller
0c:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 19h (Model 74h) CCP/PSP 3.0 Device
0c:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Device 15b9
0c:00.4 USB controller: Advanced Micro Devices, Inc. [AMD] Device 15ba
0c:00.6 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h/19h HD Audio Controller
0d:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 14ec
0d:00.1 Signal processing controller: Advanced Micro Devices, Inc. [AMD] AMD IPU Device
0e:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 14ec
0e:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Device 15c0
0e:00.4 USB controller: Advanced Micro Devices, Inc. [AMD] Device 15c1

My best guess would be
0c:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 19h (Model 74h) CCP/PSP 3.0 Device
or
0d:00.1 Signal processing controller: Advanced Micro Devices, Inc. [AMD] AMD IPU Device
 
OK, I have made some progress, *I think*. I worked out the PCI ID was 00:0b.0 and passed that through.
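(For anyone else trying this: a minimal way to attach it from the host CLI, assuming a hypothetical VM ID of 100, is something like:)

Code:
qm set 100 -hostpci0 0000:00:0b.0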

I can now see the PCI device within the VM. However, after installing the driver in the VM, I'm now getting this:

[    1.615869] intel_vpu 0000:00:10.0: [drm] Firmware: intel/vpu/vpu_37xx_v0.0.bin, version: 20230726*MTL_CLIENT_SILICON-release*2101*ci_tag_mtl_pv_vpu_rc_20230726_2101*648a666b8b9
[ 2.670924] intel_vpu 0000:00:10.0: [drm] *ERROR* ivpu_boot(): Failed to boot the firmware: -110
[ 2.671105] intel_vpu 0000:00:10.0: [drm] *ERROR* ivpu_mmu_dump_event(): MMU EVTQ: 0x10 (Translation fault) SSID: 0 SID: 3, e[2] 00000000, e[3] 00000208, in addr: 0x84803000, fetch addr: 0x0
[ 2.671382] intel_vpu 0000:00:10.0: [drm] *ERROR* ivpu_mmu_dump_event(): MMU EVTQ: 0x10 (Translation fault) SSID: 0 SID: 3, e[2] 00000000, e[3] 00000208, in addr: 0x84803010, fetch addr: 0x0
[ 2.675752] intel_vpu 0000:00:10.0: [drm] ivpu_hw_37xx_power_down(): VPU not idle during power down
[ 2.676306] intel_vpu: probe of 0000:00:10.0 failed with error -110
[ 124.667316] intel_vpu 0000:00:10.0: [drm] Firmware: intel/vpu/vpu_37xx_v0.0.bin, version: 20241025*MTL_CLIENT_SILICON-release*1830*ci_tag_ud202444_vpu_rc_20241025_1830*ae072b315bc
[ 125.670862] intel_vpu 0000:00:10.0: [drm] *ERROR* ivpu_boot(): Failed to boot the firmware: -110
[ 125.670889] intel_vpu 0000:00:10.0: [drm] *ERROR* ivpu_mmu_dump_event(): MMU EVTQ: 0x10 (Translation fault) SSID: 0 SID: 3, e[2] 00000000, e[3] 00000208, in addr: 0x84803000, fetch addr: 0x0
[ 125.670902] intel_vpu 0000:00:10.0: [drm] *ERROR* ivpu_mmu_dump_event(): MMU EVTQ: 0x10 (Translation fault) SSID: 0 SID: 3, e[2] 00000000, e[3] 00000208, in addr: 0x84803010, fetch addr: 0x0
[ 125.681336] intel_vpu 0000:00:10.0: [drm] ivpu_hw_37xx_power_down(): VPU not idle during power down
[ 125.683518] intel_vpu: probe of 0000:00:10.0 failed with error -110

(Sorry, the code option doesn't work on iOS.)

I can write up what I did to get this far, but I just want to troubleshoot the rest of it first…
 
Hmm, maybe there is something more in the BIOS that needs to be enabled. Does that path exist, and does the file exist there? intel/vpu/vpu_37xx_v0.0.bin, wherever that is.

I worked out the PCI ID was 00:0b.0 and passed that through.
Please show the complete output of lspci from the host and the VM.
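A quick sketch for checking the firmware, assuming it ships in the standard linux-firmware location:

Code:
ls -l /lib/firmware/intel/vpu/
modinfo intel_vpu | grep -i firmware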
 
OK, so it's still not working, but I have:

1. Gone back into the BIOS and checked all the IOMMU and VT-d etc. settings; it's all showing as enabled and working.

2. Been through the PCI passthrough wiki page https://pve.proxmox.com/wiki/PCI_Passthrough#Verifying_IOMMU_parameters and the output from
Code:
dmesg | grep -e DMAR -e IOMMU
shows
Code:
DMAR-IR: Enabled IRQ remapping in x2apic mode
which makes sense, as I can also use
Code:
pvesh get /nodes/pve1/hardware/pci --pci-class-blacklist ""
to list all the PCI devices.

3. Output of lspci on the host:
Code:
00:00.0 Host bridge: Intel Corporation Device 7d2a (rev 01)
00:01.0 PCI bridge: Intel Corporation Device 7ecc (rev 10)
00:02.0 VGA compatible controller: Intel Corporation Arrow Lake-S [Intel Graphics] (rev 06)
00:04.0 Signal processing controller: Intel Corporation Device ad03 (rev 01)
00:07.0 PCI bridge: Intel Corporation Meteor Lake-P Thunderbolt 4 PCI Express Root Port #0 (rev 10)
00:07.1 PCI bridge: Intel Corporation Meteor Lake-P Thunderbolt 4 PCI Express Root Port #1 (rev 10)
00:08.0 System peripheral: Intel Corporation Device ae4c (rev 10)
00:0a.0 Signal processing controller: Intel Corporation Device ad0d (rev 01)
00:0b.0 Processing accelerators: Intel Corporation Arrow Lake NPU (rev 01)
00:0d.0 USB controller: Intel Corporation Meteor Lake-P Thunderbolt 4 USB Controller (rev 10)
00:0d.2 USB controller: Intel Corporation Meteor Lake-P Thunderbolt 4 NHI #0 (rev 10)
00:14.0 RAM memory: Intel Corporation Device ae7f (rev 10)
00:1f.0 ISA bridge: Intel Corporation Device ae0d (rev 10)
00:1f.5 Serial bus controller: Intel Corporation Device ae23 (rev 10)
01:00.0 Non-Volatile memory controller: Micron/Crucial Technology P3 Plus NVMe PCIe SSD (DRAM-less) (rev 01)
80:14.0 USB controller: Intel Corporation Device 7f6e (rev 10)
80:14.5 Non-VGA unclassified device: Intel Corporation Device 7f2f (rev 10)
80:15.0 Serial bus controller: Intel Corporation Device 7f4c (rev 10)
80:16.0 Communication controller: Intel Corporation Device 7f68 (rev 10)
80:17.0 SATA controller: Intel Corporation Device 7f62 (rev 10)
80:1b.0 PCI bridge: Intel Corporation Device 7f44 (rev 10)
80:1c.0 PCI bridge: Intel Corporation Device 7f38 (rev 10)
80:1c.1 PCI bridge: Intel Corporation Device 7f39 (rev 10)
80:1c.2 PCI bridge: Intel Corporation Device 7f3a (rev 10)
80:1c.4 PCI bridge: Intel Corporation Device 7f3c (rev 10)
80:1f.0 ISA bridge: Intel Corporation Device 7f06 (rev 10)
80:1f.3 Audio device: Intel Corporation Device 7f50 (rev 10)
80:1f.4 SMBus: Intel Corporation Device 7f23 (rev 10)
80:1f.5 Serial bus controller: Intel Corporation Device 7f24 (rev 10)
81:00.0 Non-Volatile memory controller: Micron/Crucial Technology P3 Plus NVMe PCIe SSD (DRAM-less) (rev 01)
82:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
82:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
83:00.0 Non-Volatile memory controller: Micron/Crucial Technology P3 Plus NVMe PCIe SSD (DRAM-less) (rev 01)
84:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. Device 8126 (rev 01)
85:00.0 SATA controller: ASMedia Technology Inc. ASM1166 Serial ATA Controller (rev 02)

4. Output of lspci in the VM:

Code:
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Device 1234:1111 (rev 02)
00:07.0 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02)
00:08.0 Communication controller: Red Hat, Inc. Virtio console
00:0a.0 SCSI storage controller: Red Hat, Inc. Virtio block device
00:10.0 Processing accelerators: Intel Corporation Arrow Lake NPU (rev 01)
00:12.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 03)
00:1e.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
00:1f.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge

5. I also realised that the release of Ubuntu I was running was not supported, so I've now upgraded to 24.04, which is listed as supported for the driver: https://github.com/canonical/intel-npu-driver-snap/blob/main/README.md

6. Still seeing the error on loading the driver:
Code:
[    1.624587] intel_vpu 0000:00:10.0: [drm] Firmware: intel/vpu/vpu_37xx_v0.0.bin, version: 20250115*MTL_CLIENT_SILICON-release*1905*ci_tag_ud202504_vpu_rc_20250115_1905*ae83b65d01c
[    2.699943] intel_vpu 0000:00:10.0: [drm] *ERROR* ivpu_boot(): Failed to boot the firmware: -110
[    2.700106] intel_vpu 0000:00:10.0: [drm] *ERROR* ivpu_mmu_dump_event(): MMU EVTQ: 0x10 (Translation fault) SSID: 0 SID: 3, e[2] 00000000, e[3] 00000208, in addr: 0x84803000, fetch addr: 0x0
[    2.700369] intel_vpu 0000:00:10.0: [drm] *ERROR* ivpu_mmu_dump_event(): MMU EVTQ: 0x10 (Translation fault) SSID: 0 SID: 3, e[2] 00000000, e[3] 00000208, in addr: 0x84803010, fetch addr: 0x0
[    2.704282] intel_vpu 0000:00:10.0: [drm] ivpu_hw_37xx_power_down(): VPU not idle during power down
[    2.704768] intel_vpu: probe of 0000:00:10.0 failed with error -110
 
00:0b.0 Processing accelerators: Intel Corporation Arrow Lake NPU (rev 01)
Looks good, this has to be the correct device.

What are these?
Code:
00:04.0 Signal processing controller: Intel Corporation Device ad03 (rev 01)
00:0a.0 Signal processing controller: Intel Corporation Device ad0d (rev 01)

6. Still seeing the error on loading the driver:
These errors look similar to the ones you get if you pass through a GPU without its audio device.

For example, my NVIDIA card needs both devices to work, regardless of whether you need the audio part:
Code:
01:00.0 VGA compatible controller: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] (rev a1)
01:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)

[Screenshot: PCI device mapping in the GUI = 01:00.0 + 01:00.1]

So just a guess, but maybe you need these together:
Code:
00:0a.0 Signal processing controller: Intel Corporation Device ad0d (rev 01)
00:0b.0 Processing accelerators: Intel Corporation Arrow Lake NPU (rev 01)

Or just 00:0b.0 with "All functions" deactivated.
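As a sketch of what I mean (VM ID 100 is hypothetical; substitute yours):

Code:
qm set 100 -hostpci0 0000:00:0a.0 -hostpci1 0000:00:0b.0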
 
OK, so it's still not working, but I have ...

[Truncated]

I am having pretty much the exact same issue. The only difference is that I can get it working on the host, but I receive the same "Failed to boot the firmware: -110" inside the guest VM (Ubuntu 24). I've tried seemingly everything and am hoping someone else might have figured this out.

ASUS NUC 14 Pro AI (Core Ultra 7, Meteor Lake)

Host:


Bash:
root@proxmox:~# uname -a
Linux proxmox 6.8.12-8-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-8 (2025-01-24T12:32Z) x86_64 GNU/Linux

Added options to vfio.conf (8086:7d1d is the vendor:device ID of the recognized NPU):

Bash:
root@proxmox:~# cat /etc/modprobe.d/vfio.conf
options vfio-pci ids=8086:7d1d
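(And rebuilt the initramfs afterwards so the vfio options are picked up at boot; standard step, shown for completeness:)

Bash:
update-initramfs -u -k all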

Added the vfio modules to /etc/modules:

Bash:
root@proxmox:~# cat /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

Added intel_iommu=on to /etc/default/grub, and tried both with and without iommu=pt:

Bash:
root@proxmox:~# cat /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
(truncated)

When I detach the PCI device from the VM and reboot, /dev/accel/accel0 is visible to the Proxmox host.
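A quick way to confirm which driver holds the device on the host (vfio-pci while it's attached to the VM, intel_vpu otherwise), as a sketch:

Bash:
lspci -nnk -d 8086:7d1d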

Guest (Ubuntu 24):

Tried kernels 6.11 and 6.13

Bash:
root:~ $ dmesg | grep vpu
[    2.635526] intel_vpu 0000:06:10.0: [drm] Firmware: intel/vpu/vpu_37xx_v0.0.bin, version: 20250115*MTL_CLIENT_SILICON-release*1905*ci_tag_ud202504_vpu_rc_20250115_1905*ae83b65d01c
[    2.635528] intel_vpu 0000:06:10.0: [drm] Scheduler mode: OS
[    3.794539] intel_vpu 0000:06:10.0: [drm] *ERROR* ivpu_boot(): Failed to boot the firmware: -110
[    3.794729] intel_vpu 0000:06:10.0: [drm] *ERROR* ivpu_mmu_dump_event(): MMU EVTQ: 0x10 (Translation fault) SSID: 0 SID: 3, e[2] 00000000, e[3] 00000208, in addr: 0x84803000, fetch addr: 0x0
[    3.795072] intel_vpu 0000:06:10.0: [drm] *ERROR* ivpu_mmu_dump_event(): MMU EVTQ: 0x10 (Translation fault) SSID: 0 SID: 3, e[2] 00000000, e[3] 00000208, in addr: 0x84803010, fetch addr: 0x0
[    3.868365] intel_vpu 0000:06:10.0: [drm] ivpu_hw_power_down(): NPU not idle during power down
[    3.880927] intel_vpu 0000:06:10.0: probe with driver intel_vpu failed with error -110

Bash:
root:~ $ lspci -nnv | grep -i intel

00:00.0 Host bridge [0600]: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller [8086:29c0]
00:1a.0 USB controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 [8086:2937] (rev 03) (prog-if 00 [UHCI])
00:1a.1 USB controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 [8086:2938] (rev 03) (prog-if 00 [UHCI])
00:1a.2 USB controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 [8086:2939] (rev 03) (prog-if 00 [UHCI])
00:1a.7 USB controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 [8086:293c] (rev 03) (prog-if 20 [EHCI])
00:1b.0 Audio device [0403]: Intel Corporation 82801I (ICH9 Family) HD Audio Controller [8086:293e] (rev 03)
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd_hda_intel
00:1d.0 USB controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 [8086:2934] (rev 03) (prog-if 00 [UHCI])
00:1d.1 USB controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 [8086:2935] (rev 03) (prog-if 00 [UHCI])
00:1d.2 USB controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 [8086:2936] (rev 03) (prog-if 00 [UHCI])
00:1d.7 USB controller [0c03]: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 [8086:293a] (rev 03) (prog-if 20 [EHCI])
00:1e.0 PCI bridge [0604]: Intel Corporation 82801 PCI Bridge [8086:244e] (rev 92) (prog-if 01 [Subtractive decode])
00:1f.0 ISA bridge [0601]: Intel Corporation 82801IB (ICH9) LPC Interface Controller [8086:2918] (rev 02)
00:1f.2 SATA controller [0106]: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] [8086:2922] (rev 02) (prog-if 01 [AHCI 1.0])
00:1f.3 SMBus [0c05]: Intel Corporation 82801I (ICH9 Family) SMBus Controller [8086:2930] (rev 02)
06:10.0 Processing accelerators [1200]: Intel Corporation Meteor Lake NPU [8086:7d1d] (rev 04)
        Kernel modules: intel_vpu

Note that last line: the NPU is passing through to the VM.

So just a guess, but maybe you need these together:
Code:
00:0a.0 Signal processing controller: Intel Corporation Device ad0d (rev 01)
00:0b.0 Processing accelerators: Intel Corporation Arrow Lake NPU (rev 01)

Or just 00:0b.0 with "All functions" deactivated.

I realize I'm on Meteor Lake, but I tried this as well and no luck.

This feels like a driver bug to me, but it's possible I'm doing something wrong. Any assistance would be much appreciated; thanks in advance.