Broken PCI passthrough


New Member
Jun 16, 2023
I tried to use 9p for directory passthrough.
The examples I found say to:
1. Add the 9p modules to /etc/initramfs-tools/modules
2. Run update-initramfs -u
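The two steps above would look roughly like this — the exact module names are my assumption from common 9p/virtfs guides, the post does not list what was actually added:

```shell
# Assumed 9p module names from typical 9p/virtfs guides; the post does not
# show the actual lines added to /etc/initramfs-tools/modules.
printf '%s\n' 9p 9pnet 9pnet_virtio >> /etc/initramfs-tools/modules
update-initramfs -u   # rebuild the initramfs so the listed modules are included
```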

After this, PCIe passthrough stopped working. The started VM only shows "no content".

Removing 9p and running "update-initramfs -u" doesn't fix anything.

I added the vfio modules to /etc/modules and ran "update-initramfs -u -k all".
Same result.
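For context, Proxmox passthrough guides usually mean entries like these — a sketch, since the post does not quote the exact lines:

```shell
# Common vfio modules from PCIe passthrough guides (assumed, not quoted from
# the post). On recent kernels vfio_virqfd is no longer a separate module.
printf '%s\n' vfio vfio_iommu_type1 vfio_pci >> /etc/modules
update-initramfs -u -k all   # rebuild the initramfs for all installed kernels
```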

"dmesg | grep -e DMAR -e IOMMU -e AMD-Vi" shows nothing.

"find /sys/kernel/iommu_groups/ -type l" gives:

How can I fix PCIe passthrough?
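A quick sketch to check whether any IOMMU groups exist at all — an empty result is exactly the failure mode here (the `IOMMU_DIR` variable is only for illustration):

```shell
# Count devices assigned to IOMMU groups; zero means the IOMMU is not active
# on this boot (e.g. intel_iommu=on missing from the kernel command line).
IOMMU_DIR="${IOMMU_DIR:-/sys/kernel/iommu_groups}"
count=$(find "$IOMMU_DIR" -type l 2>/dev/null | wc -l)
if [ "$count" -eq 0 ]; then
    echo "no IOMMU groups found"
else
    echo "$count devices in IOMMU groups"
fi
```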

Proxmox 8.0.
Hi, I cannot really imagine how the 9p modules could impact the PCI passthrough (though I am not saying it cannot happen).

First, 9p is not really a supported setup for us, so we don't test it here and you may have breakage.
Second, can you post your versions (pveversion -v), the VM config (qm config ID), and the complete output of dmesg?
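The three requested outputs can be collected in one go, e.g. (102 is simply the VM ID that appears later in this thread):

```shell
# Collect the requested diagnostics into a single file for posting
{
  pveversion -v      # package versions
  qm config 102      # VM config; replace 102 with your VM ID
  dmesg              # full kernel log
} > /tmp/passthrough-debug.txt 2>&1
```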
Removing 9p and running "update-initramfs -u" doesn't fix PCIe passthrough. I really would like to fix the issue and don't care about using 9p — I just wanted to test it. I will post the dmesg output once I get access to the PVE host.
Proxmox 8 is installed on Bookworm. PCIe passthrough worked fine until I ran "update-initramfs -u". No other changes were made (later I tried adding the vfio modules and ran "update-initramfs -u -k all").

root@pve:~# pveversion -v
proxmox-ve: 8.0.0 (running kernel: 6.2.16-2-pve)
pve-manager: 8.0.0~8 (running version: 8.0.0~8/61cf3e3d9af500c2)
pve-kernel-6.2: 8.0.1
pve-kernel-helper: 7.3-4
pve-kernel-6.2.16-2-pve: 6.2.16-2
pve-kernel-6.2.16-1-pve: 6.2.16-1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx1
libjs-extjs: 7.0.0-3
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.0
libpve-access-control: 8.0.1
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.3
libpve-guest-common-perl: 5.0.3
libpve-http-server-perl: 5.0.3
libpve-rs-perl: 0.8.3
libpve-storage-perl: 8.0.0
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 2.99.0-1
proxmox-backup-file-restore: 2.99.0-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.0
proxmox-widget-toolkit: 4.0.4
pve-cluster: 8.0.1
pve-container: 5.0.1
pve-docs: 8.0.1
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.1
pve-firmware: 3.7-1
pve-ha-manager: 4.0.2
pve-i18n: 3.0.3
pve-qemu-kvm: 8.0.2-3
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.3
smartmontools: 7.3-1+b1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1

root@pve:~# qm config 102
balloon: 512
boot: order=scsi0;ide2;net0
cores: 1
hostpci0: 0000:00:12.0
ide2: local:iso/proxmox-backup-server_2.4-1.iso,media=cdrom,size=841792K
memory: 2048
meta: creation-qemu=8.0.2,ctime=1686829307
name: delete
net0: virtio=6E:2D:F6:56:DA:1E,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local:102/vm-102-disk-0.qcow2,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=1a73515c-bcf9-4ec1-b92b-01e9f2e6c894
sockets: 1
vmgenid: 6af0338d-24cf-481b-bfe4-f4565d37ae03
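The device behind hostpci0 can be inspected on the host like this (the address 0000:00:12.0 is taken from the config above):

```shell
# Show vendor/device IDs and which kernel driver currently binds 00:12.0;
# while the VM runs with passthrough, the driver in use should be vfio-pci.
lspci -nnk -s 0000:00:12.0
```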

root@pve:~# dmesg | grep -e DMAR -e IOMMU
[    0.010522] ACPI: DMAR 0x00000000795D8000 0000A8 (v01 INTEL  GLK-SOC  00000003 BRXT 0100000D)
[    0.010605] ACPI: Reserving DMAR table memory at [mem 0x795d8000-0x795d80a7]
[    0.041090] DMAR: IOMMU enabled
[    0.142551] DMAR: Host address width 39
[    0.142554] DMAR: DRHD base: 0x000000fed64000 flags: 0x0
[    0.142568] DMAR: dmar0: reg_base_addr fed64000 ver 1:0 cap 1c0000c40660462 ecap 9e2ff0505e
[    0.142574] DMAR: DRHD base: 0x000000fed65000 flags: 0x1
[    0.142585] DMAR: dmar1: reg_base_addr fed65000 ver 1:0 cap d2008c40660462 ecap f050da
[    0.142591] DMAR: RMRR base: 0x0000007954e000 end: 0x0000007956dfff
[    0.142596] DMAR: RMRR base: 0x0000007b800000 end: 0x0000007fffffff
[    0.142601] DMAR-IR: IOAPIC id 1 under DRHD base  0xfed65000 IOMMU 1
[    0.142605] DMAR-IR: HPET id 0 under DRHD base 0xfed65000
[    0.142608] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.144749] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    0.601510] DMAR: No ATSR found
[    0.601512] DMAR: No SATC found
[    0.601515] DMAR: IOMMU feature fl1gp_support inconsistent
[    0.601517] DMAR: IOMMU feature pgsel_inv inconsistent
[    0.601519] DMAR: IOMMU feature nwfs inconsistent
[    0.601522] DMAR: IOMMU feature eafs inconsistent
[    0.601523] DMAR: IOMMU feature prs inconsistent
[    0.601525] DMAR: IOMMU feature nest inconsistent
[    0.601527] DMAR: IOMMU feature mts inconsistent
[    0.601529] DMAR: IOMMU feature sc_support inconsistent
[    0.601530] DMAR: IOMMU feature dev_iotlb_support inconsistent
[    0.601533] DMAR: dmar0: Using Queued invalidation
[    0.601538] DMAR: dmar1: Using Queued invalidation
[    0.604993] DMAR: Intel(R) Virtualization Technology for Directed I/O

Complete dmesg, taken after the test VM was started:
[   14.774575] bpfilter: Loaded bpfilter_umh pid 1481
[   14.775134] Started bpfilter
[  427.771418] ata1.00: disable device
[  427.823260] sd 0:0:0:0: [sda] Synchronizing SCSI cache
[  427.823310] sd 0:0:0:0: [sda] Synchronize Cache(10) failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  427.823313] sd 0:0:0:0: [sda] Stopping disk
[  427.823322] sd 0:0:0:0: [sda] Start/Stop Unit failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK

root@pve:~# cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.

# Generated by sensors-detect on Sat Jun 10 10:58:56 2023
# Chip drivers


root@pve:~# cat /etc/initramfs-tools/modules
# List of modules that you want to include in your initramfs.
# They will be loaded at boot time in the order below.
# Syntax:  module_name [args ...]
# You must run update-initramfs(8) to effect this change.
# Examples:
# raid1
# sd_mod


The test VM has now been stuck "starting" for 6 hours.


I had PVE 7.4 (5.15 kernel) on an external disk. I booted it to check whether passthrough still works (most internet results point to hardware failure, so I really wanted to rule that out). I ran "update-initramfs -u" and rebooted — everything works there.
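One way to see what the rebuild actually changed is to list the initramfs contents of the broken kernel and compare them with the working one (the kernel version is taken from the pveversion output above):

```shell
# List 9p/vfio-related files baked into the 6.2 initramfs; compare against
# the working 5.15 image to spot what the rebuild pulled in or dropped.
lsinitramfs /boot/initrd.img-6.2.16-2-pve | grep -Ei 'vfio|9p' || true
```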

No idea how to fix the PVE 8 installation.

