Proxmox VE 9 PCIe Passthrough TrueNAS VM

Bribbo38
Aug 26, 2025
Hello, first time on the forum so sorry if I do something wrong.

After updating my server from version 8 to 9, I started having problems with my TrueNAS VM.
The configuration is pretty simple: I just have one of the SATA controllers on my board passed through to the VM.
Basically I have a 6-port Intel controller with 4 HDDs and 2 SSDs, and another ASMedia controller with the OS SSD.

The issue is that the VM can see the Intel controller, but it doesn't see any of the disks attached to it.

Here is some more info (if you need more, let me know):

Bash:
*-sata
       description: SATA controller
       product: ASM1061/ASM1062 Serial ATA Controller
       vendor: ASMedia Technology Inc.
       physical id: 0
       bus info: pci@0000:06:00.0
       logical name: scsi6
       version: 01
       width: 32 bits
       clock: 33MHz
       capabilities: sata msi pm pciexpress ahci_1.0 bus_master cap_list emulated
       configuration: driver=ahci latency=0
       resources: irq:33 ioport:d050(size=8) ioport:d040(size=4) ioport:d030(size=8) ioport:d020(size=4) ioport:d000(size=32) memory:f7800000-f78001ff
  *-sata
       description: SATA controller
       product: 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode]
       vendor: Intel Corporation
       physical id: 1f.2
       bus info: pci@0000:00:1f.2
       version: 05
       width: 32 bits
       clock: 66MHz
       capabilities: sata msi pm ahci_1.0 cap_list
       configuration: driver=vfio-pci latency=0
       resources: irq:19 ioport:f0b0(size=8) ioport:f0a0(size=4) ioport:f090(size=8) ioport:f080(size=4) ioport:f040(size=32) memory:f7c3a000-f7c3a7ff

The VM configuration is the following:

Bash:
agent: 1
boot: order=scsi0
cores: 3
cpu: x86-64-v2-AES
hostpci0: 0000:00:1f.2
memory: 16384
meta: creation-qemu=9.2.0,ctime=1743960628
name: truenas
net0: virtio=BC:24:11:7D:2F:1B,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-1000-disk-0,iothread=1,size=32G
scsi1: local-lvm:vm-1000-disk-1,iothread=1,size=1G
scsi2: local-lvm:vm-1000-disk-2,iothread=1,size=1G
scsihw: virtio-scsi-single
smbios1: uuid=bfd2c59b-a8fb-4b76-9bf6-63e60145fc37
sockets: 1
startup: order=1
tags: vm
vmgenid: d58adfae-aaab-4e77-a9e5-552a651b0c0a
 
I'd take a look at these
Code:
lspci -nnk
lsblk -o+FSTYPE,MODEL,TRAN,VENDOR
 
I'd take a look at these
Code:
lspci -nnk
lsblk -o+FSTYPE,MODEL,TRAN,VENDOR



For lspci -nnk (only the SATA controllers):

Bash:
00:1f.2 SATA controller [0106]: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] [8086:8c02] (rev 05)
        Subsystem: Super Micro Computer Inc Device [15d9:0805]
        Kernel driver in use: vfio-pci
        Kernel modules: ahci
06:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1061/ASM1062 Serial ATA Controller [1b21:0612] (rev 01)
        Subsystem: Super Micro Computer Inc Device [15d9:0805]
        Kernel driver in use: ahci
        Kernel modules: ahci

And for lsblk -o+FSTYPE,MODEL,TRAN,VENDOR

Bash:
NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS FSTYPE      MODEL                   TRAN   VENDOR
sdg                             8:96   0 953.9G  0 disk                         Samsung SSD 860 PRO 1TB sata   ATA
├─sdg1                          8:97   0  1007K  0 part
├─sdg2                          8:98   0     1G  0 part /boot/efi   vfat
└─sdg3                          8:99   0 952.9G  0 part             LVM2_member
  ├─pve-swap                  252:0    0     8G  0 lvm  [SWAP]      swap
  ├─pve-root                  252:1    0    96G  0 lvm  /           ext4
  ├─pve-data_tmeta            252:2    0   8.3G  0 lvm
  │ └─pve-data-tpool          252:4    0 816.2G  0 lvm
  │   ├─pve-data              252:5    0 816.2G  1 lvm
  │   ├─pve-vm--1000--disk--0 252:6    0    32G  0 lvm              zfs_member
  │   ├─pve-vm--1000--disk--1 252:7    0     1G  0 lvm
  │   └─pve-vm--1000--disk--2 252:8    0     1G  0 lvm
  └─pve-data_tdata            252:3    0 816.2G  0 lvm
    └─pve-data-tpool          252:4    0 816.2G  0 lvm
      ├─pve-data              252:5    0 816.2G  1 lvm
      ├─pve-vm--1000--disk--0 252:6    0    32G  0 lvm              zfs_member
      ├─pve-vm--1000--disk--1 252:7    0     1G  0 lvm
      └─pve-vm--1000--disk--2 252:8    0     1G  0 lvm

I can only see my boot drive; the others are not shown, even with the VM off.
 
I'd also check this inside the VM, but apparently there are known issues with newer kernels, so I'd look into these threads too
 
I'd also check this inside the VM, but apparently there are known issues with newer kernels, so I'd look into these threads too

The threads you linked are unfortunately different: my VM doesn't hang, it works fine and even sees the SATA controller, just not the disks.



I also ran the commands in the VM, and here are the results:

lspci -nnk
Bash:
00:10.0 SATA controller [0106]: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] [8086:8c02] (rev 05)
        Subsystem: Super Micro Computer Inc 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] [15d9:0805]
        Kernel driver in use: ahci
        Kernel modules: ahci

lsblk -o+FSTYPE,MODEL,TRAN,VENDOR
Bash:
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS FSTYPE     MODEL         TRAN VENDOR
sda      8:0    0   32G  0 disk                        QEMU HARDDISK      QEMU
├─sda1   8:1    0    1M  0 part
├─sda2   8:2    0  512M  0 part             vfat
└─sda3   8:3    0 31.5G  0 part             zfs_member
sdb      8:16   0    1G  0 disk                        QEMU HARDDISK      QEMU
└─sdb1   8:17   0 1013M  0 part             zfs_member
sdc      8:32   0    1G  0 disk                        QEMU HARDDISK      QEMU
└─sdc1   8:33   0 1013M  0 part             zfs_member
 
I'm just thinking that passthrough isn't to be relied on right now. You can check the VM's logs with something like this
Bash:
journalctl -rg "00:10|sata"
Maybe something failed.
 
I'm just thinking that passthrough isn't to be relied on right now. You can check the VM's logs with something like this
Bash:
journalctl -rg "00:10|sata"
Maybe something failed.

In the VM I can see that the SATA controller gets initialized with no problems, and it also shows that some ports are up, but no actual disks get attached.

(a short excerpt)
Bash:
-- Boot 022f7708da7c497dbafe8c47a3136c2a --
Aug 18 09:37:50 truenas kernel: ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Aug 18 09:37:50 truenas kernel: ata7: SATA link down (SStatus 0 SControl 300)
Aug 18 09:37:50 truenas kernel: ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Aug 18 09:37:50 truenas kernel: ata8: SATA link down (SStatus 0 SControl 300)
Aug 18 09:37:50 truenas kernel: ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Aug 18 09:37:50 truenas kernel: ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Aug 18 09:37:50 truenas kernel: ata8: SATA max UDMA/133 abar m2048@0xfea53000 port 0xfea53380 irq 42 lpm-pol 0
Aug 18 09:37:50 truenas kernel: ata7: SATA max UDMA/133 abar m2048@0xfea53000 port 0xfea53300 irq 42 lpm-pol 0
Aug 18 09:37:50 truenas kernel: ata6: SATA max UDMA/133 abar m2048@0xfea53000 port 0xfea53280 irq 42 lpm-pol 0
Aug 18 09:37:50 truenas kernel: ata5: SATA max UDMA/133 abar m2048@0xfea53000 port 0xfea53200 irq 42 lpm-pol 0
Aug 18 09:37:50 truenas kernel: ata4: SATA max UDMA/133 abar m2048@0xfea53000 port 0xfea53180 irq 42 lpm-pol 0
Aug 18 09:37:50 truenas kernel: ata3: SATA max UDMA/133 abar m2048@0xfea53000 port 0xfea53100 irq 42 lpm-pol 0
Aug 18 09:37:50 truenas kernel: ahci 0000:00:10.0: flags: 64bit ncq pm led clo pio slum part ems apst
Aug 18 09:37:50 truenas kernel: ahci 0000:00:10.0: 6/6 ports implemented (port mask 0x3f)
Aug 18 09:37:50 truenas kernel: ahci 0000:00:10.0: AHCI vers 0001.0300, 32 command slots, 6 Gbps, SATA mode
Aug 18 09:37:50 truenas kernel: ahci 0000:00:10.0: version 3.0
Aug 18 09:37:50 truenas kernel: pci 0000:00:10.0: BAR 5 [mem 0xfea53000-0xfea537ff]
Aug 18 09:37:50 truenas kernel: pci 0000:00:10.0: BAR 4 [io  0xf0a0-0xf0bf]
Aug 18 09:37:50 truenas kernel: pci 0000:00:10.0: BAR 3 [io  0xf104-0xf107]
Aug 18 09:37:50 truenas kernel: pci 0000:00:10.0: BAR 2 [io  0xf0f8-0xf0ff]
Aug 18 09:37:50 truenas kernel: pci 0000:00:10.0: BAR 1 [io  0xf100-0xf103]
Aug 18 09:37:50 truenas kernel: pci 0000:00:10.0: BAR 0 [io  0xf0f0-0xf0f7]
Aug 18 09:37:50 truenas kernel: pci 0000:00:10.0: [8086:8c02] type 00 class 0x010601 conventional PCI endpoint
 
I looked into it more and I think the best option right now is to downgrade the kernel (if possible) and hope it works.
But I'm not sure how to do it without causing more problems or breaking everything.
 
I can only see my boot drive; the others are not shown, even with the VM off.
If you are using the vfio driver, the host will not recognize the drive.

Please stop autostarting the virtual machine and restart it.

If you are using early binding or similar, you will need to disable it.
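
For example, a quick check for early binding on the host could look something like this (a rough sketch; adjust the PCI address to the controller in question):
Bash:
# look for vfio-pci ids=/softdep entries that would claim the controller at boot
grep -rn "vfio" /etc/modprobe.d/ /etc/modules 2>/dev/null
# show which driver is currently bound to the Intel controller
lspci -nnk -s 00:1f.2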
 
If you are using the vfio driver, the host will not recognize the drive.

Please stop autostarting the virtual machine and restart it.

If you are using early binding or similar, you will need to disable it.

My problem is not that the host doesn't see the drives; the problem is that the VM sees the SATA controller but can't access the drives.

And on the host I get these errors when the VM tries to access them:
Bash:
root@pve-beast:~# dmesg | grep -i dmar
[    0.008271] ACPI: DMAR 0x00000000D996B1F0 0000B8 (v01 INTEL  BDW      00000001 INTL 00000001)
[    0.008296] ACPI: Reserving DMAR table memory at [mem 0xd996b1f0-0xd996b2a7]
[    0.053194] DMAR: IOMMU enabled
[    0.167605] DMAR: Host address width 39
[    0.167606] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.167614] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap c0000020660462 ecap f0101a
[    0.167616] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.167619] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c20660462 ecap f010da
[    0.167621] DMAR: RMRR base: 0x000000d9e62000 end: 0x000000d9e70fff
[    0.167623] DMAR: RMRR base: 0x000000dc000000 end: 0x000000de1fffff
[    0.167625] DMAR-IR: IOAPIC id 8 under DRHD base  0xfed91000 IOMMU 1
[    0.167627] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.167628] DMAR-IR: x2apic is disabled because BIOS sets x2apic opt out bit.
[    0.167628] DMAR-IR: Use 'intremap=no_x2apic_optout' to override the BIOS setting.
[    0.168195] DMAR-IR: Enabled IRQ remapping in xapic mode
[    0.317632] DMAR: No ATSR found
[    0.317633] DMAR: No SATC found
[    0.317634] DMAR: dmar0: Using Queued invalidation
[    0.317640] DMAR: dmar1: Using Queued invalidation
[    0.318398] DMAR: Intel(R) Virtualization Technology for Directed I/O
[   20.431945] DMAR: DRHD: handling fault status reg 2
[   20.431950] DMAR: [DMA Read NO_PASID] Request device [00:1f.2] fault addr 0x7efca000 [fault reason 0x0c] non-zero reserved fields in PTE
[   20.432025] DMAR: DRHD: handling fault status reg 2
[   20.432028] DMAR: [DMA Read NO_PASID] Request device [00:1f.2] fault addr 0x7efc5000 [fault reason 0x0c] non-zero reserved fields in PTE
[   20.432097] DMAR: DRHD: handling fault status reg 2
[   20.432100] DMAR: [DMA Read NO_PASID] Request device [00:1f.2] fault addr 0x7efc5000 [fault reason 0x0c] non-zero reserved fields in PTE
[   20.432207] DMAR: DRHD: handling fault status reg 2
[  101.768682] dmar_fault: 8 callbacks suppressed
[  101.768686] DMAR: DRHD: handling fault status reg 3
[  101.768689] DMAR: [DMA Write NO_PASID] Request device [00:1f.2] fault addr 0x1002c0000 [fault reason 0x0c] non-zero reserved fields in PTE
[  101.768744] DMAR: DRHD: handling fault status reg 2
[  101.768746] DMAR: [DMA Write NO_PASID] Request device [00:1f.2] fault addr 0x100340000 [fault reason 0x0c] non-zero reserved fields in PTE
[  101.768845] DMAR: DRHD: handling fault status reg 2
[  101.768847] DMAR: [DMA Write NO_PASID] Request device [00:1f.2] fault addr 0x100360000 [fault reason 0x0c] non-zero reserved fields in PTE
[  102.079417] DMAR: DRHD: handling fault status reg 2
[  107.266835] dmar_fault: 14 callbacks suppressed
[  107.266840] DMAR: DRHD: handling fault status reg 2
[  107.266843] DMAR: [DMA Write NO_PASID] Request device [00:1f.2] fault addr 0x100340000 [fault reason 0x0c] non-zero reserved fields in PTE
[  112.386836] DMAR: DRHD: handling fault status reg 2
[  112.386842] DMAR: [DMA Write NO_PASID] Request device [00:1f.2] fault addr 0x1002e0000 [fault reason 0x0c] non-zero reserved fields in PTE
[  117.819295] DMAR: DRHD: handling fault status reg 2
[  117.819308] DMAR: [DMA Write NO_PASID] Request device [00:1f.2] fault addr 0x100300000 [fault reason 0x0c] non-zero reserved fields in PTE
[  117.819325] DMAR: DRHD: handling fault status reg 3
[  117.819328] DMAR: [DMA Read NO_PASID] Request device [00:1f.2] fault addr 0x100300000 [fault reason 0x0c] non-zero reserved fields in PTE
[  117.819348] DMAR: DRHD: handling fault status reg 3
[  117.819350] DMAR: [DMA Read NO_PASID] Request device [00:1f.2] fault addr 0x100340000 [fault reason 0x0c] non-zero reserved fields in PTE
[  117.819363] DMAR: DRHD: handling fault status reg 2
[  123.194344] dmar_fault: 11 callbacks suppressed
[  123.194349] DMAR: DRHD: handling fault status reg 2
[  123.194352] DMAR: [DMA Write NO_PASID] Request device [00:1f.2] fault addr 0x1002c0000 [fault reason 0x0c] non-zero reserved fields in PTE
[  123.194370] DMAR: DRHD: handling fault status reg 2
[  123.194372] DMAR: [DMA Read NO_PASID] Request device [00:1f.2] fault addr 0x1002c0000 [fault reason 0x0c] non-zero reserved fields in PTE
[  123.194413] DMAR: DRHD: handling fault status reg 2
[  123.194416] DMAR: [DMA Write NO_PASID] Request device [00:1f.2] fault addr 0x100320000 [fault reason 0x0c] non-zero reserved fields in PTE
[  123.194433] DMAR: DRHD: handling fault status reg 2
[  128.258810] dmar_fault: 2 callbacks suppressed
[  128.258814] DMAR: DRHD: handling fault status reg 2
[  128.258817] DMAR: [DMA Write NO_PASID] Request device [00:1f.2] fault addr 0x100340000 [fault reason 0x0c] non-zero reserved fields in PTE
[  133.378815] DMAR: DRHD: handling fault status reg 2
[  133.378821] DMAR: [DMA Write NO_PASID] Request device [00:1f.2] fault addr 0x1002e0000 [fault reason 0x0c] non-zero reserved fields in PTE
[  133.378893] DMAR: DRHD: handling fault status reg 2
[  133.378896] DMAR: [DMA Write NO_PASID] Request device [00:1f.2] fault addr 0x100300000 [fault reason 0x0c] non-zero reserved fields in PTE
[  133.378908] DMAR: DRHD: handling fault status reg 2
[  133.378910] DMAR: [DMA Read NO_PASID] Request device [00:1f.2] fault addr 0x100300000 [fault reason 0x0c] non-zero reserved fields in PTE
[  138.498897] DMAR: DRHD: handling fault status reg 2
[  138.498902] DMAR: [DMA Write NO_PASID] Request device [00:1f.2] fault addr 0x100360000 [fault reason 0x0c] non-zero reserved fields in PTE
[  143.722968] DMAR: DRHD: handling fault status reg 2
[  143.722975] DMAR: [DMA Write NO_PASID] Request device [00:1f.2] fault addr 0x1002c0000 [fault reason 0x0c] non-zero reserved fields in PTE
[  143.931421] DMAR: DRHD: handling fault status reg 2
[  143.931434] DMAR: [DMA Write NO_PASID] Request device [00:1f.2] fault addr 0x100340000 [fault reason 0x0c] non-zero reserved fields in PTE
[  143.931448] DMAR: DRHD: handling fault status reg 2
[  143.931451] DMAR: [DMA Write NO_PASID] Request device [00:1f.2] fault addr 0x1002e0000 [fault reason 0x0c] non-zero reserved fields in PTE
[  143.931472] DMAR: DRHD: handling fault status reg 2

00:1f.2 is my SATA controller passed through to the VM.
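
For reference, these faults can also be watched live on the host while the VM starts, for example:
Bash:
# follow kernel messages and filter for DMAR/vfio lines while the VM boots
dmesg --follow | grep -iE "dmar|vfio"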
 
For added information: until the new kernel has a workaround or gets updated, I'm going to use the old one (6.8.12-13-pve), which I pinned with the proxmox-boot-tool command.
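
For anyone wanting to do the same, the pinning roughly looks like this (a sketch; the version string is the one from my system, check what you have installed with the list command):
Bash:
# list the installed kernels
proxmox-boot-tool kernel list
# pin the 6.8 kernel so it stays the default across reboots
proxmox-boot-tool kernel pin 6.8.12-13-pve
# later, to return to the newest kernel
proxmox-boot-tool kernel unpin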
 
As I experienced the same issue after updating to Proxmox 9 with Linux kernel 6.14, I played around a little bit.
It seems that the ROM-Bar setting in the PCIe settings is the issue:

View attachment 90043

After I disabled it, my TrueNAS was able to start without any issue and was able to claim the ASMedia SATA controller of my NAS.
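
For reference, the same change should also be possible from the CLI instead of the GUI; something along these lines (VM ID 100 and the PCI address are placeholders, use your own; pcie=1 only applies to q35 machines):
Bash:
# disable the ROM-BAR on the passed-through controller (hostpci0) of VM 100
qm set 100 -hostpci0 0000:06:00.0,pcie=1,rombar=0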

Thank you for the help, but unfortunately my VM had the same errors when trying to mount the disks.
I don't know if the controller being an Intel C220 SATA makes it behave in some other way or if it's just not well supported. On top of that, I read that it being the motherboard's integrated controller can also add problems.

Unfortunately there is no point in trying with the other ASMedia ASM1061/ASM1062 SATA controller since it has only 2 ports and I need a minimum of 4.

But thank you again for the help.
 
Thank you for the help, but unfortunately my VM had the same errors when trying to mount the disks.
I don't know if the controller being an Intel C220 SATA makes it behave in some other way or if it's just not well supported. On top of that, I read that it being the motherboard's integrated controller can also add problems.

Unfortunately there is no point in trying with the other ASMedia ASM1061/ASM1062 SATA controller since it has only 2 ports and I need a minimum of 4.

But thank you again for the help.
ASMedia ASM1166 has 6 ports.
 
As I experienced the same issue after updating to Proxmox 9 with Linux kernel 6.14, I played around a little bit.
It seems that the ROM-Bar setting in the PCIe settings is the issue:

View attachment 90043

After I disabled it, my TrueNAS was able to start without any issue and was able to claim the ASMedia SATA controller of my NAS.
Thanks, this fixed the issue where my TrueNAS VM would not start anymore with ASM1166 PCIe passthrough configured on Linux kernel 6.14.

I had the issue on both Proxmox 9 + Kernel 6.14 and Proxmox 8 + Kernel 6.14.

One more thing to note though: I had to set the QEMU machine version of the TrueNAS VM to 9.1 as well, since I got the "aw bits" error after disabling the "ROM bar" feature.
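
In case it helps someone, the machine version can also be pinned from the CLI; roughly like this (VM ID 100 is a placeholder, and the exact machine string depends on whether the VM uses q35 or the default i440fx):
Bash:
# pin the VM to QEMU machine version 9.1 (q35 shown; use pc-i440fx-9.1 for the default machine type)
qm set 100 -machine pc-q35-9.1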

VMs work fine with this setup on Proxmox 9 + Linux kernel 6.14, but somehow my system now only reaches the C8 power state (I was reaching C10 on Proxmox 8 + kernel 6.8).

I hope I can reach C10 again after they update the kernel / QEMU, so I can use ROM-Bar and the latest QEMU version again.


//EDIT: I booted Proxmox 9 + Linux kernel 6.8 so I could test the TrueNAS VM with ASM1166 PCIe passthrough while having ROM-Bar enabled.
This did not fix the C-states of my system, so it seems to be an issue with Proxmox 9 in general...
 
As I experienced the same issue after updating to Proxmox 9 with Linux kernel 6.14, I played around a little bit.
It seems that the ROM-Bar setting in the PCIe settings is the issue:

View attachment 90043

After I disabled it, my TrueNAS was able to start without any issue and was able to claim the ASMedia SATA controller of my NAS.
Thank you for posting this... we had a couple of other threads going and I couldn't figure out why I couldn't pass the card through. After disabling rombar it now works!
 
I ran into this today after updating. Found this thread about ROM-Bar and had high hopes, but it didn't work.

After typing up a plea for help in this thread, but before hitting post, I thought I should double-check everything so people don't have to go through a bunch of initial-condition questions with me. I started going through all the basics for passthrough again, which I hadn't done in a long time.

TL;DR: it turns out my grub file with my IOMMU command-line settings had been renamed to grub.ucf-dist, so none of the settings were loaded. I'm not sure when that happened or how I was booted and running successfully in that state, but after putting it back everything is running fine again.

Phew! :)
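
For anyone hitting the same thing, comparing and restoring the file is roughly this (a sketch; back up first):
Bash:
# compare the active grub config with the renamed copy
diff /etc/default/grub /etc/default/grub.ucf-dist
# after restoring the correct settings, regenerate the boot configuration
update-grub                # hosts booting via GRUB
proxmox-boot-tool refresh  # hosts using proxmox-boot-tool / systemd-boot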
 
I ran into this today after updating. Found this thread about ROM-Bar and had high hopes, but it didn't work.

After typing up a plea for help in this thread, but before hitting post, I thought I should double-check everything so people don't have to go through a bunch of initial-condition questions with me. I started going through all the basics for passthrough again, which I hadn't done in a long time.

TL;DR: it turns out my grub file with my IOMMU command-line settings had been renamed to grub.ucf-dist, so none of the settings were loaded. I'm not sure when that happened or how I was booted and running successfully in that state, but after putting it back everything is running fine again.

Phew! :)




Thank you for the info. I checked my /etc/default/grub file and I have the following: intel_iommu=on iommu=pt ahci.mobile_lpm_policy=1, so it should be set up correctly. I also saw the grub.ucf-dist file and it has this: intel_iommu=on ahci.mobile_lpm_policy=1 as well.

Unfortunately, even with the settings in grub and with pcie=1,rombar=0, the TrueNAS VM still can't attach the disks from the SATA controller.
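
A way to double-check that these parameters actually reach the running kernel (just a sketch):
Bash:
# verify that intel_iommu=on and iommu=pt are on the command line the host actually booted with
cat /proc/cmdline
# note: hosts booting via proxmox-boot-tool / systemd-boot ignore /etc/default/grub;
# the parameters then belong in /etc/kernel/cmdline instead
cat /etc/kernel/cmdline 2>/dev/null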
 
On my motherboard I also have to use the downstream (ACS override) patch, otherwise the SATA controller gets lumped in with the Ethernet controllers. It could be something else in your specific situation, but maybe it's something common that should be in the wiki setup steps, so check it if you haven't already.
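
For example, checking how the devices are grouped (and, if needed, applying the ACS override) could look like this (a sketch; the override relaxes device isolation, so use it knowingly):
Bash:
# list every PCI device together with its IOMMU group
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    grp=${dev#/sys/kernel/iommu_groups/}; grp=${grp%%/*}
    printf "group %s: %s\n" "$grp" "$(basename "$dev")"
done | sort -V
# if the SATA controller shares a group with unrelated devices, the kernel parameter
# pcie_acs_override=downstream,multifunction (the "downstream patch") splits the groups up,
# at the cost of weaker isolation guarantees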