Pass-through - viable for M.2 NVMe to SFF-8087?

Proymoy

New Member
Jun 28, 2025
Quick question: can I pass through an M.2 NVMe slot to SFF-8087 (3x HDD + 1 or 2x SSD)?

Details: Is the following setup viable?
I have a B550M DS3H motherboard. It has a PCIe x16 slot running at x16, a second x16-size slot running at x4, and an M.2 slot (x4). I want the GPU in the first PCIe slot and a quad-port 2.5 GbE network card in the second (not sure if that's possible), and I would like to pass 3x HDD + 1x SSD through to a virtual machine (I might actually need 2 SSDs if I also need to install TrueNAS on a passed-through drive).

Based on the IOMMU report below, I think I can't pass through the onboard SATA controller, right? It is part of group 16 together with the USB controller and other devices. The NVMe controller, though, seems to be the sole member of group 15, so I might be able to pass that one through?

Group 0: [1022:1480] 00:00.0 Host bridge Starship/Matisse Root Complex
Group 1: [1022:1482] 00:01.0 Host bridge Starship/Matisse PCIe Dummy Host Bridge
Group 2: [1022:1483] [R] 00:01.1 PCI bridge Starship/Matisse GPP Bridge
Group 3: [1022:1483] [R] 00:01.2 PCI bridge Starship/Matisse GPP Bridge
Group 4: [1022:1482] 00:02.0 Host bridge Starship/Matisse PCIe Dummy Host Bridge
Group 5: [1022:1482] 00:03.0 Host bridge Starship/Matisse PCIe Dummy Host Bridge
Group 6: [1022:1483] [R] 00:03.1 PCI bridge Starship/Matisse GPP Bridge
Group 7: [1022:1482] 00:04.0 Host bridge Starship/Matisse PCIe Dummy Host Bridge
Group 8: [1022:1482] 00:05.0 Host bridge Starship/Matisse PCIe Dummy Host Bridge
Group 9: [1022:1482] 00:07.0 Host bridge Starship/Matisse PCIe Dummy Host Bridge
Group 10: [1022:1484] [R] 00:07.1 PCI bridge Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
Group 11: [1022:1482] 00:08.0 Host bridge Starship/Matisse PCIe Dummy Host Bridge
Group 12: [1022:1484] [R] 00:08.1 PCI bridge Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
Group 13: [1022:790b] 00:14.0 SMBus FCH SMBus Controller
[1022:790e] 00:14.3 ISA bridge FCH LPC Bridge
Group 14: [1022:1440] 00:18.0 Host bridge Matisse/Vermeer Data Fabric: Device 18h; Function 0
[1022:1441] 00:18.1 Host bridge Matisse/Vermeer Data Fabric: Device 18h; Function 1
[1022:1442] 00:18.2 Host bridge Matisse/Vermeer Data Fabric: Device 18h; Function 2
[1022:1443] 00:18.3 Host bridge Matisse/Vermeer Data Fabric: Device 18h; Function 3
[1022:1444] 00:18.4 Host bridge Matisse/Vermeer Data Fabric: Device 18h; Function 4
[1022:1445] 00:18.5 Host bridge Matisse/Vermeer Data Fabric: Device 18h; Function 5
[1022:1446] 00:18.6 Host bridge Matisse/Vermeer Data Fabric: Device 18h; Function 6
[1022:1447] 00:18.7 Host bridge Matisse/Vermeer Data Fabric: Device 18h; Function 7
Group 15: [2646:5013] [R] 01:00.0 Non-Volatile memory controller KC3000/FURY Renegade NVMe SSD [E18]
Group 16: [1022:43ee] [R] 02:00.0 USB controller 500 Series Chipset USB 3.1 XHCI Controller
USB: [1d6b:0002] Bus 001 Device 001 Linux Foundation 2.0 root hub
USB: [046d:c52b] Bus 001 Device 002 Logitech, Inc. Unifying Receiver
USB: [048d:5702] Bus 001 Device 004 Integrated Technology Express, Inc. RGB LED Controller
USB: [18d1:d001] Bus 001 Device 005 Google Inc. Nexus 4 (fastboot)
USB: [046d:0aea] Bus 001 Device 006 True Wireless
USB: [1d6b:0003] Bus 002 Device 001 Linux Foundation 3.0 root hub
[1022:43eb] 02:00.1 SATA controller 500 Series Chipset SATA Controller
[1022:43e9] 02:00.2 PCI bridge 500 Series Chipset Switch Upstream Port
[1022:43ea] [R] 03:09.0 PCI bridge Device 43ea
[10ec:8168] [R] 04:00.0 Ethernet controller RTL8111/8168/8211/8411 PCI Express Gigabit Ethernet Controller
Group 17: [1002:1478] [R] 05:00.0 PCI bridge Navi 10 XL Upstream Port of PCI Express Switch
Group 18: [1002:1479] [R] 06:00.0 PCI bridge Navi 10 XL Downstream Port of PCI Express Switch
Group 19: [1002:73df] [R] 07:00.0 VGA compatible controller Navi 22 [Radeon RX 6700/6700 XT/6750 XT / 6800M/6850M XT]
Group 20: [1002:ab28] 07:00.1 Audio device Navi 21/23 HDMI/DP Audio Controller
Group 21: [1022:148a] [R] 08:00.0 Non-Essential Instrumentation [1300] Starship/Matisse PCIe Dummy Function
Group 22: [1022:1485] [R] 09:00.0 Non-Essential Instrumentation [1300] Starship/Matisse Reserved SPP
Group 23: [1022:1486] [R] 09:00.1 Encryption controller Starship/Matisse Cryptographic Coprocessor PSPCPP
Group 24: [1022:149c] [R] 09:00.3 USB controller Matisse USB 3.0 Host Controller
USB: [1d6b:0002] Bus 003 Device 001 Linux Foundation 2.0 root hub
USB: [058f:6254] Bus 003 Device 002 Alcor Micro Corp. USB Hub
USB: [08bb:2704] Bus 003 Device 004 Texas Instruments PCM2704 16-bit stereo audio DAC
USB: [5043:4d6f] Bus 003 Device 054 Mouse
USB: [1d6b:0003] Bus 004 Device 001 Linux Foundation 3.0 root hub
Group 25: [1022:1487] 09:00.4 Audio device Starship/Matisse HD Audio Controller
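
(For reference, the PCI part of a report like this can be generated with the usual shell loop over /sys/kernel/iommu_groups; the exact tool used here may format things differently, and the USB: lines would come from lsusb:)

for dev in /sys/kernel/iommu_groups/*/devices/*; do
    group=$(basename "$(dirname "$(dirname "$dev")")")   # IOMMU group number
    printf 'Group %s: ' "$group"
    lspci -nns "$(basename "$dev")"                       # IDs, class and device name
done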

More unnecessary details: I am trying to figure out whether I can build a decent Proxmox/TrueNAS server by reusing my current PC motherboard (and buying a new one for the PC), or whether I need to buy a motherboard specifically for Proxmox/TrueNAS with enough PCIe slots.
 
Last edited:
You’re correct. You can easily pass through your NVMe drive since it’s in its own IOMMU group, but you can’t pass through your onboard SATA controller individually, because it shares group 16 with the USB controller, the Ethernet controller and the chipset bridges. If you pass the M.2 slot through to the VM, you should be able to see the SATA controller of the M.2-to-SFF-8087 adapter card in TrueNAS.
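
As a rough sketch of the Proxmox side (the VM ID 100 is just an example; 0000:01:00.0 is the address of whatever sits behind the M.2 slot, currently your NVMe drive, later the adapter's SATA controller):

qm set 100 -machine q35                     # q35 machine type, needed for pcie=1 below
qm set 100 -hostpci0 0000:01:00.0,pcie=1    # hand the device behind the M.2 slot to VM 100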

Regarding your 4x 2.5 Gbps NIC in the second PCIe x16 slot (which runs at PCIe 3.0 x4), PCIe 3.0 x4 provides 32 Gbps unidirectional bandwidth, more than enough to cover the combined 10 Gbps from the four 2.5 Gbps ports. Bandwidth won’t be a bottleneck.
 
  • Like
Reactions: Proymoy
Hi @groque, thank you for your answer, this is good news, and especially big thanks for making it all perfectly clear.
Do you also know whether I should install TrueNAS on a passed-through drive as well, or whether that would be a problem (would I be duplicating ZFS)?
Thank you
 
Hi @groque, thank you for your answer, this is good news, and especially big thanks for making it all perfectly clear.
Do you also know whether I should install TrueNAS on a passed-through drive as well, or whether that would be a problem (would I be duplicating ZFS)?
Thank you

Yes, you can install TrueNAS on a passed-through drive, or even on two if you choose a ZFS mirror during the installation. I tested that setup some time ago as a learning exercise (using two SSDs for the TrueNAS boot pool), and it worked fine.

That said, when virtualizing TrueNAS on Proxmox, I eventually preferred the simpler route: using a virtual disk stored on my Proxmox host's local-zfs (already a ZFS mirror). TrueNAS only needs 16 GB for the boot device, so dedicating two physical drives felt like wasted slots.

But both setups work; it just depends on how much hardware you're willing to allocate.
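
For reference, a boot disk like that can be created from the Proxmox CLI in one line (VM ID 100 and the 16 GB size are just examples):

qm set 100 -scsi0 local-zfs:16    # allocate a new 16 GB virtual disk on local-zfs and attach it as scsi0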
 
  • Like
Reactions: Proymoy
using a virtual disk stored on my Proxmox host's local-zfs (already a ZFS mirror). TrueNAS only needs 16 GB for the boot device, so dedicating two physical drives felt like wasted slots.
Thank you. That virtual disk you used only for the mirror, if I understand correctly? So a physical disk for the actual TrueNAS installation and a virtual drive as the mirror?
The reason I'm asking is that I think I can pass through only 4 physical drives (via the M.2 socket) - I have 3 for data and one for apps, and I need to figure out whether I need a fifth passed-through drive for the system.
 
In my setup, the TrueNAS system (boot) drive was just a single virtual disk stored on Proxmox's local-zfs, not a physical drive. That virtual disk was created in Proxmox and attached to the VM like any regular disk. No physical passthrough for the TrueNAS installation.

Then, I passed through a controller (via M.2) with 2 physical drives, which TrueNAS used purely for data storage (ZFS mirror).

To clarify the full picture:
  • Proxmox host had 2 SSDs in a ZFS mirror which formed the local-zfs pool.
  • TrueNAS VM used a virtual disk from local-zfs as the boot/system disk and had an additional controller with 2 drives via passthrough.
This way, you don’t need to "waste" one of your two slots for the TrueNAS boot drive. You can keep your controller with 3 data drives + 1 apps drive passed through and still boot TrueNAS from a virtual disk.
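
To make that concrete, the relevant part of the VM config (/etc/pve/qemu-server/<vmid>.conf) would look roughly like this - the VM ID, disk name and PCI address are only examples, and hostpci0 should point at whatever controller ends up behind your M.2 slot:

bios: ovmf
machine: q35
scsi0: local-zfs:vm-100-disk-0,size=16G
hostpci0: 0000:01:00.0,pcie=1
boot: order=scsi0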
 
  • Like
Reactions: Proymoy