Hey folks,
I have a system running on an Asrock X399 Fatality + Threadripper 1950x. It’s been working totally fine and I have no issues with it, knock on wood.
However, a couple of days ago I purchased an Asus Hyper M.2 card to give some flash to a TrueNAS Scale VM on this host, to create a pool of 4 flash drives in RAIDZ1 (the drives are Samsung 990 EVO Plus, 2 TB each).
I configured everything in the BIOS: PCIe bifurcation for that first x16 slot set to 4x4x4x4, NVMe RAID set to Off… and I think that's it. The drives show up correctly both in the BIOS and on the host.
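In case someone asks about IOMMU grouping: something like the snippet below (just a generic sysfs walk, nothing specific to my setup) shows which group each of the Samsung controllers sits in on the host, and I can post the actual output if it helps.
Code:
# Host side: list which IOMMU group each Samsung controller landed in.
# Purely illustrative one-liner, not copied from my actual notes.
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    lspci -nns "${dev##*/}" | grep -q '144d:a80d' \
        && echo "IOMMU group $(basename "${dev%/devices/*}"): ${dev##*/}"
done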
Now this is where the issues started. I passed through all 4 drives, and the first time I plugged everything in and booted TrueNAS, only 3 of the 4 drives were shown as available for use.
Thinking this was a sporadic issue, I rebooted the entire host and, to my surprise, now only 1 of the drives showed up. :\
I started tinkering with it (upgrading all packages, trying the 6.11 kernel) and everything looks like it should connect correctly, but sadly it does not. I am now at the point where none of the drives are recognized by the TrueNAS UI, nor by the OS itself. It seems the nvme driver is simply not picking up the drives.
lspci -knn in TrueNAS gives the below:
Code:
03:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller PM9C1a [144d:a80d]
Subsystem: Samsung Electronics Co Ltd NVMe SSD Controller PM9C1a [144d:a801]
Kernel modules: nvme
04:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller PM9C1a [144d:a80d]
Subsystem: Samsung Electronics Co Ltd NVMe SSD Controller PM9C1a [144d:a801]
Kernel modules: nvme
lspci -knn on the host gives back:
Code:
45:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller PM9C1a (DRAM-less) [144d:a80d]
Subsystem: Samsung Electronics Co Ltd NVMe SSD Controller PM9C1a (DRAM-less) [144d:a801]
Kernel driver in use: vfio-pci
Kernel modules: nvme
46:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller PM9C1a (DRAM-less) [144d:a80d]
Subsystem: Samsung Electronics Co Ltd NVMe SSD Controller PM9C1a (DRAM-less) [144d:a801]
Kernel driver in use: vfio-pci
Kernel modules: nvme
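For context, the passthrough itself is plain PCIe passthrough of each controller as its own vfio-pci device; at the QEMU level it amounts to something like the following (illustrative sketch only, the VM is actually defined through the hypervisor's own config rather than a hand-typed command line):
Code:
# Illustrative QEMU-level equivalent of the passthrough, not my literal setup:
# each Samsung controller is handed to the guest as a separate vfio-pci device.
qemu-system-x86_64 -machine q35 -enable-kvm \
    -device vfio-pci,host=0000:45:00.0 \
    -device vfio-pci,host=0000:46:00.0
# ...plus the remaining two controllers, same pattern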
I only posted the output for 2 of the drives, but all 4 are there.
It's as if the kernel in the VM is not able to pick up those drives and bind the nvme driver to them.
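The in-guest checks behind that statement boil down to something like this (quoting the commands from memory, so treat the exact invocations as illustrative):
Code:
# Inside the TrueNAS / Ubuntu guest -- generic checks, nothing TrueNAS-specific:
lspci -nnk -d 144d:a80d          # is any kernel driver bound to the Samsung controllers?
dmesg | grep -i nvme             # any nvme probe attempts or errors in the kernel log?
lsblk -d -o NAME,MODEL,SIZE      # do any nvme block devices exist at all?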
Oh, and to mention: in order to exclude TrueNAS as the culprit, I also tested with a clean Ubuntu 24 VM and the results are the same, the drives never make it to lsblk.
While I was playing with blacklisting the drives though, I managed to exclude everything and get the host to use them, and all 4 of them seemed to be correctly mapped on the host, allowing me to write to them.
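(By "blacklisting" here I mean the usual modprobe.d handover between vfio-pci and the host's nvme driver, roughly along the lines below; quoting from memory again, so it's illustrative rather than a copy of my files.)
Code:
# /etc/modprobe.d/vfio.conf -- illustrative: bind the Samsung controllers to
# vfio-pci at boot so the host's nvme driver leaves them alone.
options vfio-pci ids=144d:a80d
softdep nvme pre: vfio-pci
# Dropping these lines (and rebuilding the initramfs before rebooting) hands the
# controllers back to the host's nvme driver, which is the state where all 4
# drives showed up on the host and were writable.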
Any help with this would be appreciated; I am really clueless about where to go next, as this seems to be a passthrough / misconfiguration issue.