TrueNAS Scale VM not booting with "Exited with Code 1" after I passed through a SATA controller

pedropithan

New Member
May 3, 2025
The way I virtualize TrueNAS Scale is to pass through my SATA controllers so the SSDs show up inside the guest; I do the same with my 10GbE Mellanox ConnectX-3 card. But now, for some reason, when I add the second SATA controller (which holds 2 of the 4 SSDs in my RaidZ1 pool), the VM no longer boots and exits with code 1. When I start the VM without the second SATA controller, the other 2 disks still show up. I plugged the cables in and out to make sure it wasn't a connection issue, so what could it be?
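For context, this is roughly the CLI equivalent of my passthrough setup (the VM ID is a placeholder; 0000:04:00.0 and 0000:09:00.0 are the two device addresses that show up in dmesg further down):

Code:
# pass both SATA controllers through to the TrueNAS VM (VM ID is an example)
qm set 100 --hostpci0 0000:04:00.0
qm set 100 --hostpci1 0000:09:00.0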

Is there a way I can add the SSDs to my VM as individual hard drive devices instead? I don't know why they don't show up in the PVE Storage section; is it a mounting issue?
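From what I understand, a physical disk can be mapped into a VM directly by its /dev/disk/by-id path, something like this (VM ID and disk ID are made up, and I assume this only works if PVE itself can actually see the disk):

Code:
# attach one physical SSD to the VM as a virtual SCSI disk (IDs are examples)
qm set 100 --scsi1 /dev/disk/by-id/ata-Samsung_SSD_870_EVO_2TB_EXAMPLESERIAL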

Edit: I used the command lsblk -o +FSTYPE,UUID to check whether PVE was seeing my disks, and it turns out it doesn't see anything but the boot drive (I have a pool of four 2TB SSDs in RaidZ1 and a mirror pool of two 1TB SSDs):

Code:
root@hydrogen:~# lsblk -o +FSTYPE,UUID
NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS FSTYPE      UUID
nvme0n1                       259:0    0 931.5G  0 disk
├─nvme0n1p1                   259:1    0  1007K  0 part
├─nvme0n1p2                   259:2    0     1G  0 part             vfat        EA82-307D
└─nvme0n1p3                   259:3    0 930.5G  0 part             LVM2_member mR9rAF-mHKg-BTNw-w09S-peoF-8MWc-yN6284
  ├─pve-swap                  252:0    0     8G  0 lvm  [SWAP]      swap        cafa0274-c21f-46d9-880b-9d49071a9d50
  ├─pve-root                  252:1    0    96G  0 lvm  /           ext4        12ceb1a3-be0b-439f-94de-7316008c5289
  ├─pve-data_tmeta            252:2    0   8.1G  0 lvm
  │ └─pve-data-tpool          252:4    0 794.3G  0 lvm
  │   ├─pve-data              252:5    0 794.3G  1 lvm
  │   ├─pve-vm--1337--disk--0 252:6    0    50G  0 lvm
  │   ├─pve-vm--2006--disk--0 252:7    0    40G  0 lvm
  │   ├─pve-vm--206--disk--0  252:8    0    60G  0 lvm
  │   ├─pve-vm--637--disk--0  252:9    0   200G  0 lvm
  │   ├─pve-vm--102--disk--0  252:10   0    50G  0 lvm
  │   ├─pve-vm--1994--disk--0 252:11   0   400G  0 lvm
  │   └─pve-vm--333--disk--0  252:12   0   100G  0 lvm
  └─pve-data_tdata            252:3    0 794.3G  0 lvm
    └─pve-data-tpool          252:4    0 794.3G  0 lvm
      ├─pve-data              252:5    0 794.3G  1 lvm
      ├─pve-vm--1337--disk--0 252:6    0    50G  0 lvm
      ├─pve-vm--2006--disk--0 252:7    0    40G  0 lvm
      ├─pve-vm--206--disk--0  252:8    0    60G  0 lvm
      ├─pve-vm--637--disk--0  252:9    0   200G  0 lvm
      ├─pve-vm--102--disk--0  252:10   0    50G  0 lvm
      ├─pve-vm--1994--disk--0 252:11   0   400G  0 lvm
      └─pve-vm--333--disk--0  252:12   0   100G  0 lvm
 
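My guess is the host can't see those SSDs because both SATA controllers are already claimed by vfio-pci for passthrough. A quick way to confirm which kernel driver owns each controller:

Code:
# list SATA controllers and the kernel driver currently bound to them
lspci -nnk | grep -A3 -i sata

If the "Kernel driver in use" line says vfio-pci, lsblk on the host won't show those disks at all, which would match the output above.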
So I found out what is happening, but I don't know how to solve it. When I run:

Code:
root@hydrogen:~# dmesg -w

this eventually shows up:

Code:
[  153.562984] genirq: Flags mismatch irq 40. 00200080 (vfio-intx(0000:09:00.0)) vs. 00200000 (vfio-intx(0000:04:00.0))

So it's an IRQ flags mismatch, likely a driver issue; both devices are bound to vfio-pci. What should I do?
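I haven't solved it yet, but this is what I was planning to check next, using the two addresses from the dmesg line (not sure it's the right direction):

Code:
# see which devices currently sit on IRQ 40, the line the two devices are fighting over
grep ' 40:' /proc/interrupts
# check whether each passed-through device supports MSI/MSI-X instead of legacy INTx
lspci -vv -s 0000:04:00.0 | grep -i msi
lspci -vv -s 0000:09:00.0 | grep -i msi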