Proxmox with FreeNAS VM - disk passthrough vs PCI passthrough for SATA controller?

victorhooi

Well-Known Member
Apr 3, 2018
I have an existing NAS server running FreeNAS 11.2-U6.

The motherboard is a SuperMicro A2SDi-H-TP4F, and I have:
  • 8 x 12TB HDDs
  • 1 x 2TB HP EX950 M.2 NVMe drive
  • 1 x Intel Optane PCIe SSD
I'm also running bhyve to host some Ubuntu VMs; however, these have not proven very stable.

Hence, I'm thinking of moving to Proxmox, and then running a FreeNAS VM within that.

I read earlier that PCI passthrough is safer than disk passthrough when using ZFS - is this still the case?

Here is the output of "camcontrol devlist -v" from FreeNAS:
Code:
root@freenas[~]# camcontrol devlist -v
scbus0 on ahcich0 bus 0:
<HUH721212ALE601 LEGL01.0>         at scbus0 target 0 lun 0 (pass0,ada0)
<>                                 at scbus0 target -1 lun ffffffff ()
scbus1 on ahcich1 bus 0:
<HUH721212ALE601 LEGL01.0>         at scbus1 target 0 lun 0 (pass1,ada1)
<>                                 at scbus1 target -1 lun ffffffff ()
scbus2 on ahcich2 bus 0:
<HUH721212ALE601 LEGL01.0>         at scbus2 target 0 lun 0 (pass2,ada2)
<>                                 at scbus2 target -1 lun ffffffff ()
scbus3 on ahcich3 bus 0:
<HUH721212ALE601 LEGL01.0>         at scbus3 target 0 lun 0 (pass3,ada3)
<>                                 at scbus3 target -1 lun ffffffff ()
scbus4 on ahcich4 bus 0:
<HUH721212ALE601 LEGL01.0>         at scbus4 target 0 lun 0 (pass4,ada4)
<>                                 at scbus4 target -1 lun ffffffff ()
scbus5 on ahcich5 bus 0:
<HUH721212ALE601 LEGL01.0>         at scbus5 target 0 lun 0 (pass5,ada5)
<>                                 at scbus5 target -1 lun ffffffff ()
scbus6 on ahcich6 bus 0:
<HUH721212ALE601 LEGL01.0>         at scbus6 target 0 lun 0 (pass6,ada6)
<>                                 at scbus6 target -1 lun ffffffff ()
scbus7 on ahcich7 bus 0:
<HUH721212ALE601 LEGL01.0>         at scbus7 target 0 lun 0 (pass7,ada7)
<>                                 at scbus7 target -1 lun ffffffff ()
scbus8 on ahcich11 bus 0:
<>                                 at scbus8 target -1 lun ffffffff ()
scbus9 on ahcich12 bus 0:
<>                                 at scbus9 target -1 lun ffffffff ()
scbus10 on ahcich13 bus 0:
<>                                 at scbus10 target -1 lun ffffffff ()
scbus11 on ahcich14 bus 0:
<>                                 at scbus11 target -1 lun ffffffff ()
scbus12 on camsim0 bus 0:
<>                                 at scbus12 target -1 lun ffffffff ()
scbus13 on umass-sim0 bus 0:
<SanDisk Ultra Fit 1.00>           at scbus13 target 0 lun 0 (pass8,da0)
scbus-1 on xpt0 bus 0:
<>                                 at scbus-1 target -1 lun ffffffff (xpt0)
And here is the output of "nvmecontrol devlist":
Code:
root@freenas[~]# nvmecontrol devlist
nvme0: INTEL SSDPED1D960GAY
    nvme0ns1 (915715MB)
nvme1: HP SSD EX950 2TB
    nvme1ns1 (1907420MB)

I found the articles at https://pve.proxmox.com/wiki/PCI(e)_Passthrough and https://pve.proxmox.com/wiki/Pci_passthrough and it seems like there are quite a few steps.
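
For anyone following along, the core of those wiki steps boils down to roughly the following. This is just a sketch for an Intel board: it assumes a GRUB-booted Proxmox install, and the PCI address and VM ID in the last step are placeholders you'd look up yourself with lspci.
Code:
# 1. enable the IOMMU in /etc/default/grub, then run update-grub and reboot:
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# 2. load the VFIO modules at boot by adding them to /etc/modules:
#    vfio
#    vfio_iommu_type1
#    vfio_pci

# 3. after rebooting, verify the IOMMU is active:
dmesg | grep -e DMAR -e IOMMU

# 4. attach the device to the VM (placeholder VM ID and PCI address):
qm set 100 -hostpci0 01:00.0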

Based on the hardware and the output above - does it look like PCI passthrough will work for this use case?

Also - I've done disk passthrough before; it was reasonably easy to set up and seemed to work fine. Is it worth going through the added steps of PCIe passthrough?
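
For comparison, disk passthrough on Proxmox is a single command per disk. A sketch, assuming VM ID 100 and a made-up disk serial - using /dev/disk/by-id keeps the mapping stable across reboots:
Code:
# pass a raw disk into VM 100 as a SCSI device (the serial here is a placeholder)
qm set 100 -scsi1 /dev/disk/by-id/ata-HUH721212ALE601_PLACEHOLDER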
 
Hi,
I read earlier that PCI passthrough is safer than disk passthrough when using ZFS - is this still the case?
It is always better if ZFS has direct control of the disk hardware, so I would personally use PCIe passthrough.
Based on the hardware and the output above - does it look like PCI passthrough will work for this use case?
My guess is that passing through the on-board SATA controller will not work, but you can try.
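
If you want to check in advance, boot Proxmox with the IOMMU enabled and see whether the AHCI controller sits in its own IOMMU group - if it shares a group with other platform devices, passing it through cleanly is unlikely. For example:
Code:
# find the PCI address of the SATA/AHCI controller
lspci | grep -i -e sata -e ahci

# list every device's IOMMU group; the controller needs a group to itself
find /sys/kernel/iommu_groups/ -type l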