Hi again,
I am still tinkering with my project of running a TrueNAS VM on PVE 8.1.4 on an HP DL380 G10 with a PCIe/NVMe/U.2 riser card and cage. I already posted here without much success before getting some more information in the TrueNAS forums over here.
It turned out that some people there would (despite the official recommendation) discourage using Proxmox as a hypervisor for TrueNAS. The reason is that occasional ZFS scans / automatic pool imports apparently happen on the PVE host, which can result in more or less complete data loss. This reportedly even happened with the typical TrueNAS IT-mode HBAs passed through.
As I understand PCI passthrough with Proxmox, this could only have happened if those drives were directly accessible to the host, which, as described, is exactly what I want to reliably avoid.
When using early binding to the vfio-pci driver to solve this, I'd have the following questions:
- How can I make sure the drives are 100% and permanently unavailable to the host?
Edit: I just found this in the Proxmox wiki: # lspci -nnk will list which kernel driver is in use. This should then be vfio-pci, I guess. But is that enough to be safe?
- Might the problems described in the TrueNAS forum thread be related to this?
- Are these the correct driver modules I would have to softdep (see the first sketch below the list)?
nvme_fabrics
nvme
nvme_core
nvme_common
- In my case I am already using 3 NVMe drives directly in the Proxmox host as ZFS VM storage. Is it possible at all to separate those drives from the ones I want to pass through (see the second sketch below)?
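
To make the softdep question more concrete: what I currently have in mind is an early-binding config roughly like the following (the vendor:device ID is only a placeholder I would take from lspci -nn, so please correct me if the approach itself is wrong):

    # /etc/modprobe.d/vfio.conf
    # load vfio-pci before the NVMe stack so it can claim the controllers first
    softdep nvme_fabrics pre: vfio-pci
    softdep nvme pre: vfio-pci
    softdep nvme_core pre: vfio-pci
    softdep nvme_common pre: vfio-pci
    # claim the controllers to be passed through by vendor:device ID (placeholder)
    options vfio-pci ids=xxxx:yyyy

followed by update-initramfs -u -k all and a reboot. Afterwards lspci -nnk -s <address> should show "Kernel driver in use: vfio-pci" for each of the passthrough controllers, if I understood the wiki correctly.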
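
Regarding the separation: since the host NVMe drives and the passthrough drives could well share the same vendor:device ID, I assume an ids= line alone would grab all of them. What I imagine instead (just a sketch with a hypothetical PCI address, not something I have tested) is binding the passthrough drives individually by address via driver_override, e.g. from a small script run at boot:

    # hypothetical PCI address of one drive to pass through
    DEV=0000:41:00.0
    modprobe vfio-pci
    # tell the kernel this device should only match vfio-pci
    echo vfio-pci > /sys/bus/pci/devices/$DEV/driver_override
    # detach it from nvme if it is already bound
    [ -e /sys/bus/pci/devices/$DEV/driver ] && echo "$DEV" > /sys/bus/pci/devices/$DEV/driver/unbind
    # re-probe so vfio-pci picks it up
    echo "$DEV" > /sys/bus/pci/drivers_probe

Would that be a sane way to keep the three host NVMe drives on the nvme driver while the others go to the VM, or is there a better supported mechanism?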
Thanks for your help!