Need to remove PCI passthrough setting from HOST

NOiSEA

New Member
Jan 6, 2023
I attempted to pass through a PCI SATA controller to my TrueNAS VM, but it did not work (I think because the VM uses SeaBIOS and it needs to be UEFI). I changed the settings back on the VM via the GUI so that it no longer has the PCI passthrough.
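Side note in case it helps anyone searching: switching the firmware can apparently also be done from the CLI. A rough sketch, where the VM ID 100 and the storage name local-lvm are placeholders:

qm set 100 --bios ovmf              # switch the VM firmware from SeaBIOS to OVMF (UEFI)
qm set 100 --efidisk0 local-lvm:1   # OVMF expects an EFI vars disk; storage name is a placeholder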

The problem now is that the VM will not start, because its drives no longer show up in Proxmox (they were connected to the PCI SATA controller).

How can I get the host to stop passing through the PCI device? I have already changed the setting on the VM, and the VM's config file under /etc/pve/qemu-server/ no longer shows any passthrough entry.
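For reference, if a passthrough entry were still present, the VM config file would contain a line roughly like this (the VM ID and PCI address below are placeholders):

/etc/pve/qemu-server/100.conf:
hostpci0: 0000:04:00.0,pcie=1

and it could be removed from the CLI with:

qm set 100 --delete hostpci0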
 
Where do I change the binding to vfio-pci? That is probably the issue. Unfortunately, since I did this through the GUI, I am a little lost on where to find this setting in the CLI.

I am in the /etc/modprobe.d/ folder; there is nothing in vfio.conf, and pve-blacklist.conf only has the blacklist for the GPU, which I need to keep.
 
Where do I change the binding to vfio-pci? That is probably the issue. Unfortunately, since I did this through the GUI, I am a little lost on where to find this setting in the CLI.
It's not possible via the GUI, so you probably did not do it. Otherwise, it would be in a .conf file in the /etc/modprobe.d/ directory (see the manual for an example).
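For illustration, a host-side binding for a SATA controller would look something like this, using the vendor:device ID in the form lspci -nn reports it (the ID below is just an example):

options vfio-pci ids=1b21:1064

If you find and remove such a line, refresh the initramfs and reboot afterwards:

update-initramfs -u -k all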

Maybe show the output of cat /proc/cmdline and grep -r '' /etc/modprobe.d/ to see if there is anything left that might prevent the normal driver of your SATA controller from loading? Maybe also lspci -nnks PCI_ID_OF_THE_SATA_CONTROLLER.
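In the lspci output, the telling line is "Kernel driver in use". A sketch of what it would look like if the controller were still claimed for passthrough (hypothetical output):

04:00.0 SATA controller [0106]: ...
	Kernel driver in use: vfio-pci

If it shows the controller's normal driver instead (ahci for a SATA controller), the host is not claiming the device anymore.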
 
grep -r '' /etc/modprobe.d/
gave me this (but it all looks related to my GPU passthrough):
/etc/modprobe.d/kvm.conf:#This might only be needed for GVT-g Split for iGPU passthrough
/etc/modprobe.d/kvm.conf:#options kvm ignore_msrs=1
/etc/modprobe.d/iommu_unsafe_interrupts.conf:options vfio_iommu_type1 allow_unsafe_interrupts=1
/etc/modprobe.d/blacklist.conf:blacklist nouveau
/etc/modprobe.d/blacklist.conf:blacklist nvidia
/etc/modprobe.d/blacklist.conf:blacklist snd_hda_intel
/etc/modprobe.d/vfio.conf:#This is the pci address for the iGPU - probably will never need this but keeping in case
/etc/modprobe.d/vfio.conf:#options vfio-pci ids=8086:9bc8
/etc/modprobe.d/vfio.conf:
/etc/modprobe.d/pve-blacklist.conf:# This file contains a list of modules which are not supported by Proxmox VE
/etc/modprobe.d/pve-blacklist.conf:
/etc/modprobe.d/pve-blacklist.conf:# nvidiafb see bugreport https://bugzilla.proxmox.com/show_bug.cgi?id=701
/etc/modprobe.d/pve-blacklist.conf:blacklist nvidiafb

lspci -nnks 04:00.0 gave me this:

04:00.0 SATA controller [0106]: ASMedia Technology Inc. Device [1b21:1064] (rev 02)
	Subsystem: ZyDAS Technology Corp. Device [2116:2116]
	Kernel driver in use: ahci
	Kernel modules: ahci

That is my device, but I don't see anything in there that I can use.
 
Huh... now it's working, but I haven't changed anything.
Thank you for your help. I am not sure what happened, but my drives are now showing in Proxmox and the VM can boot up again.