I installed Proxmox 8 on a new AMD Ryzen 5 5500GT based machine with ASRock B550M Pro SE mb using UEFI boot with two 500GB drives in ZFS RAID 1.
Things generally work until I try to get PCI Passthrough working by adding "quiet iommu=pt" to /etc/kernel/cmdline, per this and other tutorials:
Code:
root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet iommu=pt
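In case the apply step matters: my understanding is that with systemd-boot an edit to /etc/kernel/cmdline only takes effect after refreshing the boot partitions, i.e.:

Code:
# copy the new kernel cmdline onto the ESP(s) managed by systemd-boot
proxmox-boot-tool refresh
# after a reboot, confirm the running kernel actually picked it up
cat /proc/cmdline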
Then I get the following error at boot time. It doesn't prevent the system from eventually booting, but it does delay boot, and it suggests that something I may need one day won't work, like booting off the mirror if the main drive fails. Or maybe something worse?
Yes, I have IOMMU and AMD-V/AMD SVM enabled in the BIOS. Also, FWIW, this is after updating the OS to the latest version available to me as of right now (Linux 6.8.12-1-pve (2024-08-05T16:17Z)).
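For completeness, this is how I'd verify that the IOMMU actually initialized (on this AMD platform I'd expect to see AMD-Vi lines):

Code:
# look for IOMMU / AMD-Vi initialization messages in the kernel log
dmesg | grep -i -e iommu -e amd-vi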
For a couple of weeks I had the machine running with no errors, with PCI Passthrough working, using Legacy boot and GRUB. I re-installed to switch to UEFI/systemd-boot after reading that GRUB is quite suboptimal with ZFS RAID 1 boot disks.
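(If it's relevant, here's how the current boot setup can be confirmed; I'd expect it to report systemd-boot on both ESPs:)

Code:
# show which bootloader each ESP is synced for
proxmox-boot-tool status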
That said, I wasn't able to get a separate, isolated IOMMU group for each device, which I read one is supposed to get with fully working PCI Passthrough. Despite this, the passed-through LSI Host Bus Adapter (connecting the SSDs I use for testing) performed much faster when the machine was configured with Legacy/GRUB/iommu=pt. The HBA's performance is significantly lower now, without PCI Passthrough enabled.
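For anyone who wants to compare groupings, I'm checking them with the usual sysfs walk, something like:

Code:
#!/bin/bash
# print every PCI device under its IOMMU group
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done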
For what it's worth, my main intended use for this machine is a TrueNAS VM. I will also run some utilities, e.g. the Unifi Controller and other services that I was running in Jails on a TrueNAS box and elsewhere. The only thing I'm looking to pass through (at least right now) is the HBA.
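(For context, the HBA gets attached to the VM along these lines; the VM ID and PCI address below are placeholders, not my actual values:)

Code:
# hand the HBA to the TrueNAS VM as a raw PCI device
# 100 = VM ID, 0000:01:00.0 = HBA address from lspci (both placeholders)
qm set 100 --hostpci0 0000:01:00.0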
Aside from the performance issue, I worry that the HBA now being shareable with other VMs is a recipe for problems. The device should really only be usable by, and used by, the TrueNAS VM.
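My plan for that, unless someone tells me it's wrong, is to bind the HBA to vfio-pci at boot so the host never claims it. A sketch, with the vendor:device ID as a placeholder (grab the real one from lspci -nn):

Code:
# /etc/modprobe.d/vfio.conf
# reserve the HBA for vfio-pci; 1000:0072 is a placeholder vendor:device ID
options vfio-pci ids=1000:0072
# make sure vfio-pci claims the card before the SAS driver does
softdep mpt3sas pre: vfio-pci

followed by a rebuild of the initramfs:

Code:
update-initramfs -u -k all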
What does the error mean, practically speaking?
Why does enabling IOMMU cause this?
What could be done to fix it?
Any info would be appreciated.