Processor: Intel 8700k
Mobo: Asus Z390-Prime A
VT-d: ON
VTx: ON
SR-IOV: ON
I'm trying to give VMs access to these Mellanox ConnectX-4 NICs, but when I add the hardware to a VM, I get a warning saying that IOMMU is not enabled. So I followed the documentation's suggestion and edited /etc/default/grub to set GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on". I also added the following modules:
Code:
# add to /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
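For what it's worth, my understanding from the docs is that the changes get applied with something like the following before rebooting (I'm not 100% certain both steps are strictly required in my setup):
Code:
# regenerate the GRUB config so the new kernel command line takes effect
update-grub
# rebuild the initramfs so the vfio modules are loaded at boot
update-initramfs -u -k all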
However, after rebooting, the host can no longer write to the NVMe drive it's installed on, and it goes into what is basically a cyclical read loop. The only way to recover is to boot into the Proxmox rescue environment and revert the above changes.
The NVMe drive Proxmox is installed on is actually connected via a multi-NVMe add-in card ("Hyper M.2") supported by my motherboard. My guess is that as soon as the IOMMU is properly enabled, Proxmox tries to virtualize that PCIe drive right from boot and confuses itself. I'm not really sure what to do about that. I'm about to try the "iommu=pt" parameter, but I'm a little confused about how to add it. Is it option 1), 2), or 3)?
Code:
1) GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
2) GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=pt"
3) GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"
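Related question: once the IOMMU is actually on, is there a good way to confirm whether that NVMe controller ends up in its own IOMMU group? I've seen a snippet along these lines suggested for listing the groups (not sure if it's the recommended approach):
Code:
# list every PCI device together with the IOMMU group it belongs to
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU group %s: ' "$n"
    lspci -nns "${d##*/}"
done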
Also, while researching this issue, I discovered this:
Code:
root@pve:~# dmesg | grep -e DMAR -e IOMMU
[ 0.007801] ACPI: DMAR 0x0000000089C66140 0000A8 (v01 INTEL EDK2 00000002 01000013)
[ 0.007830] ACPI: Reserving DMAR table memory at [mem 0x89c66140-0x89c661e7]
[ 0.121599] DMAR: Host address width 39
[ 0.121600] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[ 0.121604] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
[ 0.121606] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[ 0.121608] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[ 0.121610] DMAR: RMRR base: 0x0000003e2e0000 end: 0x0000003e2fffff
[ 0.121611] DMAR: [Firmware Bug]: No firmware reserved region can cover this RMRR [0x000000003e2e0000-0x000000003e2fffff], contact BIOS vendor for fixes
[ 0.121614] DMAR: [Firmware Bug]: Your BIOS is broken; bad RMRR [0x000000003e2e0000-0x000000003e2fffff]
[ 0.121616] DMAR: RMRR base: 0x0000008b800000 end: 0x0000008fffffff
[ 0.121618] DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
[ 0.121619] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[ 0.121620] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[ 0.123171] DMAR-IR: Enabled IRQ remapping in x2apic mode
That...seems bad?