My homelab machine is as follows:
Code:
Motherboard: Supermicro X8DTI-F
CPU : 2x Xeon E5645
RAM : 64GB ECC
For the last 2 years I have tried Hyper-V, XenServer and ESXi, settling on ESXi for about a year. I was eventually able to enable and use passthrough on all of them.
For the last couple of weeks I have been experimenting with Proxmox.
I installed Proxmox 5.3 and successfully enabled IOMMU.
I created two VMs, one FreeNAS and one Windows 7. The FreeNAS VM has a SATA controller passed through and worked as expected.
I was able to pass almost any device I cared for to any VM.
The system was rock solid for days, up until the point I created a new VM for pfSense.
My motherboard has 2 Gigabit NICs:
Code:
06:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
06:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
The network interfaces are named enp6s0f0 and enp6s0f1, respectively.
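(For reference, a quick way to confirm which PCI address each name maps to, assuming the standard Linux sysfs layout:)

```shell
# Each /sys/class/net/<iface>/device is a symlink to the PCI device node,
# so its target's basename is the PCI address (e.g. 0000:06:00.0)
for i in enp6s0f0 enp6s0f1; do
    printf '%s -> %s\n' "$i" "$(basename "$(readlink "/sys/class/net/$i/device")")"
done
```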
I was using enp6s0f0 for connecting with my lan and for proxmox web administration.
I thought to try passing the other NIC (the one corresponding to enp6s0f1) to the newly created pfSense VM to act as its WAN interface.
Upon starting the pfSense VM I lost connection to the hypervisor, so I went and manually rebooted it.
I still had no connection to the server after the reboot. Both network interfaces no longer showed up in "ip link show", but they were still listed by lspci.
I deleted the pfSense VM and rebooted. My connection to the server was restored and "ip link show" listed both interfaces correctly again.
From that point on, IOMMU seems to be disabled.
The VMs with PCI passthrough devices fail to start with the error "IOMMU not present".
When editing any of the VMs' PCI passthrough hardware I now get "No IOMMU detected, please activate it. See Documentation for further information.".
In the same dialog, the "Device" drop-down menu shows all IOMMU groups set to "-1", while the IDs still contain the letters (a, d, f) that indicate IOMMU capability (if I'm not mistaken).
Even though:
1) The GRUB boot option is still set to intel_iommu=on:
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
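(To rule out a stale boot config, the setting can be cross-checked against the running kernel after regenerating the config and rebooting; update-grub is the Debian/Proxmox wrapper:)

```shell
update-grub        # regenerate /boot/grub/grub.cfg from /etc/default/grub
# after a reboot, confirm the flag actually reached the kernel:
grep -o 'intel_iommu=on' /proc/cmdline
```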
2) /etc/modules still contains the required modules:
Code:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
3) The modules do still load:
Code:
lsmod | grep vfio
vfio_pci 45056 0
vfio_virqfd 16384 1 vfio_pci
irqbypass 16384 2 vfio_pci,kvm
vfio_iommu_type1 24576 0
vfio 28672 2 vfio_iommu_type1,vfio_pci
4) The related kernel messages are as follows:
Code:
dmesg | grep -E 'IOMMU|DMAR'
[ 0.000000] ACPI: DMAR 0x00000000BF75E0D0 000128 (v01 AMI OEMDMAR 00000001 MSFT 00000097)
[ 0.000000] DMAR: Host address width 40
[ 0.000000] DMAR: DRHD base: 0x000000fbffe000 flags: 0x1
[ 0.000000] DMAR: dmar0: reg_base_addr fbffe000 ver 1:0 cap c90780106f0462 ecap f020fe
[ 0.000000] DMAR: RMRR base: 0x000000000e6000 end: 0x000000000e9fff
[ 0.000000] DMAR: RMRR base: 0x000000bf7ec000 end: 0x000000bf7fffff
[ 0.000000] DMAR: ATSR flags: 0x0
[ 0.000000] DMAR-IR: IOAPIC id 6 under DRHD base 0xfbffe000 IOMMU 0
[ 0.000000] DMAR-IR: IOAPIC id 7 under DRHD base 0xfbffe000 IOMMU 0
[ 0.000000] DMAR-IR: Enabled IRQ remapping in xapic mode
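(As a cross-check of what the GUI reports, the groups can also be listed straight from sysfs; on a working setup there is one subdirectory per group, and an empty listing would match the "-1" shown in the dialog:)

```shell
# One subdirectory per IOMMU group when the IOMMU is active;
# nothing prints at all if the IOMMU is inactive
for g in /sys/kernel/iommu_groups/*; do
    [ -d "$g" ] || continue
    echo "IOMMU group ${g##*/}:"
    ls "$g/devices"
done
```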
Any ideas? Thanks in advance.