Hello everyone,
I am currently despairing over what should actually be a simple installation.
The following configuration is required:
pfSense --> pass through a dedicated NIC for the WAN interface
pfSense --> pass through a dedicated NIC for the LAN interface
System:
12 x Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz (1 socket)
Mainboard (HP Stuff)
VT-d active
Virtualization active
NICs for passthrough: 2x Aquantia Corp. AQC107 NBase-T/IEEE 802.3bz Ethernet Controller (ASUS)
onboard NIC Intel Corporation Ethernet Connection (7) I219-LM
My boot manager is systemd-boot:
Boot0009* Linux Boot Manager HD(2,GPT,5f1b3a90-cef6-4e0f-af49-fa90b93188ab,0x800,0x100000)/File(\EFI\systemd\systemd-bootx64.efi)....ISPH
Accordingly, after updating the freshly installed PVE, I added intel_iommu=on to the kernel command line (visible in /proc/cmdline) and also configured /etc/modules, as listed below:

root@lab2:~# cat /proc/cmdline
initrd=\EFI\proxmox\5.11.22-4-pve\initrd.img-5.11.22-4-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on
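For completeness: since the host boots via systemd-boot rather than GRUB, the flag is not set in /etc/default/grub; the standard PVE procedure (sketched here rather than copied from my shell) is to put it into /etc/kernel/cmdline and re-sync the boot entries:

# /etc/kernel/cmdline (one line)
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on

proxmox-boot-tool refresh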
root@lab2:~# cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
Finally, everything was written to the initramfs with
update-initramfs -u
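After the reboot, a quick sanity check that the VFIO modules were actually loaded (just the usual check, not copied from my logs):

lsmod | grep vfio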
root@lab2:~# find /sys/kernel/iommu_groups -type l | sort -t '/' -n -k 5
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:02.0
/sys/kernel/iommu_groups/2/devices/0000:00:12.0
/sys/kernel/iommu_groups/3/devices/0000:00:14.0
/sys/kernel/iommu_groups/3/devices/0000:00:14.2
/sys/kernel/iommu_groups/4/devices/0000:00:16.0
/sys/kernel/iommu_groups/5/devices/0000:00:17.0
/sys/kernel/iommu_groups/6/devices/0000:00:1b.0
/sys/kernel/iommu_groups/7/devices/0000:00:1c.0
/sys/kernel/iommu_groups/8/devices/0000:00:1d.0
/sys/kernel/iommu_groups/9/devices/0000:00:1f.0
/sys/kernel/iommu_groups/9/devices/0000:00:1f.3
/sys/kernel/iommu_groups/9/devices/0000:00:1f.4
/sys/kernel/iommu_groups/9/devices/0000:00:1f.5
/sys/kernel/iommu_groups/9/devices/0000:00:1f.6
/sys/kernel/iommu_groups/10/devices/0000:01:00.0
/sys/kernel/iommu_groups/11/devices/0000:02:00.0
/sys/kernel/iommu_groups/12/devices/0000:03:00.0
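To map those group numbers to the actual cards, the addresses can be cross-checked with lspci; something like this shows where the two Aquantia controllers sit and which driver is currently bound to them:

lspci -nnk | grep -i -A 3 aquantia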
root@lab2:~# journalctl -b 0 | grep -i iommu
Apr 05 10:13:26 lab2 kernel: Command line: initrd=\EFI\proxmox\5.11.22-4-pve\initrd.img-5.11.22-4-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on
Apr 05 10:13:26 lab2 kernel: Warning: PCIe ACS overrides enabled; This may allow non-IOMMU protected peer-to-peer DMA
Apr 05 10:13:26 lab2 kernel: Kernel command line: initrd=\EFI\proxmox\5.11.22-4-pve\initrd.img-5.11.22-4-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on
Apr 05 10:13:26 lab2 kernel: DMAR: IOMMU enabled
Apr 05 10:13:26 lab2 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
Apr 05 10:13:26 lab2 kernel: iommu: Default domain type: Passthrough (set via kernel command line)
Apr 05 10:13:26 lab2 kernel: pci 0000:00:00.0: Adding to iommu group 0
Apr 05 10:13:26 lab2 kernel: pci 0000:00:02.0: Adding to iommu group 1
Apr 05 10:13:26 lab2 kernel: pci 0000:00:12.0: Adding to iommu group 2
Apr 05 10:13:26 lab2 kernel: pci 0000:00:14.0: Adding to iommu group 3
Apr 05 10:13:26 lab2 kernel: pci 0000:00:14.2: Adding to iommu group 3
Apr 05 10:13:26 lab2 kernel: pci 0000:00:16.0: Adding to iommu group 4
Apr 05 10:13:26 lab2 kernel: pci 0000:00:17.0: Adding to iommu group 5
Apr 05 10:13:26 lab2 kernel: pci 0000:00:1b.0: Adding to iommu group 6
Apr 05 10:13:26 lab2 kernel: pci 0000:00:1c.0: Adding to iommu group 7
Apr 05 10:13:26 lab2 kernel: pci 0000:00:1d.0: Adding to iommu group 8
Apr 05 10:13:26 lab2 kernel: pci 0000:00:1f.0: Adding to iommu group 9
Apr 05 10:13:26 lab2 kernel: pci 0000:00:1f.3: Adding to iommu group 9
Apr 05 10:13:26 lab2 kernel: pci 0000:00:1f.4: Adding to iommu group 9
Apr 05 10:13:26 lab2 kernel: pci 0000:00:1f.5: Adding to iommu group 9
Apr 05 10:13:26 lab2 kernel: pci 0000:00:1f.6: Adding to iommu group 9
Apr 05 10:13:26 lab2 kernel: pci 0000:01:00.0: Adding to iommu group 10
Apr 05 10:13:26 lab2 kernel: pci 0000:02:00.0: Adding to iommu group 11
Apr 05 10:13:26 lab2 kernel: pci 0000:03:00.0: Adding to iommu group 12
Apr 05 10:13:26 lab2 kernel: intel_iommu=on
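One more thing worth confirming in that log, since passthrough also depends on it, is interrupt remapping; the usual check is something like:

dmesg | grep -i -e DMAR -e remapping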
Conclusion:
As far as I understand it, this is all correct and the passthrough should work.
To verify this, I set up a VM with Manjaro and the passthrough worked there.
Problem:
As soon as I try the passthrough with a pfSense VM, pfSense does not recognize the NICs. Accordingly, I can't do anything after pfSense boots and I get an error like in the picture below.
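For what it's worth, from the pfSense/FreeBSD shell the guest's PCI bus can be listed to see whether the card at least shows up as a PCI device even if no driver attaches:

pciconf -lv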
What did I do:
First I tried changing the VM settings:
OS type from "Linux 5.x - 2.6 Kernel" to "Linux 2.4 Kernel"
then separately and in combination
BIOS from SeaBIOS to UEFI (OVMF)
then separately and in combination
Machine type from the default (i440fx) to q35
then separately and in combination all drive types
SCSI, IDE, SATA, etc.
Cores always 8 with 1 socket (limited by the CPU)
Memory 8096 MB
No network interface (the NICs are meant to be passed through)
Under Hardware, both NICs added as PCI devices in various combinations (see the example config after this list):
All Functions on and off
ROM-Bar on and off
PCI Express (q35) on and off
No improvement
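For reference, this is roughly what the relevant part of the VM config (/etc/pve/qemu-server/<vmid>.conf) looked like in one of the q35/OVMF attempts; the exact option values and the assumption that 01:00.0 and 02:00.0 are the two AQC107s are illustrative, not copied verbatim:

bios: ovmf
machine: q35
cores: 8
memory: 8096
hostpci0: 0000:01:00.0,pcie=1,rombar=0
hostpci1: 0000:02:00.0,pcie=1,rombar=0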
Next I created Linux bridges and added them to the VM:
vmbr1 for WAN
vmbr2 for LAN (just for try)
With this I could finish the installation, but even then pfSense did not recognize the interfaces in dmesg.
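The bridge definitions in /etc/network/interfaces were the usual minimal ones, roughly like this (the physical interface names are placeholders, not the real names on this box):

auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0

auto vmbr2
iface vmbr2 inet manual
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0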
Reinstallation, going from the latest PVE 7.1 down to PVE 7.0-11;
went through everything again, same result.
Afterwards I tried several modifications to the kernel command line
(/proc/cmdline).
I tested the configuration below by adding each parameter individually, saving, and rebooting after each change; I also retested the VM configurations above again and again, without success.
root@lab2:~# cat /proc/cmdline
initrd=\EFI\proxmox\5.11.22-4-pve\initrd.img-5.11.22-4-pve root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on vfio-pci.ids=1d6a:07b1 iommu=pt pcie_acs_override=downstream,multifunction
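As a side note: the same early binding of the card to vfio-pci can also be done via modprobe.d instead of the vfio-pci.ids kernel parameter (same device ID 1d6a:07b1 as above); sketched here only as the equivalent setup, not as something additional that fixed it:

# /etc/modprobe.d/vfio.conf
options vfio-pci ids=1d6a:07b1
softdep atlantic pre: vfio-pci

update-initramfs -u -k all

After a reboot, lspci -nnk -d 1d6a: should then show vfio-pci as the kernel driver in use for both ports.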
I have tried everything with both pfSense versions as listed below.
I have of course read through everything in the forums and also followed and carried out the instructions in the wiki. Could it be that there are limitations with pfSense and this NIC (it is a 10 Gbps card)?
Maybe someone can help me?