OPNsense PCI Passthrough NICs Proxmox v7.0-11

mle
Hi people,

I ran into a problem.

First of all, some information on the side:
I know how PCI passthrough works under Proxmox.
All parameters and modules required for my hardware are set and loaded.
The device is already unbound by a script (run via cronjob during Proxmox startup) and bound to the vfio driver.
The IOMMU groups are also fine.
The passed-through PCIe devices were tested in a Windows Server 2019 VM.
In general, I can say that passing them through to a VM works without any problems,
even after a restart, hard reset or similar of the VM.
Driver installation and the general functions a NIC has to perform work as well.
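
The bind script is nothing exotic; roughly it does this (simplified sketch, the PCI address 0000:10:00.0 is an example and must be adjusted, and the vfio-pci module must already be loaded):

#!/bin/sh
# Detach the NIC from the host driver (e1000e for this card) and hand it to vfio-pci.
# 0000:10:00.0 is an example address; adjust to your own system.
DEV=0000:10:00.0
if [ -e /sys/bus/pci/devices/$DEV/driver ]; then
    echo "$DEV" > /sys/bus/pci/devices/$DEV/driver/unbind
fi
echo vfio-pci > /sys/bus/pci/devices/$DEV/driver_override
echo "$DEV" > /sys/bus/pci/drivers/vfio-pci/bind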

So I got the OPNsense ISO and started the installation.
All VirtIO NICs are visible, but the PCIe passthrough NICs are not.
I can see on the switch, however, that the NICs come up.
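
(For anyone who wants to check the same thing: from the OPNsense/pfSense shell you can see whether FreeBSD detects the card at all, e.g.:

# List PCI devices and the network interfaces FreeBSD created
pciconf -lv | grep -B3 -i ethernet
ifconfig -l
)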

So: first Google, then the forum search. xD

Apparently this is a problem with FreeBSD 12 and should no longer occur under FreeBSD 13. (OPNsense and pfSense are both currently still on FreeBSD 12.x.)

The suggested solution is to switch to i440fx or to use q35 v3.1. (Not tested.)
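
For reference, pinning the machine type is a one-liner on the host (sketch, assuming VMID 100):

# Pin the VM to the q35 v3.1 machine type
qm set 100 --machine pc-q35-3.1
# or switch to i440fx (the Proxmox default)
qm set 100 --machine pc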

I cannot use the PCIe flag with i440fx.
As for q35 v3.1, I don't know enough to say whether it brings performance, stability and/or security concerns with it.

Both are somehow not really satisfactory.

So, as a test, I switched to pfSense.

After a few tests I did not see any errors; e.g. the NICs are visible, get a DHCP lease, etc.

So I only get these errors under OPNsense. (Although I have read again and again in most forums that the same problem exists under pfSense.)

Hardware:
Intel Corporation 82571EB/82571GB Gigabit Ethernet Controller (4x 1 GbE NIC)

Settings:
hostpci0: 0000:10:00,pcie=1,rombar=0

With that, 2 of the 4 NICs (10:00.0 and 10:00.1) are passed through.
Both NICs are in the same IOMMU group, but that should be fine, because I want to pass both of them through anyway. (Multi-WAN)
No errors occurred under Windows Server.
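
For completeness, the grouping can be verified on the host with plain sysfs (sketch):

# List every PCI device together with its IOMMU group
for d in /sys/kernel/iommu_groups/*/devices/*; do
  g=${d#/sys/kernel/iommu_groups/}
  printf 'group %s: %s\n' "${g%%/*}" "$(lspci -nns "${d##*/}")"
done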

YES, I only want to connect my WAN NICs to the VM via PCI passthrough; I do not want to use a Linux bridge for the WAN.

Are you aware of these problems, and what solutions do you use?

I look forward to your answers and suggestions for solutions.
 
My advice is not to worry and to use a configuration that works. Or wait for OPNsense to move to FreeBSD 13. Or try multiple configurations (duplicate the VM with changed settings), measure what really matters to you, and make an informed decision between the available workarounds. In my experience with GPUs there is no noticeable performance difference between virtual PCI and virtual PCIe. Maybe you can be more specific about why those, hopefully temporary, workarounds are "not really satisfactory" and we can focus on those?
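
A sketch of the duplicate-and-measure idea, assuming your firewall is VMID 100 and 101 is free:

# Full clone for side-by-side testing; only one of the two can run at a time,
# since both configs claim the same PCI device. (If cloning is refused because
# of the hostpci entry, remove it temporarily and re-add it on the clone.)
qm clone 100 101 --name opnsense-q35-test --full
# then change only the machine type on the copy
qm set 101 --machine pc-q35-3.1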
 
Thank you for your prompt reply.

I also use the German forum:

https://forum.proxmox.com/threads/opnsense-pci-passthrough-nics-proxmox-v7-0-11.94614/

In that thread it was already explained to me that the PCIe flag makes no difference in terms of performance;
it is just a flag that some drivers under Windows and the like require.

Unfortunately, I have trouble understanding what the virtual chipset (i440fx, q35) is all about.

I know that i440fx tends to be used with Windows and q35 with Linux-based systems.

But I don't know exactly what the differences are, apart from the PCIe flag.

In general, I can only say that overall system performance under q35 feels better on my systems.
This is also visible in the benchmarks, but it is almost within the measurement tolerance.

I am very familiar with Windows and Linux systems, but not with UNIX, so I cannot run meaningful benchmarks there.

It would be great if someone could explain the difference between i440fx and q35 in more detail. (I tried to find out more online but couldn't find any meaningful information beyond the intended use.)

In any case, the advantages and disadvantages would be interesting, and which chipset UNIX generally behaves better with.

Software:
Proxmox v7.0-11
OPNsense v21.7.1 x64

Tests:

Chipset        PCIe flag   Result
q35 v2.10-3.0  ON/OFF      Not tested
q35 v3.1       ON          Working
q35 v3.1       OFF         Not tested
q35 v4.0       ON          Not working
q35 v4.0       OFF         Not tested
q35 v4.0.1     ON          Not working
q35 v4.0.1     OFF         Not tested
q35 v4.2       ON          Not working
q35 v4.2       OFF         Not tested
q35 v5.0       ON          Not working
q35 v5.0       OFF         Not tested
q35 v5.1       ON          Not working
q35 v5.1       OFF         Not tested
q35 v5.2       ON          Not working
q35 v5.2       OFF         Not tested
q35 v6.0       ON          Not working
q35 v6.0       OFF         Working
i440fx v6.0    OFF         Working

This table is only about whether the NICs were visible. Whenever they were visible, they got a DHCP lease, but I did not carry out any further tests.
 
I did some tests yesterday. (Win Server 2019)

At the moment I can only say that with Proxmox v7.0-11 the benchmarks look very good, even under the i440fx v6.0 chipset.

But I still have the problem that PCI passthrough behaves strangely under i440fx. Passing the device through is not a problem in itself, but the driver installation then picks a completely different driver, with which the NIC does not work. I can change it by hand, but that is not a real solution, because it happens that after a restart the device is no longer recognized.

I have now passed both NICs through individually; the driver behavior is better then, but it still happens.

So even assuming both NICs are recognized correctly, something still doesn't quite fit: RDP is very slow. With q35 v6.0, even without the PCIe flag, I don't have these problems.

Of course I checked the MACs of the two NICs, and they match.

I just wanted to post this as info; I'll do some more tests when I'm home from work.

Thank you for your support and your great software.
 
Hi guys,
here is the addendum with my final setup.

Hardware:
Dell R320 Server (SR-IOV Enabled)
Intel Corporation 82571EB / 82571GB 4x NIC (2x NICs PCI passthrough for multi-WAN)

Software:
Proxmox v7.0-11, QEMU v6
OPNsense v21.7.1 x64

VM:
Chipset: q35 v6.0 (according to the OPNsense documentation there are only problems with QEMU v5; with v6 I no longer have any problems) (PCIe flag = disabled)
CPU: Type suitable for my system (e.g. SandyBridge-IBRS)
CPU flags: +md-clear;+pcid;+spec-ctrl;-ssbd;-ibpb;-virt-ssbd;-amd-ssbd;-amd-no-ssb;+pdpe1gb;-hv-tlbflush;-hv-evmcs;+aes (may differ on other systems; at least AES should be supported)
UEFI: Enabled
Storage: VirtIO SCSI incl. SSD emulation
PCI passthrough: all functions, without ROM-Bar and without the PCIe flag
NICs: 2x PCI passthrough, all others VirtIO, without firewall settings from Proxmox

Options:
Tablet input: Disabled
SPICE: Disabled
OS type: Other
RTC: Enabled
Hotplug: none
ACPI: Enabled
KVM: Enabled
QEMU Agent: Enabled (there is a plugin for it under OPNsense; tested and working)
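
Put together, this should correspond roughly to the following /etc/pve/qemu-server/<vmid>.conf (sketch only; storage, EFI disk and MAC are placeholders):

machine: pc-q35-6.0
bios: ovmf
ostype: other
cpu: SandyBridge-IBRS,flags=+md-clear;+pcid;+spec-ctrl;-ssbd;-ibpb;-virt-ssbd;-amd-ssbd;-amd-no-ssb;+pdpe1gb;-hv-tlbflush;-hv-evmcs;+aes
scsihw: virtio-scsi-pci
scsi0: <storage>:vm-<vmid>-disk-0,ssd=1
efidisk0: <storage>:vm-<vmid>-disk-1
hostpci0: 0000:10:00,rombar=0
net0: virtio=<mac>,bridge=vmbr0,firewall=0
tablet: 0
hotplug: 0
agent: 1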

I have now carried out a lot of tests.
There are no more abnormalities.

Performance: OK
Data throughput and packet times: OK
Resetting, stopping and restarting the VM: no problems

The only thing I found that didn't work is snapshots incl. RAM.
Then the passed-through PCI NICs behave strangely.

Restarting the VM corrects the problem.

Snapshots without RAM or backups don't pose any problems.
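
For anyone hitting the same thing, the RAM-less variant from the CLI (sketch, assuming VMID 100):

# Snapshot without RAM state -- fine with the passed-through NICs
qm snapshot 100 before-update
# The problematic variant additionally saves the RAM state:
# qm snapshot 100 before-update --vmstate 1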

Unfortunately, I cannot use i440fx, because one of the two NICs always behaves strangely for me; without PCI passthrough there should be no problems.

Thank you for your help; I hope I can help others who have problems with OPNsense under Proxmox.

German Forum Post:
https://forum.proxmox.com/threads/opnsense-pci-passthrough-nics-proxmox-v7-0-11.94614/
 
Hi @mle, thanks for your testing in the previous posts.
I assume you too have upgraded to the newer OPNsense versions, which are now based on FreeBSD 13 (13.1 in the most recent).
I recently dug up this post and remembered that I still had q35-3.1 in the config, as I experienced the same issues as you did back then.
So I switched to the latest q35.

proxmox: pve-manager/7.2-7/d0dd0e85 (running kernel: 5.15.39-1-pve)
machine: q35
hostpci0: 0000:02:00.0,pcie=1,rombar=0 (single port passed through, not all functions)
hostpci1: 0000:02:00.1,pcie=1,rombar=0 (single port passed through, not all functions)
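
For anyone still on a pinned version, the switch itself is a one-liner on the host (sketch, assuming VMID 100):

# Replace the pinned pc-q35-3.1 with the unversioned q35 alias,
# which follows the installed QEMU version
qm set 100 --machine q35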

I made the switch in the q35 config on OPNsense 22.1.10 and am now running 22.7.1. No issues.
Perhaps you could share your experience in an update and see if we can mark this issue as solved.
 
