I have to test it... What kind of NICs do you use, passthrough and/or vmbr bridges?

When I add a second network device, OPNsense does not boot anymore.
I configured Intel E1000 for all network devices of the VM/OPNsense.

Are you using VirtIO or e1000 as the model for your vmbr? VirtIO is what I would recommend, for both speed and reliability. Maybe also give the latest PVE 5.11 kernel a try and see if booting from that helps.
https://forum.proxmox.com/threads/kernel-5-11.86225/
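For reference, switching the model is a one-line change per interface. A minimal sketch, assuming VM ID 100 and bridge vmbr0 (both hypothetical, adjust to your setup):

```shell
# Change the first interface of VM 100 from e1000 to VirtIO.
# Keeping the existing MAC address (placeholder below) means OPNsense
# does not treat it as a brand-new NIC.
qm set 100 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
```

The same change can be made in the GUI under VM -> Hardware -> Network Device -> Model. Note that OPNsense will then see the interface as vtnet0 instead of em0, so the interface assignments inside OPNsense may need to be redone.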
More than one virtual network device can be assigned to a Linux bridge that is hooked to a port/slave (physical NIC). This way many VMs can use a single NIC in parallel, and I can reconfigure this quickly and flexibly, especially if you additionally use OPNsense for tasks not offered by PVE. For example, I can hook a server VM to the NIC-backed Linux bridge (OPNsense: WAN) and test it, then quickly hook it to the internal Linux bridge (OPNsense: LAN), test it again, and prove that my OPNsense configuration works. I guess reconfiguring a passed-through NIC is not done so quickly and is more error-prone.

I would like to know if someone of you passes the NIC(s) through to the OPNsense VM and how well it works.
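The bridge layout described above can be sketched in the host's network config. This is only an illustration, assuming enp1s0 is the physical WAN NIC (hypothetical name, adjust to your hardware):

```shell
# /etc/network/interfaces (Proxmox host) -- sketch only
auto lo
iface lo inet loopback

iface enp1s0 inet manual        # physical NIC, used only as a bridge port

auto vmbr0                      # WAN bridge: OPNsense WAN attaches here
iface vmbr0 inet manual
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0

auto vmbr1                      # internal LAN bridge: no physical port
iface vmbr1 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0
```

Moving a test VM between WAN and LAN is then just a matter of changing its `bridge=` setting; no host reconfiguration or reboot is needed.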
I think a shared NIC is not as secure as a dedicated NIC for a firewall like OPNsense... What do you believe is the (main) advantage of passing the NIC through to a VM?
> I think a shared NIC is not as secure as a dedicated NIC for a firewall like OPNsense...

I agree if you use the NIC for an exposed uplink to the internet. But for a single uplink, I would prefer a router on dedicated hardware, like OPNsense on a Zotac CI329 or comparable, so as not to lose the connection when rebooting, crashing, or misconfiguring the host.
> I would like to know if someone of you passes the NIC(s) through to the OPNsense VM and how well it works.

Works well when your MB/BIOS supports it and the NIC can be isolated. I like SR-IOV even more because you can pass through several virtual copies of a single NIC and not just one, again only when the NIC/BIOS/MB supports it well. I have used IOMMU passthrough, SR-IOV, and basic Linux bridges, and tried Open vSwitch as well, with pfSense/OPNsense/VyOS. I was able to utilize all but Open vSwitch well, and that was likely because it was last on my list to try and by then I was too lazy to put in the effort, given the ease of Linux bridges. I finally settled on Linux bridges (vmbr) because I have not found anything but theoretical security issues when researching (even for WAN), the speed was fine for my 1 Gbit symmetrical connection, and they are easy to use with built-in Proxmox support.
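For anyone wanting to try SR-IOV, here is a rough sketch of creating virtual functions on the host. This assumes a card that supports SR-IOV showing up as enp1s0 (hypothetical name) and the IOMMU enabled in the BIOS and on the kernel command line:

```shell
# Enable the IOMMU (Intel example) -- edit /etc/default/grub, then run update-grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# Create 4 virtual functions on the physical NIC:
echo 4 > /sys/class/net/enp1s0/device/sriov_numvfs

# The VFs appear as separate PCI devices, each of which can be
# passed through to a different VM:
lspci | grep -i "virtual function"
```

Note that `sriov_numvfs` resets to 0 on reboot, so you would typically persist it with a udev rule or a small systemd unit.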
> No security reason I have run across to use passthrough or SR-IOV for LAN. Speed, if connecting and utilizing above 1 Gbit, is another matter.

Yeah, above 1 Gbit is hard. My 10 Gbit NIC using VirtIO is only working at around 1.1 to 1.2 Gbit, and I'm not sure how much SR-IOV or IOMMU would improve that. As far as I understand, OPNsense needs hardware offloading disabled even if you are not virtualizing the NIC, so every packet needs to be processed by the CPU, and that is bottlenecking the speed.
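If you want to see which offloads are actually active on the host side of a bridge port, ethtool can show and change them. A sketch, assuming the hypothetical interface name enp1s0 (inside OPNsense itself the offloads are toggled under Interfaces -> Settings):

```shell
# Show current offload settings for the physical NIC:
ethtool -k enp1s0 | grep -E "segmentation|checksum|offload"

# Example of disabling common offloads on the host side, to match
# OPNsense having TSO/LRO/checksum offloading turned off in the guest:
ethtool -K enp1s0 gro off tso off gso off
```

Whether disabling host-side offloads helps or hurts throughput seems to depend on the NIC and driver, so it is worth benchmarking both ways rather than taking this as a fixed recommendation.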
> I configured Intel E1000 for all network devices of the VM/OPNsense.

Check the linked thread below. The I219-LM coupled with the emulated Intel E1000 seems likely the issue.
I changed all four interfaces to VirtIO, and it is still running. So I do not have any need to test another kernel. Thanks for your hints!