[SOLVED] OPNsense 21.1 on PVE 6.4

peterka

I installed OPNsense 21.1 into a VM on PVE 6.4. When I add a second network device, OPNsense does not boot anymore.
 
I found a workaround (admittedly not a good one):
1. clone the VM with the OS (OPNsense 21.1)
2. in this clone, reset OPNsense to factory defaults
3. clone the VM/OS a second time
4. under PVE "Hardware", add a network device to the second VM/OS clone
5. boot the second VM/OS clone
6. I did not test importing the old OPNsense configuration
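
For reference, steps 1, 3, 4 and 5 roughly correspond to these commands on the PVE node (VMIDs 200/201, the clone names and bridge vmbr3 are just placeholders; step 2, the factory reset, is done from the OPNsense console menu):

qm clone 127 200 --name opnsense-reset --full    # 1. clone the original VM (mine is VMID 127)
# 2. boot VM 200, reset OPNsense to factory defaults from its console menu, then shut it down
qm clone 200 201 --name opnsense-2nic --full     # 3. clone the reset VM a second time
qm set 201 --net1 e1000,bridge=vmbr3             # 4. add the second network device (e1000 is what I had configured at the time)
qm start 201                                     # 5. boot the second clone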
 
I used "Linux Bridge" = "vmbr".
Many other changes to the VM/os-configuration lead to a non-boot as well.
 
Weird. Within the last month I set up two instances of OPNsense 21.1 on separate Proxmox 6.4 nodes and did not have any issue booting up or using the Linux bridges after adding up to 4 network interfaces.
 
ERRORS (from Tasks list, vmbr3 does exist)

VM 127 not running
TASK ERROR: Failed to run vncproxy.

AND:

bridge 'vmbr3' does not exist
kvm: network script /var/lib/qemu-server/pve-bridge failed with status 512
TASK ERROR: start failed: QEMU exited with code 1
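
When this error shows up, a quick sanity check on the node tells you whether vmbr3 is really defined and, more importantly, whether it is up at runtime, which is what QEMU cares about (ifreload needs the ifupdown2 package; otherwise apply the network config from the GUI or reboot):

grep -A 5 "iface vmbr3" /etc/network/interfaces   # is the bridge defined in the node config?
ip link show vmbr3                                # does it actually exist at runtime?
ifreload -a                                       # apply a pending network config (ifupdown2 only)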
 
8 x Intel(R) Xeon(R) CPU E3-1275 v6 @ 3.80GHz (1 Socket), 64GB RAM
Linux 5.4.114-1-pve #1 SMP PVE 5.4.114-1
pve-manager/6.4-6/be2fa32c
 
Now I cloned the VM/OPNsense 21.1, added one network device ... and it starts without error. Very strange. I have no idea why adding a network device sometimes leads to errors/no start and in other cases runs smoothly.
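
To narrow down why one clone starts and another does not, comparing the two VM configs would be my first step (127 is my failing VM, 200 stands in for the working clone's VMID):

diff <(qm config 127) <(qm config 200)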
 
NIC on-board: Ethernet controller: Intel Corporation Ethernet Connection (2) I219-LM (rev 31)
description: Ethernet interface
product: Ethernet Connection (2) I219-LM
vendor: Intel Corporation
physical id: 1f.6
bus info: pci@0000:00:1f.6
logical name: enp0s31f6
version: 31
serial: 4c:52:62:a7:98:1c
size: 1Gbit/s
capacity: 1Gbit/s
width: 32 bits
clock: 33MHz
capabilities: pm msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=3.2.6-k duplex=full firmware=0.8-4 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
resources: irq:143 memory:ef300000-ef31ffff


NIC on PCIe-Card: Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
description: Ethernet interface
product: 82574L Gigabit Network Connection
vendor: Intel Corporation
physical id: 0
bus info: pci@0000:01:00.0
logical name: enp1s0
version: 00
serial: 00:1b:21:3a:e2:89
size: 1Gbit/s
capacity: 1Gbit/s
width: 32 bits
clock: 33MHz
capabilities: pm msi pciexpress msix bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=3.2.6-k duplex=full firmware=1.8-0 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
resources: irq:19 memory:ef2c0000-ef2dffff memory:ef200000-ef27ffff ioport:e000(size=32) memory:ef2e0000-ef2e3fff memory:ef280000-ef2bffff

The other two network devices I configured are Proxmox-internal bridges without attached NICs (no Ports/Slaves under Node: System: Network).
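
Such an internal-only bridge is just a stanza in /etc/network/interfaces without any ports attached; vmbr4 here is only an example name for one of mine:

auto vmbr4
iface vmbr4 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0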
 
Here OPNsense 21.1.4 and 21.1.5 are working with virtio NICs on PVE 6.3 + PVE 6.4 + FreeNAS 11.3-U4.1 + TrueNAS Core 12.0-U3.1.
 
Are you using VirtIO or e1000 as the model for the NICs on your vmbr? VirtIO is what I would recommend, for both speed and reliability. Maybe also give the latest PVE 5.11 kernel a try and see if booting from that helps.


https://forum.proxmox.com/threads/kernel-5-11.86225/
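If I remember the package name correctly, the opt-in kernel from that thread is installed as a meta-package and activated with a reboot of the node:

apt update
apt install pve-kernel-5.11
# reboot the node afterwards so it boots into the new kernel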
I had configured Intel E1000 for all network devices of the OPNsense VM.
I changed all four to VirtIO, and it is still running.
So I have no need to test it with another kernel.
Thanks for your hints.
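
For anyone doing the same switch on the CLI, it is one qm set per NIC with the VM shut down; the VMID, MACs and bridges below are placeholders, and repeating the existing MAC keeps it stable across the model change:

qm set 127 --net0 virtio=AA:BB:CC:DD:EE:01,bridge=vmbr0
qm set 127 --net1 virtio=AA:BB:CC:DD:EE:02,bridge=vmbr3
# note: inside OPNsense the devices change from em0/em1 (e1000) to vtnet0/vtnet1 (virtio),
# so the interface assignment may have to be redone on the OPNsense console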
 
I would like to know whether any of you pass the NIC(s) through to the OPNsense VM, and how well it works.
More than one virtual network device can be assigned to a Linux bridge that is hooked to a port/slave (NIC), so many VMs can use a single NIC in parallel. I can also reconfigure this quickly and flexibly, which matters if you additionally use OPNsense for tasks PVE does not offer. For example, I can hook a server VM to the NIC-backed Linux bridge (OPNsense: WAN) and test it, then quickly hook it to the virtual-only Linux bridge (OPNsense: LAN), test it again, and verify that my OPNsense configuration behaves as intended. I guess reconfiguring a passed-through NIC is not done as quickly and is more error-prone.
The tradeoff of losing some performance with virtual networking is quite low with modern CPUs, so I prefer virtual networking.
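
As a sketch, re-hooking such a test VM between the two bridges is one command per move (VMID 130, the MAC and the bridge names are placeholders for my layout; repeating the MAC keeps it identical across moves):

qm set 130 --net0 virtio=AA:BB:CC:DD:EE:10,bridge=vmbr0   # hook to the NIC-backed bridge (OPNsense WAN side)
qm set 130 --net0 virtio=AA:BB:CC:DD:EE:10,bridge=vmbr4   # re-hook to the internal bridge (OPNsense LAN side)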
What do you believe is the (main) advantage of passing the NIC through to a VM?
 
I think a shared NIC is not as secure as a dedicated NIC for a firewall like OPNsense...
I agree if you use the NIC for an exposed uplink to the internet. But for a single uplink, I would prefer a router on dedicated hardware, such as OPNsense on a Zotac CI329 or comparable, so that the connection is not lost when the host reboots, crashes, or is misconfigured.
 
I would like to know whether any of you pass the NIC(s) through to the OPNsense VM, and how well it works.
Works well when your MB/BIOS supports it and the NIC can be isolated. I like SR-IOV even more, because you can pass through several virtual copies of a single NIC and not just one, again only when the NIC/BIOS/MB supports it well. I have used IOMMU passthrough, SR-IOV, and basic Linux bridges, and tried Open vSwitch as well, with pfSense/OPNsense/VyOS. I was able to use all of them well except Open vSwitch, and that was likely because it was last on my list to try and by then I was too lazy to put in the effort, given the ease of Linux bridges. I finally settled on vmbr Linux bridges because I have found nothing but theoretical security issues when researching (even for WAN), the speed was fine for my 1 Gbit symmetrical connection, and they win on ease of use and built-in Proxmox support.

I have not come across a security reason to use passthrough or SR-IOV for LAN. Speed, if you are connecting and utilizing above 1 Gbit, is another matter.
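
For anyone who wants to try it, the rough shape of an SR-IOV setup on the PVE side looks like this, and only works when NIC/BIOS/MB support it (interface name, VF count and PCI address below are placeholders; on Intel CPUs the kernel cmdline also needs intel_iommu=on):

dmesg | grep -e DMAR -e IOMMU                     # check that the IOMMU is enabled and active
echo 4 > /sys/class/net/<nic>/device/sriov_numvfs # create virtual functions on an SR-IOV capable NIC
lspci | grep -i virtual                           # find the PCI addresses of the new VFs
qm set 127 --hostpci0 0000:01:10.0                # pass one VF through to the OPNsense VM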
 
I have not come across a security reason to use passthrough or SR-IOV for LAN. Speed, if you are connecting and utilizing above 1 Gbit, is another matter.
Yeah, above 1 Gbit is hard. My 10 Gbit NIC using virtio only manages around 1.1 to 1.2 Gbit. But I'm not sure how much SR-IOV or IOMMU would improve that. As far as I understand, OPNsense needs hardware offloading disabled even if you are not virtualizing the NIC, so every packet has to be processed by the CPU, and that is what bottlenecks the speed.
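
When I measure this I do it with iperf3 through the firewall rather than trusting the link speed, and multiqueue on the virtio NIC is one knob worth trying, though I am not sure how much the FreeBSD vtnet driver gains from it (VMID, server IP, MAC and queue count are placeholders):

iperf3 -s                        # on a host behind OPNsense
iperf3 -c <server-ip> -P 4       # on a host on the other side, 4 parallel streams
qm set 127 --net0 virtio=AA:BB:CC:DD:EE:01,bridge=vmbr0,queues=4   # multiqueue, set while the VM is off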
 
