VLAN (802.1Q) with Proxmox VE, OPNsense (as a VM in Proxmox), and UniFi systems

tl;dr:
I want to pipe VLANs around my networks, both the physical and virtual sections, to separate things like IoT out.

Some more detail:
I have a network consisting of Proxmox VE containers and VMs (one being an OPNsense firewall/router), plus physical network components such as UniFi switches and APs. I want OPNsense to handle the bulk of inter-VLAN routing, and I want flexibility in whether VMs, containers, and physical network equipment have access to a VLAN trunk (carrying either all or a subset of the VLANs) or just a single VLAN. I'm not quite sure how to go about this on the Proxmox side: do I use VLAN-aware bridges, a virtual NIC per VLAN, or SDN? What are the pros and cons?

I've read some of the docs, but they get a little vague once they reach the realms of selective trunking and the traditional Linux vs. SDN options.

I've read some of the threads here, but they tend to be about purely VM/container scenarios, and not so much about connections out to the physical world.

Lots more detail:
I have Proxmox VE, running a mix of containers and virtual machines, connected through:

VMs:
  • OPNsense (WAN NIC is mapped in via PCI passthrough; the LAN side is a virtual NIC into the Proxmox LAN bridge, then out the Proxmox physical NIC)
  • HomeAssistant OS (virtual NIC, currently into the Proxmox LAN bridge)
Containers:
  • tt-rss news reader (virtual NIC)
  • Unifi Controller (virtual NIC)
My initial thoughts are that I'd like to VLAN trunk out of the OPNsense firewall VM, and then:
  • VLAN trunk (incl. VLAN 1 untagged LAN) out of Proxmox physical NIC to appropriately configured switch equipment
  • Selective VLAN trunk into HomeAssistant (LAN + IoT vlans) OR multiple NICs, tapped off specific VLANs from firewall trunk
  • Specific VLANs into certain containers/VMs (say admin only into one, vs. perhaps security NVR into another)
It seems hard to figure out what the best approach is, e.g.:
  • Have guest OSs handle all the VLAN tagging
  • Pre-tag guest NICs at hypervisor/host level, requiring a NIC per VLAN
  • Whether it is possible to mix and match firehose VLAN trunking with more selective VLAN access (following the principle of only sending a guest the traffic it needs)
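For what it's worth, from what I can tell Proxmox does let you mix these styles per guest on a single VLAN-aware bridge, via the tag and trunks options on the guest NIC. A rough sketch (the VMIDs and VLAN IDs below are made-up placeholders, and it assumes vmbr0 is VLAN-aware):

```shell
# Single VLAN: the bridge tags/untags VLAN 30, so the guest OS
# just sees untagged traffic.
qm set 101 --net0 'virtio,bridge=vmbr0,tag=30'

# Selective trunk: only VLANs 10 and 40 reach the guest, which must
# then do its own 802.1Q tagging.
qm set 102 --net0 'virtio,bridge=vmbr0,trunks=10;40'

# Same idea for a container:
pct set 201 --net0 'name=eth0,bridge=vmbr0,tag=40'
```

The quotes matter in a shell, since the semicolon in trunks=10;40 would otherwise be split.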
Some of the documentation suggests that whilst a lot of settings can be done in the GUI, some customisations require editing the network interfaces file, and that there's more than one way to do all this (guest OS tagging, Proxmox-side Linux VLANs, Proxmox SDN). Happy to RTFM or read blogs if there's a specific page. I see links above mentioning SDN and what the products do, but not much discussion of scenarios and ideal configurations.
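As a sketch of the SDN route (the zone/VNet names and tag here are made up; this assumes a recent PVE release with SDN available), a VLAN zone maps VNets onto an existing bridge, and guests then attach to a VNet as if it were a normal bridge:

```shell
# Create a VLAN zone on top of the existing vmbr0 bridge,
# then one VNet per VLAN.
pvesh create /cluster/sdn/zones --type vlan --zone lanz --bridge vmbr0
pvesh create /cluster/sdn/vnets --vnet iot --zone lanz --tag 40

# Apply the pending SDN configuration cluster-wide.
pvesh set /cluster/sdn
```

The same can be done in the GUI under Datacenter → SDN.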

 
Hello! I know it's been a bit, so maybe you've found your way along.

This is basically exactly what I've done, using OVS bridges to hand over all four ports of a NIC to the OPNsense VM, leaving the MoBo port for connecting directly to the PVE host if I need it. I left VLAN Aware set to off and defined a VLAN tag on each individual VM/container network setting. This way, OPNsense manages all the traffic and inter-VLAN communication, and hosts all the firewall rules.
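Not my exact config, but an OVS bridge of that shape looks roughly like this in /etc/network/interfaces (requires the openvswitch-switch package; the port names here are examples):

```shell
auto vmbr1
iface vmbr1 inet manual
        ovs_type OVSBridge
        ovs_ports enp1s0f0

auto enp1s0f0
iface enp1s0f0 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr1
```

Repeat the OVSPort stanza per physical port handed to the bridge; the per-guest VLAN then goes in the VM/container network setting as usual.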

The only physical devices I have connected to this are ones which can enforce VLANs by port/connection (controlled switch, wireless AP with multiple SSIDs set to specific VLANs for isolating guests, house-member devices, IoT, and security devices), and other devices connect specifically through those.

If you have any questions, feel free to let me know, and I'll see if I can answer. My setup has been running for ~7 years with no issues, so I've forgotten a lot of how I did certain things, but I also took decent notes. I'm also in the middle of setting up a second PVE box (I came across your post while googling things for that), so I'm doing some digging too. It seems like there may be a new option via SDN which I need to look into.
 
Hi,
one question: how did you map the VLANs to OPNsense? One interface or several? (I failed at this.)
Can you provide your config?

Thanks
 
At the moment, this is what my interfaces file looks like:

Code:
auto lo
iface lo inet loopback

iface enp4s0 inet manual
#Ethernet 1 - Admin LAN

iface eno4 inet manual
#SFP+ 4

iface enp5s0 inet manual
        bridge-access 2
#Ethernet 2 - NVR

iface enp6s0 inet manual
#Ethernet 3

iface enp7s0 inet manual
#Ethernet 4

iface eno1 inet manual
#SFP+ 1

iface eno2 inet manual
#SFP+ 2

iface eno3 inet manual
#SFP+ 3

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.254/24
        gateway 192.168.1.1
        bridge-ports enp4s0 enp5s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#Default admin bridge

source /etc/network/interfaces.d/*

My WAN is connected to enp8s0, but I've forwarded that through to my firewall VM directly using PCI passthrough, so it's less likely that any WAN packets leak within the hypervisor. I suspect it's not perfect, but it did seem weird to create a NIC for EVERY VLAN. Open to feedback.
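For reference, the passthrough itself is just a couple of commands (sketch only: the VMID 100 and the PCI address are placeholders for your own):

```shell
# Find the PCI address of the WAN NIC:
lspci -nn | grep -i ethernet

# Pass it through to the firewall VM.
# pcie=1 assumes the VM uses the q35 machine type; drop it otherwise.
qm set 100 --hostpci0 0000:08:00.0,pcie=1
```

IOMMU (VT-d/AMD-Vi) has to be enabled in the BIOS and kernel for this to work.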

My LAN port on Proxmox VE host goes into a VLAN aware switch port.
I have an OPNsense VM that NATs WAN back to vmbr0.

I'd love to see a document somewhere with examples and relative pros and cons of:
  • Virtual NIC / Bridge for every VLAN
  • VLAN trunking NICs / Bridges
  • Using SDN / virtual switches
... the documentation I've found to date gives a lot of command-syntax info, but not the why you'd use one approach over another.
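For contrast with my VLAN-aware setup above, my understanding is that the first option (a dedicated bridge per VLAN) looks roughly like this in /etc/network/interfaces (interface names and VLAN ID are examples, not from my config):

```shell
# One 802.1Q subinterface on the physical NIC per VLAN...
auto enp4s0.40
iface enp4s0.40 inet manual

# ...and one plain (non-VLAN-aware) bridge per VLAN. Guests attached
# to vmbr40 only ever see VLAN 40, and see it untagged.
auto vmbr40
iface vmbr40 inet manual
        bridge-ports enp4s0.40
        bridge-stp off
        bridge-fd 0
```

Simple and explicit, but it multiplies bridges fast, which is exactly why I went VLAN-aware instead.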
 