Hey everyone — hope this kind of post is allowed!
We’re in the process of migrating from VMware to Proxmox, and so far things are going great. We’re a small shop with 3 production hosts and 3 lab hosts.
iSCSI connectivity to our HPE Nimble arrays has been rock-solid, which was my biggest concern.
Where I’m still a little unsure is the VM networking design.
Our VMware setup
In VMware, we had a simple standard vSwitch on each host with a bunch of VLANs trunked through. Each port group represented a VLAN, and the uplinks were configured to “allow all VLANs.”
One special case was our “Servers” network, which used VLAN ID 4095 — VMware’s “all VLANs” passthrough mode. That allowed certain VMs to communicate across multiple VLANs freely.
VMware vSwitch: (screenshot)
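For anyone who can’t see the attachment: in esxcli terms the layout was basically one standard port group per VLAN, plus the “Servers” port group set to VLAN 4095 so the guest does its own tagging. Roughly something like this (port group and vSwitch names here are examples, not our exact inventory):

Code:
# One standard port group per VLAN on the existing vSwitch
# (names like vSwitch0 / VLAN101 are examples)
esxcli network vswitch standard portgroup add --portgroup-name=VLAN101 --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=VLAN101 --vlan-id=101

# The "Servers" port group: VLAN ID 4095 = pass all VLANs through to the VM (guest tagging)
esxcli network vswitch standard portgroup add --portgroup-name=Servers --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=Servers --vlan-id=4095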


Our current Proxmox lab setup
I’ve been experimenting with Proxmox SDN and VLAN-aware bridges.
Each host has a NIC dedicated to VM traffic (eno1), trunked on the switch side with all VLANs allowed.
Here’s the current setup:
SDN VLANs: (screenshot)

Network Interfaces (identical between all hosts): (screenshot)
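In case the screenshot doesn’t come through, the VM-traffic part of /etc/network/interfaces is essentially the stock VLAN-aware bridge pattern, something like this (management bridge omitted, and the bridge-vids range is just what I’m using in the lab):

Code:
# VM trunk NIC, no IP on the physical interface
auto eno1
iface eno1 inet manual

# VLAN-aware bridge for VM traffic
auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094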

- eno1 → dedicated VM trunk interface
- vmbr1 → Linux bridge for VM traffic (VLAN aware)
- Using SDN to define each VLAN tag (101, 103, 22, 70, etc.), with a rough config sketch below
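For reference, the SDN side boils down to a VLAN zone pointing at vmbr1 plus one VNet per tag, so the generated config ends up looking roughly like this (zone and VNet names here are just examples):

Code:
# /etc/pve/sdn/zones.cfg  (zone name "vmzone" is an example)
vlan: vmzone
        bridge vmbr1
        ipam pve

# /etc/pve/sdn/vnets.cfg  (one VNet per VLAN tag)
vnet: vlan101
        zone vmzone
        tag 101

vnet: vlan103
        zone vmzone
        tag 103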
My questions
- What’s the best way to replicate the VMware “VLAN 4095 / Allow All VLANs” concept in Proxmox?
- Is there an equivalent to “pass all VLANs through to this VM”? (See the sketch after this list for what I mean.)
- Should I be using Open vSwitch (OVS) instead of Linux bridges for this type of trunked multi-VLAN environment?
- For general VM traffic, is using SDN + VLAN-aware bridge the right approach — or should I be defining VLANs directly on vmbr1 instead?
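To make the first two questions concrete, what I’m picturing is the guest doing its own tagging, either by attaching its NIC to the VLAN-aware bridge with no tag set, or with an explicit trunk list on the NIC. Something along these lines (the VMID and VLAN list are just examples, and I’m not sure which of these is the recommended pattern):

Code:
# VMID 100 and the VLAN numbers below are examples
# Option A: no VLAN tag on the NIC, so the VM receives tagged frames for the VLANs allowed on vmbr1
qm set 100 --net0 'virtio,bridge=vmbr1'

# Option B: limit the trunk to specific VLANs with the trunks option
qm set 100 --net0 'virtio,bridge=vmbr1,trunks=22;70;101;103'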
So far, this setup is working great in the lab — I just want to make sure I’m building it the right way before we replicate it in production.
Any advice, example configs, or feedback from others who migrated from VMware vSwitches (especially with VLAN 4095 setups) would be much appreciated. Thanks! I’m likely just severely overthinking all of this.
I might make a separate post later to validate my iSCSI/multipathing configuration with our HPE Nimbles.