[SOLVED] Switching from ESXi to Proxmox and have networking questions

bgatestmg

Hey everyone — hope this kind of post is allowed!


We’re in the process of migrating from VMware to Proxmox, and so far things are going great. We’re a small shop with 3 production hosts and 3 lab hosts.
iSCSI connectivity to our HPE Nimble arrays has been rock-solid, which was my biggest concern.


Where I’m still a little unsure is the VM networking design.




Our VMware setup


In VMware, we had a simple standard vSwitch on each host with a bunch of VLANs trunked through. Each port group represented a VLAN, and the uplinks were configured to “allow all VLANs.”


One special case was our “Servers” network, which used VLAN ID 4095 — VMware’s “all VLANs” passthrough mode. That allowed certain VMs to communicate across multiple VLANs freely.

VMware vSwitch: (screenshots of the vSwitch and port group configuration)





Our current Proxmox lab setup


I’ve been experimenting with Proxmox SDN and VLAN-aware bridges.
Each host has a NIC dedicated to VM traffic (eno1), trunked on the switch side with all VLANs allowed.
Here’s the current setup:
SDN VLANs: (screenshot)

Network interfaces (identical between all hosts): (screenshot)

  • eno1 → dedicated VM trunk interface
  • vmbr1 → Linux bridge for VM traffic (VLAN aware)
  • Using SDN to define each VLAN tag (101, 103, 22, 70, etc.); a rough sketch of the interfaces config is below
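For reference, this is roughly what the relevant part of /etc/network/interfaces looks like on each host (a trimmed-down sketch of my lab config; the management bridge and IPs are omitted):

auto eno1
iface eno1 inet manual
        # dedicated VM trunk uplink; the switch port allows all VLANs

auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094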

My questions


  1. What’s the best way to replicate the VMware “VLAN 4095 / Allow All VLANs” concept in Proxmox?
    • Is there an equivalent to “pass all VLANs through to this VM”?
  2. Should I be using Open vSwitch (OVS) instead of Linux bridges for this type of trunked multi-VLAN environment?
  3. For general VM traffic, is using SDN + VLAN-aware bridge the right approach — or should I be defining VLANs directly on vmbr1 instead?


So far, this setup is working great in the lab — I just want to make sure I’m building it the right way before we replicate it in production.


Any advice, example configs, or feedback from others who migrated from VMware vSwitches (especially with VLAN 4095 setups) would be much appreciated. Thanks! I’m likely just severely overthinking all of this.


I might make a separate post later to validate my iSCSI/multipathing configuration with our HPE Nimbles.
 
After more digging into how the VMware environment was originally set up (it predates me), I noticed that anything on the Servers VLAN was actually on a trunk port with native VLAN 2 on the switch side. So I just added VLAN 2 to our lab and am importing a test VM from VMware to Proxmox to make sure it behaves as it should. I did notice some iSCSI oddities when I added a new 'datastore' (sorry, still stuck in VMware terms): the storage showed as unknown on two hosts until I SSHed in and ran multipath -r and multipath -ll, after which it became usable. I probably have something configured wrong, but this is why we lab.
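For anyone curious, this is roughly what I ran on each affected host (nothing Nimble-specific, just a generic rescan; your device names will differ):

iscsiadm -m session --rescan   # rescan the existing iSCSI sessions so the new LUN shows up
multipath -r                   # reload the multipath maps
multipath -ll                  # list the maps and confirm all paths are active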

We plan to buy full support once we are closer to going to production, but it didn't make sense to license the lab and our soon-to-be new prod cluster while I migrate workloads from VMware to the lab, format the existing servers, and then migrate the workloads back to the prod environment.
 
We had the same challenge with the VMware trunk-all VLAN 4095. After some research we came to the conclusion that it is not yet possible with SDN VLANs, so we spent an afternoon adding all the previously used VLANs. From a design perspective it is also better to make all your VLANs visible, so that there are no forgotten VLANs still in use.
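If it helps, bulk-adding them can be scripted with pvesh; a rough sketch (the zone name and VLAN list are just examples, adjust the bridge to yours):

# one-time: create a VLAN zone bound to the VLAN-aware bridge
pvesh create /cluster/sdn/zones --zone lab --type vlan --bridge vmbr1

# one vnet per VLAN tag that existed in the old environment
for tag in 2 22 70 101 103; do
    pvesh create /cluster/sdn/vnets --vnet "vlan$tag" --zone lab --tag "$tag"
done

# apply the pending SDN configuration
pvesh set /cluster/sdn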
 

  1. What’s the best way to replicate the VMware “VLAN 4095 / Allow All VLANs” concept in Proxmox?
    • Is there an equivalent to “pass all VLANs through to this VM”?
Use the vmbrX directly. (Not sure why it's VLAN 4095 on VMware; is 4095 a hardcoded trick in VMware to allow all VLANs?)
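A minimal example of what that looks like (the VM ID and VLAN list are placeholders):

# no 'tag' on the NIC, so the guest sees the full trunk from the VLAN-aware vmbr1
qm set 100 --net0 virtio,bridge=vmbr1

# optional: restrict the trunk to specific VLANs instead of passing everything
qm set 100 --net0 'virtio,bridge=vmbr1,trunks=22;70;101;103'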

  2. Should I be using Open vSwitch (OVS) instead of Linux bridges for this type of trunked multi-VLAN environment?
no

  3. For general VM traffic, is using SDN + VLAN-aware bridge the right approach — or should I be defining VLANs directly on vmbr1 instead?
Both are the same; choose whichever you prefer. (SDN allows more complex setups like VXLAN, EVPN, etc., but for a simple VLAN setup it's the same.)
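Concretely, assuming an SDN vnet named vlan101 exists, these two NIC configs end up equivalent (VM ID and names are examples):

# option A: tag the NIC directly on the VLAN-aware bridge
qm set 101 --net0 virtio,bridge=vmbr1,tag=101

# option B: attach the NIC to the SDN vnet, which applies the tag for you
qm set 101 --net0 virtio,bridge=vlan101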
 
Yeah, I think 4095 is a trick VMware does to allow all VLANs, but I figured out how it was actually being used in this environment (pretty bad practice, tbh), so I changed it up and simply added the VLAN to SDN. Worked like a charm; I overthought that way too much.


What would the use cases be where OVS actually becomes necessary or useful? I doubt our environment will require it, but it piques my curiosity.

Ahhh, gotcha, that makes sense. I think I personally prefer SDN a bit more; it seems cleaner in my opinion.
 
Ten years ago, OVS had features that the Linux bridge didn't have (VLAN awareness, for example). But today, maybe the only interesting feature it still has is port mirroring.

The whole SDN stack doesn't use OVS at all (including for VXLAN, EVPN, etc.).

I don't like OVS personally, because you have a userland daemon, and if it crashes, you have no network anymore... And in the past there were also network interruptions on service upgrades/restarts.
 
Ahhh, I see now, that makes perfect sense! Thanks a ton for letting me pick your brain!

Now to write up a post about a small iSCSI/multipath issue I think I'm having (maybe it's not really an issue), but I don't want to contaminate one thread with too many different topics.

Specifically iSCSI with HPE Nimbles and LVM over iSCSI.