Best practice for VMware VSE replacement

eMarcus

New Member
Jul 31, 2025
Hello!

We are currently planning to migrate an environment from VMware to Proxmox. This environment has several NSX networks defined, which are connected through a VSE (Virtual Service Edge) to an external transit VLAN. The VSE's main task is layer-3 routing.

What would be the best way to map this to Proxmox VE 9.0?

Any hints are welcome!
Thanks,
Marcus.
 
Have a look at Proxmox's SDN stack and its EVPN features; an NSX network is essentially EVPN underneath.
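To make that suggestion concrete, a minimal EVPN setup via `pvesh` might look roughly like the below. This is a sketch, not a tested recipe: the IDs (`myevpn`, `evpnz`, `vnet1`), ASN 65000, peer addresses, and VXLAN tags are all placeholders, and the exact parameter names should be checked against the Proxmox SDN documentation for your version.

```shell
# Sketch only: IDs, ASN, peers and tags are placeholders.
# Create an EVPN controller (the BGP peers are typically your cluster nodes).
pvesh create /cluster/sdn/controllers --controller myevpn --type evpn \
    --asn 65000 --peers 10.0.0.1,10.0.0.2,10.0.0.3

# Create an EVPN zone tied to that controller; exit nodes handle
# north-south traffic towards the transit VLAN.
pvesh create /cluster/sdn/zones --zone evpnz --type evpn \
    --controller myevpn --vrf-vxlan 10000 --exitnodes node1,node2

# Create a vnet in the zone plus an anycast gateway subnet.
pvesh create /cluster/sdn/vnets --vnet vnet1 --zone evpnz --tag 11000
pvesh create /cluster/sdn/vnets/vnet1/subnets --type subnet \
    --subnet 10.10.1.0/24 --gateway 10.10.1.1

# Apply the SDN configuration cluster-wide.
pvesh set /cluster/sdn
```

The exit-node role is what replaces the VSE's north-south routing here: the selected nodes announce the EVPN routes towards the outside world.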
 
We are currently evaluating a few options:
1.) Deploy simple containers as VSE replacements. The containers are plain Linux, each just routing one vnet to a VLAN.
-) Advantage: lightweight, easy to deploy and manage.
-) Disadvantage: high availability is harder to implement with containers.

2.) Deploy a single VM per cluster that connects all vnets of that cluster and routes them to their destination VLANs.
-) Advantage: HA for that VM is easy to implement within the cluster.
-) Disadvantage: without firewall rules, the VM would also route between vnets, which might not be allowed. Firewall rules are subject to change-control processes and are managed by our security group, so that would be hard to implement organizationally.

3.) We would actually prefer a solution based on Proxmox's built-in functionality. However, it seems we simply know too little about the new SDN/VXLAN/EVPN features. We thought about using EVPN with exit nodes, but we are not sure whether exit nodes would provide the required functions, so we hoped for a best practice ;-)
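For what option 1 would look like inside such a container, a rough sketch follows. The interface names (`eth0` on the vnet, `eth1` on the transit VLAN), the addresses, and the next-hop are assumptions; the container needs to be privileged enough to change sysctls and addresses.

```shell
# Sketch: turn a plain Linux container into a one-vnet router.
# Interface names and addresses below are placeholders.

# Enable IPv4 forwarding between the two interfaces.
sysctl -w net.ipv4.ip_forward=1

# eth0 is attached to the Proxmox vnet, eth1 to the transit VLAN.
ip addr add 10.10.1.1/24 dev eth0      # gateway address for the vnet
ip addr add 192.0.2.10/24 dev eth1     # address on the transit VLAN

# Default route towards the upstream router on the transit VLAN.
ip route add default via 192.0.2.1
```

The upstream router on the transit VLAN would then need a static route for 10.10.1.0/24 via 192.0.2.10 (or the container could speak BGP, e.g. with FRR), which is part of why HA for this design is awkward: the route target moves with the container.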

bye
Marcus.
 
The very same topology can be built entirely in the hosts (no VM doing the routing here): with the SDN overlay you can have multiple VRFs if required.

In my experience, the external BGP peering should be configured manually to accommodate VRF separation (the built-in tooling just merges everything into the default routing table).

Route leaking between VRFs/tenants should be done by an external network element (the one you are peering with).
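A manual per-VRF BGP peering of the kind described above might be sketched in FRR configuration like this. Everything here is a placeholder (VRF name, VNI 10000, ASN 65000, neighbor 192.0.2.1 in AS 65010), and where exactly to put custom FRR statements alongside the Proxmox-generated configuration (e.g. a local override file) should be verified against the Proxmox SDN documentation for your version.

```
! Sketch: per-VRF eBGP session towards the external router,
! keeping the tenant's routes out of the default table.
vrf vrf_evpnz
 vni 10000
exit-vrf
!
router bgp 65000 vrf vrf_evpnz
 neighbor 192.0.2.1 remote-as 65010
 address-family ipv4 unicast
  neighbor 192.0.2.1 activate
 exit-address-family
```

With one such stanza per VRF, each tenant peers with the external element separately, and that element decides what (if anything) is leaked between tenants, matching the recommendation above.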