Proxmox SDN Traffic breakout Interface and routing

Marco83

New Member
Feb 20, 2026
Hello everyone,

I have a question about the SDN stack in Proxmox. Currently, traffic in the EVPN/VXLAN networks breaks out via the host interface that has the default route. Is there an officially supported way to change or define which interface is used without manually editing route maps in the shell?

I’m sure I can somehow rebuild this by creating VRFs and redirecting the traffic via the routing tables accordingly. However, I think this approach is not officially supported and might also not be persistent across updates.
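To illustrate what I mean by the manual approach, it would look roughly like this (a sketch only; the VRF name, table number, interface, and gateway are placeholders, not Proxmox-managed objects):

```shell
# Sketch of the unsupported manual workaround: a dedicated VRF whose
# routing table sends breakout traffic out a specific NIC.
# All names (vrf-breakout, table 100, enp5s0f1, 10.20.0.1) are examples.

ip link add vrf-breakout type vrf table 100    # create the VRF device
ip link set vrf-breakout up
ip link set enp5s0f1 master vrf-breakout       # enslave the breakout NIC
ip route add default via 10.20.0.1 table 100   # default route inside the VRF

# None of this is written to /etc/network/interfaces or the SDN config,
# so it is not persistent and can be overwritten on an SDN reload.
```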

Has anyone got experience with this, or has implemented something similar already?
 
That's a great question for which I don't have a definitive answer. If I were you, I'd spin up a test rig for this. You can run PVE nested. I imagine you have a cluster, but experiments can be run on a single box. If you need to test across the cluster, you'll need more networking.

Create one or more bridges with no physical interfaces - they will become test LANs. Magic up an IP subnet plan for your test.

Create a VM and install a router, e.g. pfSense or whatever you are familiar with. It will use your real LAN as its WAN and be a router for the test LANs - it should NAT the test LANs to its WAN gateway, which is your real LAN gateway. You could do all of this with iptables/nftables rules on your Proxmox host itself (it's a bog-standard Debian Linux box), but I don't recommend that!
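If you did it on the host anyway (again, not recommended), the NAT part is only a few nftables rules; everything here (`vmbr1` as the test-LAN bridge, the subnet) is an assumed example:

```shell
# Masquerade a test LAN (e.g. 192.168.100.0/24 on bridge vmbr1) out of the
# host's default interface. Example values only.
nft add table ip testlab
nft add chain ip testlab postrouting '{ type nat hook postrouting priority srcnat; }'
nft add rule ip testlab postrouting ip saddr 192.168.100.0/24 masquerade

# The host must also forward between the bridges:
sysctl -w net.ipv4.ip_forward=1
```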

(Research the requirements for running nested virtualisation, i.e. PVE within PVE - make sure your gear can do it.)

Create another VM and install PVE into it. It will have its management interface on one of the test LAN bridges. Make sure it can reach the internet. You can create network bridges on it too but bear in mind that VMs on a nested host will need yet another router if they need to get to the internet. It is turtles all the way down!
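To check the nested-virtualisation prerequisite on the outer host before you start (Intel shown; AMD uses `kvm_amd` instead - the VMID is a placeholder):

```shell
# "Y" (or "1") means nested virtualisation is already enabled
cat /sys/module/kvm_intel/parameters/nested

# If it is off, enable it (takes effect after reloading the module or rebooting)
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf

# In the nested PVE VM itself, set the CPU type to "host" so the
# virtualisation extensions are passed through:
qm set <vmid> --cpu host
```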

If you get that lot working, you can answer your own questions.
 
Do you have an example of what you need to do with manual routes, so I can be sure I understand what you need?

On the underlay, EVPN/VXLAN uses the peer address list to establish the VXLAN tunnels, and the VXLAN tunnels work in the default VRF only.

In the overlay, in EVPN, if you define an exit node, the traffic is forwarded from the EVPN zone VRF to the default VRF of the exit node, and then follows the routes of the exit node in the default VRF (this can be the default route, static routes, or BGP-learned routes).
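So steering the breakout on an exit node comes down to what that node's default VRF contains; for example (prefix, next-hop, and interface are placeholder values):

```shell
# On the exit node: whatever the default VRF says wins for breakout traffic.
# A more specific static route beats the default route, so traffic toward an
# internal prefix can leave via a different NIC than the default gateway uses.
ip route add 10.50.0.0/16 via 172.16.1.1 dev enp5s0f1   # example values
```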
 
I was thinking about PBR (policy-based routing)—i.e., forcing SDN traffic out through a different NIC.


Right now, in my cluster the management VLAN/interface has the default route, which means the SDN traffic would break out into my management VLAN—exactly where I don’t want it. I want to steer the SDN traffic explicitly to a specific network card / VLAN. That’s my actual goal.


Alternatively, I’d have to rebuild everything—i.e., move the gateway to a different network card.

What do you think? Is there another way?
 
Are you talking about the VXLAN tunnels themselves, or the BGP peers? They simply use the route to reach the remote peer IPs, so you can add simple routes on your host if needed.

Or do you want PBR specifically for the VXLAN UDP port on a different NIC?
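A minimal example of such host routes, pinning the path toward the remote VTEP/BGP peer IPs to a chosen NIC (all addresses and the interface name are placeholders):

```shell
# Host routes (/32) toward the other nodes' VTEP / BGP peer addresses,
# forced out the NIC that should carry the VXLAN underlay traffic.
ip route add 10.0.0.2/32 via 10.10.10.1 dev enp5s0f1
ip route add 10.0.0.3/32 via 10.10.10.1 dev enp5s0f1

# To make them persistent, add them as "post-up" lines on the interface
# in /etc/network/interfaces instead of running them by hand.
```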
 
Yes correct... I configured my SDN underlay network via ‘Fabrics’ using OSPF, and everything works perfectly. The traffic runs exactly over my dedicated VLAN into my switch fabric.


But my problem now is: when I attach a guest VM to a zone (with SNAT or without SNAT — doesn't matter), the traffic goes out via the default gateway of the respective Proxmox node. In my case the Proxmox nodes act as the anycast gateway for my guests. And it is exactly that guest traffic to the internal network that I want to redirect.

If I do a traceroute from my guest, the current traffic flow is:

Guest ---> Anycast Gateway ---> breakout via the default gateway of the node into my internal network. My goal is:

Guest ---> Anycast Gateway ---> use the NIC which is already used for the VXLAN tunnels


Towards the switch fabric I have a clean ECMP path, so the traffic is allowed to exit on any node — meaning I don't have a dedicated exit node. That means if I manage to steer the traffic using PBR, I'd probably have to configure it manually on every node.
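For completeness, a per-node sketch of what that PBR could look like; the mark value, table number, subnet, and interface are all assumptions, and nothing here is managed or persisted by the Proxmox SDN stack:

```shell
# Policy-based routing sketch (example values throughout).

# 1) Mark traffic coming from the EVPN guest subnets:
nft add table ip pbr
nft add chain ip pbr prerouting '{ type filter hook prerouting priority mangle; }'
nft add rule ip pbr prerouting ip saddr 10.100.0.0/16 meta mark set 0x10

# 2) Send marked traffic through a dedicated table whose default route
#    uses the VXLAN underlay NIC instead of the management NIC:
ip rule add fwmark 0x10 table 200
ip route add default via 172.16.1.1 dev enp5s0f1 table 200

# This would have to be replicated on every node, since with ECMP
# any node can be the exit point.
```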

A little background:

The main reason is that my management NIC, which has the default gateway, is only for management, NTP, syslog, etc., using a 10G MLAG to the switch fabric. Management is a completely separated VRF resource in our network, without access to the internet.

All other VLANs (Ceph, Ceph cluster, HA, guest VLANs, etc.) use a 2x50G MLAG to our switch fabric.
 