Proxmox SDN VXLAN – Internal communication works but no external access from client

Ismo

New Member
Apr 7, 2026
Hello everyone,

I am working on a Proxmox lab using SDN (VXLAN) with a 3-node cluster, and I am facing an issue regarding external access to the SDN network.

Setup:
  • 3 Proxmox nodes (cluster)
  • VXLAN SDN network: 10.200.0.0/24
  • VMs inside the SDN can communicate with each other across nodes (ping works correctly)
Additional configuration:

Initially, there was no internet access in my lab, so I created a simple SDN network (VnetOut) with a gateway.

Now:
  • VMs have internet access via VnetOut
  • VXLAN SDN still works internally (VM ↔ VM)
Current issue:
I cannot access the VXLAN SDN network (10.200.x.x) from my external PC:

  • No ping
  • No RDP
  • No connectivity at all

Physical setup:
  • Unmanaged (offline) switch
  • Connected to all nodes
  • Also connected to my PC
  • Currently used with VLAN 70 (camera network)

What I tried:
  • Adding additional NICs to VMs
  • Using a bridge (vmbr2) connected to the physical interface
  • Assigning an IP in the same subnet (10.200.x.x) on my PC
  • Trying VLAN tagging on SDN (not allowed – error: vm vlans are not allowed on vnet)
Goal:
I want my external client (PC) to connect to a management server VM using the SDN VXLAN network (10.200.x.x).

Constraints:
  • No managed switch (only unmanaged switch available)
  • Limited physical interfaces
  • Prefer not to use NAT or a router VM
  • Want to keep proper separation between SDN, camera network, and management access
Question:
Is it possible to expose or extend a Proxmox SDN VXLAN network to a physical network so that an external client can access it directly?

Or is the correct approach to:

  • use an additional NIC (separate VLAN) for client access,
  • and keep SDN strictly internal?

Any guidance or best practices would be greatly appreciated.

Thank you!
 
Is it possible to expose or extend a Proxmox SDN VXLAN network to a physical network so that an external client can access it directly?
You either need to configure a VXLAN interface on that external device as well and add it to the peer list in SDN, or route between the physical network and the VXLAN network.
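For the first option, a rough sketch of what that could look like on a Linux client (the VNI, UDP port, and node addresses below are placeholders, check your SDN zone settings; the PC's address would also have to be added to the zone's peer list):
Bash:
# create a VXLAN interface matching the SDN zone (VNI 200 assumed, default port 4789)
ip link add vxlan200 type vxlan id 200 dstport 4789 dev eth0
# add each Proxmox node as a unicast flood peer (addresses assumed)
bridge fdb append 00:00:00:00:00:00 dev vxlan200 dst 192.168.70.11
bridge fdb append 00:00:00:00:00:00 dev vxlan200 dst 192.168.70.12
bridge fdb append 00:00:00:00:00:00 dev vxlan200 dst 192.168.70.13
# give the PC an address inside the VXLAN subnet and bring the interface up
ip addr add 10.200.0.100/24 dev vxlan200
ip link set vxlan200 up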
 
The existing reply is correct, but I think what's missing is a concrete example of how to actually make it work.
Your VXLAN network itself is already working correctly (VM ↔ VM across nodes), so the missing part is just how to connect your external PC to it.



A simple way to test this is to add a gateway IP inside the VXLAN subnet on one node.
For example, on one of your Proxmox nodes, assign an IP address on the VNet interface:
Bash:
ip addr add 10.200.0.254/24 dev <your-vnet-interface>
This node will now act as the entry point / gateway for the VXLAN subnet.
It becomes the node that routes traffic between your physical LAN and the VXLAN subnet.
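One prerequisite: the node will only pass packets between vmbr0 and the VNet if IP forwarding is enabled, which is not necessarily the case by default. A quick check on the chosen node:
Bash:
# verify IPv4 forwarding; should print 1
sysctl net.ipv4.ip_forward
# if it prints 0, enable it persistently
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-vxlan-gw.conf
sysctl --system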

Then, on your external PC, do NOT try to put it directly into 10.200.x.x.
Instead, make sure your PC has an IP address in the same subnet as vmbr0.
For example, if vmbr0 is using 192.168.70.0/24, then your PC should also use an address in 192.168.70.0/24.

In other words, your PC should stay on the physical LAN side, not inside the VXLAN subnet.
The Proxmox node becomes the path between those two networks.

Then add a static route on your PC/router so that 10.200.0.0/24 is sent to the chosen Proxmox node on vmbr0.
(Exact command depends on whether your PC is Windows, Linux, or macOS.)
Bash:
# Linux; on Windows (elevated prompt): route add 10.200.0.0 mask 255.255.255.0 <node IP>
ip route add 10.200.0.0/24 via <IP of the chosen Proxmox node on vmbr0>
So from your PC's perspective:
- normal LAN traffic goes as usual
- traffic to 10.200.0.0/24 is sent to that Proxmox node


At that point, traffic flow becomes:
PC → Proxmox node (vmbr0) → VNet interface → VM

And replies come back the same way, provided the VMs use 10.200.0.254 as their default gateway (or at least have a route to your PC's LAN via it).
This should already give you basic connectivity (ping / RDP).
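If it doesn't work right away, tcpdump on the gateway node shows quickly on which leg the packets stop:
Bash:
# while pinging 10.200.0.x from the PC, watch both sides on the node
tcpdump -ni vmbr0 icmp
tcpdump -ni <your-vnet-interface> icmp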


The next practical issue is persistence across reboot.
One simple way is to restore that gateway IP automatically when the VNet interface comes up.
For example:
/etc/network/if-up.d/mslsetup-vxlan-gw
Bash:
#!/bin/bash
# re-apply the gateway IP whenever the VNet interface comes (back) up
case "${IFACE:-}" in
    <your-vnet-interface>)
        ip addr replace 10.200.0.254/24 dev <your-vnet-interface>
        ;;
    *)
        exit 0
        ;;
esac

/etc/network/if-down.d/mslsetup-vxlan-gw
Bash:
#!/bin/bash
# clean up the gateway IP when the VNet interface goes down
case "${IFACE:-}" in
    <your-vnet-interface>)
        ip addr del 10.200.0.254/24 dev <your-vnet-interface> 2>/dev/null
        ;;
    *)
        exit 0
        ;;
esac
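Both scripts must be executable, otherwise ifupdown silently skips them:
Bash:
chmod +x /etc/network/if-up.d/mslsetup-vxlan-gw /etc/network/if-down.d/mslsetup-vxlan-gw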

Of course, once you do this, you now depend on that single node.
If that node goes down, your external access to 10.200.0.0/24 is gone.


That's where the real design question starts:
- Which node should own the VXLAN gateway?
- How do you make it highly available in a 3-node cluster?

A practical approach is to use a floating gateway.
For example, tools like keepalived can elect one active node and move the gateway IP (10.200.0.254) between nodes automatically.
keepalived can also run an external script when a node becomes active, so you can assign the gateway IP dynamically only on the active node.
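As a rough sketch (not a complete config; interface names, router ID, priorities, and the LAN-side VIP are assumptions), you would run something like this on every node, with a different priority per node. Floating a second VIP on the vmbr0 side keeps the PC's static route working after a failover:
Bash:
# minimal keepalived VRRP instance that floats the gateway IP between nodes
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VXLAN_GW {
    state BACKUP
    interface vmbr0              # heartbeat over the physical LAN
    virtual_router_id 70         # must be identical on all nodes
    priority 100                 # highest priority becomes MASTER
    advert_int 1
    virtual_ipaddress {
        10.200.0.254/24 dev <your-vnet-interface>
        192.168.70.254/24 dev vmbr0   # LAN-side VIP for the PC's static route (address assumed)
    }
}
EOF
systemctl enable --now keepalived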

So for this kind of access pattern, you may not need a full EVPN-based fabric or an enterprise-style BGP/ECMP design just to let an external PC reach the VXLAN subnet.


So in summary:
- VXLAN itself is working fine
- you just need a node that acts as a gateway / entry point
- your PC must route traffic to that node
- and in a multi-node setup, you'll likely want failover for that gateway

At that point, this becomes less about “VXLAN configuration” and more about gateway placement and HA design.
This is the kind of setup I ended up automating in MSL Setup, because once you include gateway placement, persistence, and failover, it stops being a simple VXLAN question.

I drew a diagram for this kind of setup here, in case it helps visualize the idea:
https://github.com/zelogx/msl-setup/blob/main/docs/assets/zelogx-MSL-Setup-cluster2.svg
 
