[SOLVED] Inter node SDN networking using EVPN-VXLAN

markc

I have a small 3-node cluster where vmbr0 carries the 192.168.1.0/24 host node IPs. From that LAN I can reach all internal VM/CTs on each of the 3 host nodes. Fine. Now I have 2 OpenWrt CTs on two different nodes, with a pair of Debian CTs also on each of those two nodes. I added a "blank" vmbr1 bridge to each host node; each OpenWrt soft router uses vmbr0 for its WAN (getting a 192.168.1.* IP) and vmbr1 for its LAN (192.168.2.1/24 on one node, 192.168.3.1/24 on the other). From each respective Debian CT I can ping 1.1.1.1, 192.168.1.* and the other 192.168.2(or 3).* Debian guest CTs "behind" its own OpenWrt router. All good and working as expected.
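(For reference, the "blank" vmbr1 is just a Linux bridge with no physical ports and no host-side IP; roughly this in /etc/network/interfaces on each host node, a sketch rather than my exact config:)

Code:
auto vmbr1
iface vmbr1 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        # no host-side IP here; OpenWrt's lan side provides 192.168.2.1 or 192.168.3.1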

Now, using SDN magic, how do I make it so that the Debian CTs behind each of the OpenWrt routers can see the other private network guests behind the other router on the other host node WITHOUT using VPN/wireguard or static routing?
 
If your VMs are in the same subnet on each host, you can simply use a VXLAN zone. It creates tunnels between the hosts (like a full-mesh VPN/WireGuard).
A VXLAN zone is only for a flat L2 network.

If your VMs are in different subnets, you can use an EVPN zone (it's VXLAN plus integrated routing via an FRR daemon deployed on the Proxmox hosts), where each Proxmox host has an anycast gateway for your VMs.
(No need for OpenWrt or another router here.)
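A simple VXLAN zone boils down to a few lines in /etc/pve/sdn/zones.cfg plus a matching vnet in vnets.cfg; roughly like this (zone/vnet names, tag and peer IPs are only an illustration):

Code:
vxlan: vxzone1
        peers 192.168.1.21, 192.168.1.22, 192.168.1.23
        mtu 1450

vnet: vxnet1
        zone vxzone1
        tag 10000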
 
Much appreciated, thank you for your help. I got the VXLAN zone working between 2 CTs on each of the 3 nodes. Excellent. But it doesn't seem possible to access anything outside of the single defined VXLAN network, so it looks like the next step (the EVPN zone) is needed for that.
Okay, so I now have two EVPN vnets set up with two /24 networks and I can ping any CT in the defined network ranges on any of the 3 nodes (really cool!), and also the immediate parent node IP where the CT is hosted, but nothing else. The all-too-brief "EVPN Setup Example" in the docs finishes with...

You need to add reverse routes for the 10.0.1.0/24 and 10.0.2.0/24 networks to node1 and node2 on your external gateway, so that the public network can reply back.

Can you spare a hint as to where and how best to do this part please?
 

OK, so for outside access you need to define an exit node (one of your Proxmox hosts).
The exit node forwards traffic from the EVPN network to the real network (through the exit node host's default gateway).

In the reverse direction, you need to add routes towards your EVPN network. This can be done statically, or through BGP if you have a BGP router in your network.

Here's a static example:

external router: 10.0.0.1
proxmox node1 (exit node) : 10.0.0.10
proxmox node2 : 10.0.0.11

evpn subnet: 192.168.0.0/24 (with a vm 192.168.0.10 on node2 and anycast gateway 192.168.0.1)


from evpn subnet 192.168.0.0/24 to internet
-----------------------------------------------------------------
vm(192.168.0.10)---(192.168.0.1)--->node2-------(0.0.0.0/0)-->node1-----10.0.0.10---------->external router-------> 8.8.8.8

from internet to 192.168.0.10
-------------------------------------------

8.8.8.8------------->external router(10.0.0.1)-------HERE YOU NEED A ROUTE-------------->10.0.0.10 (node1)------------node2------->vm (192.168.0.10)


THE ROUTE: route add -net 192.168.0.0/24 gw 10.0.0.10 on your external router.
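If the external router happens to be a Linux box, the equivalent with current iproute2 syntax (same example addresses as above) would be something like:

Code:
ip route add 192.168.0.0/24 via 10.0.0.10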
 
Thanks yet again. I removed my initial tests and started from scratch following your guide (plus a few other Google hints), and I now have something that works as I expect, BUT only for ICMP. I can ping east-west and north-south (well, north at least) successfully, but dig (UDP) and curl (TCP) outside the vnet do not seem to get replies, while pings everywhere do work. I've removed all firewall settings (datacenter, node and CT). Can you think of something really obvious and simple that I've missed? So close! I set the MTU on the exit node's vmbr0 to 1450 just in case; no change.
Code:
~ cat /etc/pve/sdn/*
evpn: evpnctl
        asn 65000
        peers 192.168.1.21, 192.168.1.23, 192.168.1.24

subnet: evpn1-192.168.1.0-24
        vnet evnet1
        gateway 192.168.1.1
        snat 1

vnet: evnet1
        zone evpn1
        tag 20000

evpn: evpn1
        controller evpnctl
        vrf-vxlan 10000
        advertise-subnets 1
        exitnodes pve3,pve4,pve1
        exitnodes-local-routing 1
        exitnodes-primary pve1
        ipam pve
        mac BC:24:11:BE:07:61
The routing table on the primary exit node...
Code:
~ ip r
default via 192.168.1.1 dev vmbr0 proto kernel onlink
10.1.1.0/24 nhid 6 dev vnet1 proto bgp metric 20  
10.1.1.3 nhid 22 via 192.168.1.23 dev vrfbr_evpn1 proto bgp metric 20 onlink  
192.168.10.0/24 dev vmbr1 proto kernel scope link src 192.168.1.21
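(In case it helps anyone hitting the same symptom later: one quick check for whether path MTU is what breaks TCP/UDP while small ICMP pings still pass is to ping from a CT inside the vnet with the don't-fragment bit set; the destination and size here are just an example.)

Code:
# from a CT inside the vnet; -M do sets the don't-fragment bit
ping -M do -s 1400 1.1.1.1
# if this fails while a plain 'ping 1.1.1.1' works, reduce -s until it
# passes and size the vnet/zone MTU to match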
 
Thanks to spirit, I managed to get this EVPN-VXLAN system to work so here is a how-I-did-it.

Assume a router with a LAN IP of 192.168.1.1 and 3 Proxmox VE host nodes: pve1 (192.168.1.21), pve2 (192.168.1.22) and pve3 (192.168.1.23). The goal is a number of VM/CTs in an internal 10.1.1.0/24 network, all able to see each other across the PVE host nodes as well as all upstream 192.168.1.0/24 LAN hosts and the general internet.

1. Go to Datacenter > SDN > Options and add an 'evpn' Controller with ID: evpnctl, ASN #: 65000 and Peers: 192.168.1.21, 192.168.1.22, 192.168.1.23

2. Go to Zones and add an EVPN zone with ID: evpn1, Controller: evpnctl, VRF-VXLAN Tag: 10000, Exit Nodes: pve1 pve2 pve3, Primary Exit Node: pve1, MTU: 1450

3. Go to VNets and create one with Name: vnet1, Zone: evpn1, Tag: 20000

4. Over in Subnets to the right, create a new one with Subnet: 10.1.1.0/24, Gateway: 10.1.1.1, SNAT ticked

5. That's mostly it, so go back to SDN and click Apply (a sketch of what the resulting /etc/pve/sdn config should look like follows this list)

6. Create a Debian LXC container and set Network > Edit net0 to Bridge: select vnet1, IP Address: 10.1.1.2/24, Gateway: 10.1.1.1.

7. Shut it down, then clone 2 more CTs and move them to the other two host nodes with IPs of 10.1.1.3/24 and 10.1.1.4/24, plus Gateway: 10.1.1.1 for both (I called my test CTs ctd1, ctd2 and ctd3, with 200 MB RAM and 2 GB storage for super fast cloning and backup)

8. Go to your 192.168.1.1 router and create a static route with Network/Host: 10.1.1.0, Netmask: 255.255.255.0, Gateway: 192.168.1.21, Metric: 1, Interface: LAN
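After clicking Apply in step 5, the SDN config on each node should end up looking roughly like this (a sketch built from the values above; extra lines such as ipam, mac or advertise-subnets may appear depending on your options):

Code:
~ cat /etc/pve/sdn/*
evpn: evpnctl
        asn 65000
        peers 192.168.1.21, 192.168.1.22, 192.168.1.23

subnet: evpn1-10.1.1.0-24
        vnet vnet1
        gateway 10.1.1.1
        snat 1

vnet: vnet1
        zone evpn1
        tag 20000

evpn: evpn1
        controller evpnctl
        vrf-vxlan 10000
        exitnodes pve1,pve2,pve3
        exitnodes-primary pve1
        mtu 1450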

That's it. Maybe reboot the primary pve1 host node (that's when it started working for me, maybe coincidence) and please reply here with any success or corrections. Obviously change the IPs and hostnames to suit your cluster layout. Here are some commands I found useful...

Code:
iptables -t nat -L -vnx
tcpdump -nvi vmbr0 host 1.1.1.1
cat /etc/network/interfaces
cat /etc/frr/frr.conf
cat /etc/pve/sdn/*
vtysh -c "show bgp summary"
ip nei (and of course ip a, ip r)

TODO: dynamic BGP exit nodes to get around any single primary exit node downtime, and DHCP allocation of 10.1.1.0/24 IPs for VM/CTs using Bridge: vnet1. If anyone could add a step-by-step guide on how to do that within this network scenario it would be hugely appreciated.
 