[SOLVED] Inter node SDN networking using EVPN-VXLAN

markc

I have a small 3-node cluster with vmbr0 carrying the host node IPs from 192.168.1.0/24. From that LAN I can reach all internal VM/CTs on each of the 3 host nodes. Fine. Now I have 2 OpenWrt CTs on two different nodes, with a pair of Debian CTs also on each of those two nodes. I added a "blank" vmbr1 bridge to each host node; each OpenWrt soft router uses vmbr0 for its WAN (getting a 192.168.1.* IP) and vmbr1 for its LAN (either 192.168.2.1/24 or 192.168.3.1/24). From each respective Debian CT I can ping 1.1.1.1, 192.168.1.*, and the other 192.168.2(or 3).* Debian guest CTs "behind" its respective OpenWrt router. All good and working as expected.

Now, using SDN magic, how do I make it so that the Debian CTs behind each OpenWrt router can see the private-network guests behind the other router on the other host node, WITHOUT using a VPN/WireGuard or static routing?
 
If your VMs are in the same subnet on each host, you can simply use a VXLAN zone. It creates tunnels between the hosts (like a full-mesh VPN/WireGuard).
A VXLAN zone is only for a flat L2 network.

If your VMs are in different subnets, you can use an EVPN zone (it's VXLAN plus integrated routing with an FRR daemon deployed on the Proxmox hosts), where each Proxmox host has an anycast gateway for your VMs.
(No need to use OpenWrt or another router here.)
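For reference, a plain VXLAN zone plus a vnet in /etc/pve/sdn/* ends up looking roughly like this (a sketch only; the IDs, tag and peer IPs are placeholders to adapt):
Code:
vxlan: vxzone1
        peers 192.168.1.21,192.168.1.22,192.168.1.23
        mtu 1450

vnet: vxnet1
        zone vxzone1
        tag 100000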
 
If your VMs are in the same subnet on each host, you can simply use a VXLAN zone. It creates tunnels between the hosts (like a full-mesh VPN/WireGuard).
A VXLAN zone is only for a flat L2 network.
Much appreciated. I got this to work between 2x CTs on each of the 3 nodes. Excellent. But it doesn't seem possible to access anything outside of the single defined VXLAN network; it seems the next step is needed for that.
If your VMs are in different subnets, you can use an EVPN zone (it's VXLAN plus integrated routing with an FRR daemon deployed on the Proxmox hosts), where each Proxmox host has an anycast gateway for your VMs.
Okay, so I now have two EVPN vnets set up with two /24 networks and I can ping any CT in the defined network ranges on any of the 3 nodes (really cool!), and also the immediate parent node IP where the CT is hosted, but nothing else. The all-too-brief "EVPN Setup Example" finishes with...

You need to add reverse routes for the 10.0.1.0/24 and 10.0.2.0/24 networks to node1 and node2 on your external gateway, so that the public network can reply back.

Can you spare a hint as to where and how best to do this part please?
 
OK, so for outside access you need to define an exit-node (one of your Proxmox hosts).
The exit-node forwards traffic from the EVPN network to the real network (through the exit-node host's default gateway).

In the reverse direction, you need to add routes to your EVPN network. This can be done statically, or through BGP if you have a BGP router in your network.

Here is a static example:

external router: 10.0.0.1
proxmox node1 (exit node) : 10.0.0.10
proxmox node2 : 10.0.0.11

evpn subnet: 192.168.0.0/24 (with a vm 192.168.0.10 on node2 and anycast gateway 192.168.0.1)


from evpn subnet 192.168.0.0/24 to internet
-----------------------------------------------------------------
vm(192.168.0.10)---(192.168.0.1)--->node2-------(0.0.0.0/0)-->node1-----10.0.0.10---------->external router-------> 8.8.8.8

from internet to 192.168.0.10
-------------------------------------------

8.8.8.8------------->external router(10.0.0.1)-------HERE YOU NEED A ROUTE-------------->10.0.0.10 (node1)------------node2------->vm (192.168.0.10)


THE ROUTE: route add 192.168.0.0/24 gw 10.0.0.10 on your external router.
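If the external router is a Linux box, that reverse route is simply (a sketch; adjust the subnet and gateway to your own):
Code:
# iproute2: route the EVPN subnet via the exit-node
ip route add 192.168.0.0/24 via 10.0.0.10
# legacy net-tools equivalent of the command above
route add -net 192.168.0.0/24 gw 10.0.0.10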
 
Thanks yet again. I removed my initial tests and started from scratch following your guide, with a few other Google hints, and I have something that is working as I expect BUT only for ICMP. I can ping east-west and north-south (well, north at least) successfully, but dig (UDP) and curl (TCP) outside the vnet do not seem to return, while pings everywhere do work. I've removed all firewall settings (datacenter, node, and CT) and wonder if you can think of something really obvious and simple that I've missed? So close! I set the MTU on the exit node vmbr0 to 1450 just in case, no change (MTU test commands below the routing table).
Code:
~ cat /etc/pve/sdn/*
evpn: evpnctl
        asn 65000
        peers 192.168.1.21, 192.168.1.23, 192.168.1.24

subnet: evpn1-192.168.1.0-24
        vnet evnet1
        gateway 192.168.1.1
        snat 1

vnet: evnet1
        zone evpn1
        tag 20000

evpn: evpn1
        controller evpnctl
        vrf-vxlan 10000
        advertise-subnets 1
        exitnodes pve3,pve4,pve1
        exitnodes-local-routing 1
        exitnodes-primary pve1
        ipam pve
        mac BC:24:11:BE:07:61
The routing table on the primary exit node...
Code:
~ ip r
default via 192.168.1.1 dev vmbr0 proto kernel onlink
10.1.1.0/24 nhid 6 dev vnet1 proto bgp metric 20  
10.1.1.3 nhid 22 via 192.168.1.23 dev vrfbr_evpn1 proto bgp metric 20 onlink  
192.168.10.0/24 dev vmbr1 proto kernel scope link src 192.168.1.21
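Since ICMP works but TCP/UDP stall, one classic suspect is path MTU across the VXLAN tunnel; a quick check from inside a CT (a sketch only; the sizes assume the 1450 MTU above, and the target IPs are from my setup):
Code:
# 1422 bytes of data + 28 bytes of ICMP/IP headers = 1450; raise/lower -s to find the largest size that passes
ping -M do -s 1422 10.1.1.3
# tracepath reports the discovered path MTU hop by hop
tracepath 1.1.1.1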
 
Thanks to spirit, I managed to get this EVPN-VXLAN system to work so here is a how-I-did-it.

Assumptions: a router with a LAN IP of 192.168.1.1 and 3x Proxmox VE host nodes, pve1 (192.168.1.21), pve2 (192.168.1.22) and pve3 (192.168.1.23), with a number of VM/CTs in an internal 10.1.1.0/24 network, all able to see each other across the PVE host nodes as well as all upstream 192.168.1.0/24 LAN hosts and the general internet.

1. Go to Datacenter > SDN > Options and add an 'evpn' controller with ID: evpnctl, ASN #: 65000 and Peers: 192.168.1.21, 192.168.1.22, 192.168.1.23

2. Go to Zones and add an EVPN item with ID: evpn1, Controller: select evpnctl, VRF-VXLAN Tag: 10000, Exit Nodes: select pve1 pve2 pve3, Primary Exit Node: select pve1, MTU: 1450

3. Go to VNets and create one with Name: vnet1, Zone: evpn1, Tag: 20000

4. Over in Subnets to the right, create a new one with Subnet: 10.1.1.0/24, Gateway: 10.1.1.1, SNAT ticked

5. That's mostly it, so go back to SDN and click Apply (a sketch of the resulting /etc/pve/sdn/* entries follows step 8)

6. Create a Debian LXC container and set Network > Edit net0 to Bridge: select vnet1, IP Address: 10.1.1.2/24, Gateway: 10.1.1.1

7. Shut down and clone 2 more CTs, move them to the other two host nodes and give them IPs of 10.1.1.3/24 and 10.1.1.4/24, plus Gateway: 10.1.1.1 for both (I called my test CTs ctd1, ctd2 and ctd3, with 200 MB RAM and 2 GB storage for super fast cloning and backup)

8. Go to your 192.168.1.1 router and create a static route with Network/Host: 10.1.1.0, Netmask: 255.255.255.0, Gateway: 192.168.1.21, Metric: 1, Interface: LAN
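After clicking Apply, the generated /etc/pve/sdn/* entries should look roughly like this (a sketch only; the field names follow the config dump I posted earlier, and your exact output may differ):
Code:
evpn: evpnctl
        asn 65000
        peers 192.168.1.21, 192.168.1.22, 192.168.1.23

evpn: evpn1
        controller evpnctl
        vrf-vxlan 10000
        exitnodes pve1,pve2,pve3
        exitnodes-primary pve1
        mtu 1450

vnet: vnet1
        zone evpn1
        tag 20000

subnet: evpn1-10.1.1.0-24
        vnet vnet1
        gateway 10.1.1.1
        snat 1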

That's it. Maybe reboot the primary pve1 host node (that's when it started working for me, maybe coincidence) and please reply here with any success or corrections. Obviously change the IPs and hostnames to suit your cluster layout. Here are some commands I found useful...

Code:
iptables -t nat -L -vnx
tcpdump -nvi vmbr0 host 1.1.1.1
cat /etc/network/interfaces
cat /etc/frr/frr.conf
cat /etc/pve/sdn/*
vtysh -c "show bgp summary"
ip nei (and of course ip a, ip r)

TODO: dynamic BGP exit nodes to get around any single primary exit node being down, and DHCP allocation of 10.1.1.0/24 IPs for VM/CTs using Bridge: vnet1, so if anyone could add a step-by-step how-to for that within this network scenario it would be hugely appreciated.
 
Is there any special reason for choosing 10000 as the VRF-VXLAN tag, and the same question for the vnet tag, or are those totally arbitrary?
 
By the way, thank you, this setup works for me, except that VMs/CTs on different vnets can still ping each other. Is there a way to restrict this behaviour without using a firewall?
 
Glad to hear that how-I-did-it worked for someone else. I'm still trying to figure out BGP exit nodes and DHCP assignments and haven't even tried multiple vnets yet, so if I come across some way to confine traffic in this EVPN-VXLAN context I'll report back.
 
Thank you very much, you helped me a lot. I was having the same problem; with your HOWTO I was able to let my VMs talk.
 
I found something weird in this setup: if I put CTs/VMs on VLAN tag 2, the CT/VM can't reach anywhere. VLAN tags > 2 work just fine.
 
Maybe you have used VXLAN ID 2 for the vrf-vxlan in the EVPN zone? (There is currently no protection against this.)

Nope, I use the tag 10000. Is there something I can use to debug or find out what's using VLAN tag 2?
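Standard commands that should show which VLAN tags and VXLAN VNIs are actually in use on a node (nothing Proxmox-specific, just iproute2/FRR; a starting point rather than a definitive answer):
Code:
# VLAN membership of every bridge port on the node
bridge vlan show
# VXLAN VNIs known to the FRR EVPN daemon
vtysh -c "show evpn vni"
# tags configured in the SDN definitions themselves
grep -rn tag /etc/pve/sdn/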
 
I did it similarly, with the difference that I also added three BGP controllers to communicate with my router (which also assigns each router a different ASN for eBGP)...
Unfortunately, this disrupted the internal communication of the networks between the different hosts. I think the reason was that no EVPN sessions, only normal BGP sessions, were established to the Proxmox hosts.

Have you got any further with the BGP exit nodes? What about the network separation between different vnets? Any news?

I used these commands a lot for debugging, maybe someone else is looking for these...
Code:
vtysh -c 'show bgp sum'
vtysh -c 'show bgp l2vpn evpn sum'
vtysh -c 'show bgp l2vpn evpn'
vtysh -c 'show evpn vni'

EDIT:
Just adding one BGP controller on e.g. pve1 seems to break the whole setup (see attached files). At least now I get the right neighbor peering types (EVPN instead of regular BGP; before, I seriously had all three peers, vyos, pve2 and pve3, in the peer-group BGP).
 

I have this exact setup running, and I have bought a FortiGate 60D firewall which I intend to use instead of the switch. It would be great if we could get the EVPN Controller + BGP setup steps. Thanks spirit and all you guys for the hard work and contribution.
 
Thanks, spirit, for your hard work and contribution.

Can you also provide the steps for BGP? I have a FortiGate 60D in which I have disabled all firewall features and am using it as an 8-port router.

I have created 3 BGP controllers for the 3 nodes; do they need to be on 3 different AS numbers?

Do I include the other 2 nodes and the FortiGate 60D as members?

What do I configure on the FortiGate 60D?

Kindly advise.

Thank you.
 
Technically, you need to have a route in your FortiGate to the EVPN subnets, with the exit-nodes as gateway.
This route can be static or received through BGP (in that case, you need to define BGP controllers on the Proxmox exit-nodes with the FortiGate as peer, to announce the EVPN subnets through BGP).


You can use the same ASN for your BGP controllers (use the same ASN as your FortiGate). Only set the FortiGate's IPs as peers.

For the FortiGate config I don't know how it works; you need to define a BGP session with the Proxmox exit-nodes as peers.


For example, if you have this setup:


vm--192.168.0.10--------->192.168.0.1 proxmox node (exit-node) 10.0.0.1 ----------------------->10.0.0.254 fortigate



On your FortiGate, you need to have a route like: route add 192.168.0.0/24 gw 10.0.0.1
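I can't confirm the exact FortiGate syntax, but on FortiOS the static route (or the BGP peering that replaces it) should look roughly like this (a sketch only; the ASN, IPs and interface name are placeholders taken from the example above, so check the Fortinet docs):
Code:
# static route to the EVPN subnet via the Proxmox exit-node
config router static
    edit 0
        set dst 192.168.0.0 255.255.255.0
        set gateway 10.0.0.1
        set device "internal"
    next
end

# or receive the route via BGP instead, peering with the exit-node (same ASN as the Proxmox controllers)
config router bgp
    set as 65000
    config neighbor
        edit "10.0.0.1"
            set remote-as 65000
        next
    end
end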
 
Thanks, spirit...

1) I have three nodes and 1 EVPN controller with ASN 65000, so do I create 3 BGP controllers with ASN 65000, or do I use a different ASN than the EVPN controller, like 65001?

2) In the BGP controller, do I only mention 1 neighbour (the FortiGate), or do I mention the other 2 nodes' IP addresses also? In the FortiGate I mention all three nodes.

3) As I will use BGP, I will not need to create the static route, correct? I don't want to use ECMP.

4) If I live migrate VMs between the nodes, will the FortiGate know to which node the traffic needs to be routed?

Do I need to set up a BGP route reflector? I can do that in the FortiGate, if required.

Once BGP is set up I don't need to set a Primary Exit Node, is that correct? I plan to have a 5-node cluster for my work and can't saturate the 1 Gb link of 1 node; that's why I am going the BGP route.

I will try this on the weekend, as my lab network always breaks when I try to set up BGP.

Thank you so much :)
 
Thanks, spirit...

1) I have three nodes and 1 EVPN controller with ASN 65000, so do I create 3 BGP controllers with ASN 65000, or do I use a different ASN than the EVPN controller, like 65001?
You can use the same ASN for both the EVPN and BGP controllers.

2) In the BGP controller, do I only mention 1 neighbour (the FortiGate), or do I mention the other 2 nodes' IP addresses also? In the FortiGate I mention all three nodes.
Put all your exit-nodes.

3) As I will use BGP, I will not need to create the static route, correct? I don't want to use ECMP.
Indeed, you don't need a static route if you use BGP.

4) If I live migrate VMs between the nodes, will the FortiGate know to which node the traffic needs to be routed?
The route is announced by all exit-nodes (even if the VM is not on that node), if you define an exit-node on each.

So in all cases you'll have ECMP. If a packet arrives at an exit-node where the VM is not present, it will be rerouted again inside the EVPN.

The only way to have direct routing from your FortiGate to the EVPN is to have EVPN support inside your FortiGate (then the FortiGate is the EVPN exit-node).

Do I need to set up a BGP route reflector? I can do that in the FortiGate, if required.
With 3 nodes that seems like overkill; just keep full-mesh peering with all node IPs as EVPN peers.
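To verify the peering and see what each exit-node is announcing, the usual FRR commands on the Proxmox nodes should be enough (run as root; standard vtysh show commands):
Code:
vtysh -c "show bgp summary"
vtysh -c "show bgp ipv4 unicast"
vtysh -c "show ip route bgp"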
 
