EVPN SDN Multi-tenancy: Something similar to Inter-AS Option A?

chayter

New Member
Mar 13, 2024
Greetings all,

I am planning a sizable Proxmox EVPN SDN deployment in my lab, and I have a question regarding its implementation, particularly around multi-tenancy.

Some background about my lab before I get started:
  • Existing Cisco EVPN/VXLAN+BGP Fabric
  • OPNsense firewalls are used for inter-tenant traffic filtering. These peer with the fabric border leafs via BGP. (Realistically these could be any firewall that supports BGP; the key is that they are bare metal.)
  • Intra-tenant firewalling is done with the Proxmox firewall and traditional VLANs.
What I would like to do is move from VLANs to Proxmox EVPN and peer the Proxmox nodes with the fabric, while maintaining multi-tenancy up to the existing fabric edge.

The idea I had was to do something similar to Inter-AS Option A, where each EVPN tenant on Proxmox peers with its respective VRF via BGP on the leafs. This would keep the configuration domains of Proxmox and the existing network separate. I looked into using the BGP controller for this, but it appears to support only one peering for the entire SDN deployment. Would it be possible, with some modification of the FRR config, to support per-VRF peering? I would imagine this requires configuring sub-interfaces on the hypervisors for each L3VNI. The goal would be to inject VM host routes into the fabric to support VM mobility, allowing for a completely routed topology.

Code:
[Proxmox SDN - Blue EVPN Tenant] ---> EBGP ---> [Blue VRF L3VNI] ---> EBGP ---> Inter-Tenant Firewall
[Proxmox SDN - Red EVPN Tenant]  ---> EBGP ---> [Red VRF L3VNI]  ---> EBGP ---> Inter-Tenant Firewall
[Proxmox Management Interface]   ---> [Management VRF L3VNI]     ---> Management / NOC Firewalls

As I understand it, we can modify /etc/frr/frr.conf.local; however, I am concerned about stability during updates/upgrades.

Am I on the right track here, and is this a feature that may be considered in the future? I'd be happy to contribute to testing and/or documentation if I can get this working.

Thank you for bringing EVPN to Proxmox!

Chris
 
In the Proxmox EVPN SDN, each zone is a different VRF.
EVPN peering needs to be done in the default VRF.
Extra BGP peerings could be done inside a VRF, but that is not currently implemented. (We do it in the default VRF and leak routes from the tenant EVPN VRFs.)
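
For reference, the kind of leaking the current implementation does can be expressed in FRR roughly like this (a sketch only; AS 65000 and the VRF name vrf_zone1 are placeholders, and the generated config may differ):

```
router bgp 65000
 address-family ipv4 unicast
  ! pull tenant routes from the zone VRF into the default VRF
  import vrf vrf_zone1
 exit-address-family
```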


Regarding /etc/frr/frr.conf.local: you can create this file and add only the new lines you need; it will be parsed and merged with the generated FRR configuration.

for example, something like this should be enough:

Code:
router bgp .... vrf vrf_zone1
    neighbor x.x.x.x ...
    address-family l2vpn evpn
       advertise ipv4 unicast
    exit-address-family

Note that on the Proxmox host you need an IP on an interface in this VRF to be able to peer from the VRF.

/etc/network/interfaces

Code:
auto eth0
iface eth0 inet static
     address ...
     vrf vrf_zone1

(If you don't want a dedicated physical interface per VRF, I don't know whether you could use a loopback or dummy interface, or maybe a VLAN interface...)
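
If a VLAN interface per VRF works out, an /etc/network/interfaces fragment could look like this (a sketch; eth0, VLAN 100, and the address are placeholders):

```
# hypothetical VLAN sub-interface enslaved to the zone VRF
auto eth0.100
iface eth0.100 inet static
    address 10.100.0.2/24
    vrf vrf_zone1
```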
 
Hey Spirit,

That is what I was looking for! Essentially the same as the current implementation, but without leaking to the default VRF when exporting routes via BGP (on exit nodes). This maintains multi-tenancy into the existing fabric.

An IP on a VLAN per VRF should be sufficient, though a loopback in each VRF gives me some ideas. I would need static routes on the leafs, though, to inject reachability within the fabric.
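
On a Cisco leaf, reachability to such a per-VRF loopback could be injected with a static route along these lines (a sketch with made-up names and addresses; adjust to the actual fabric):

```
vrf context Blue
  ! hypothetical /32 for the Proxmox node's loopback,
  ! reachable via its VRF interface address
  ip route 10.255.0.11/32 10.100.0.2
```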

Going to spin this up in GNS3 later today and give it a test.

Do I still need to set up exit nodes for this to work, or should I avoid doing that? I assume that is what sets up the route-leaking configuration, which I am looking to avoid.

Thanks again,

Chris
 
The exit node is only a node announcing a default 0.0.0.0 EVPN type-5 route. (So every node forwards outside traffic to the exit node within EVPN, and the exit node routes between the EVPN network and the default VRF through classic BGP.)

If you peer all nodes directly in BGP from their VRFs (announcing each VM's /32 IP), you don't need an exit node.
 
Hi Spirit,
Does this work yet? Basically, I'm trying to build an AWS-style VPC environment per tenant, but I need to break out into the real world. When I use the BGP controller and connect to an upstream BGP speaker, it munges all the tenant routes together, which I don't want.

I'm happy to connect my 'Coke' tenant VyOS instance into the EVPN VRF for Coke, but I'm not sure how the routes can be shared between the two environments.

Static routes would probably be fine if that works in lieu of full BGP reachability.

Any tips would be greatly appreciated.

Thanks

JM
 
If you have a VyOS router, the best way is to peer it in EVPN with the other Proxmox nodes (one peering per VRF; in Proxmox, each zone is a VRF).

If your Coke tenant is a Layer 2 segment behind VyOS, VyOS should announce its IPs/MACs inside the EVPN zone.
 
Hi Spirit,

OK, this sounds like it probably won't scale to the level I want, assuming I want to build my multi-tenancy out to thousands of EVPN-based tenants. The only other option I can see would be to use an EVPN zone per tenant with Layer 2 segments, with my VyOS router having an interface in each segment and acting as the default gateway; but then there would be a performance impact, as traffic would have to hairpin through the VyOS virtual router for east-west traffic.

Assuming each tenant has a transit network segment, is there no way to specify a static route within the EVPN tenant to another device?
 
OK, I've managed to get static routes working in the VRF, pointing to the next hop of a device in one of my network segments, so I think I'll go with that option. Thanks, JM
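
For anyone finding this later, a static route inside a zone VRF can be expressed in FRR like this (a sketch; the VRF name, prefix, and next hop are placeholders for your own values):

```
vrf vrf_coke
 ! hypothetical default route toward a firewall sitting in one of the zone's subnets
 ip route 0.0.0.0/0 10.100.0.254
exit-vrf
```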