[SOLVED] Management plan vs VM on overlay

cyruspy

Renowned Member
Jul 2, 2013
Hello!

I'm trying to integrate PVE with a Keycloak server via OIDC. The catch is that the Keycloak server runs as a VM on top of the same cluster, as a client of an EVPN/VXLAN VNI/subnet.

Even though the anycast GW is attached to a VRF, traffic originating from the management plane seems to exit through the directly attached interface, when it should be isolated.

Am I missing anything here?
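
For reference, a quick way to confirm which interfaces are enslaved to which VRF (the VRF name here is one of mine, from the outputs further down; adjust for your setup):

Bash:
# List all VRF devices and their state
ip -br link show type vrf
# List the interfaces enslaved to a given VRF
ip link show master vrf_L01VPN01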

Topology:

PVE management <--VLAN--> switch <--VLAN--> vFW <--BGP--> PVE exit nodes

Instead of going all the way through the peering point, the PVE node takes a shortcut via the local interface.

tcpdump shows the "gateway" trying to reach the web servers.
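
Roughly how I watched that (the interface is my overlay interface and 443 is just the port I was chasing; substitute your own):

Bash:
# Confirm the management-plane traffic is leaving via the overlay interface
tcpdump -ni ol111001 'tcp port 443'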
 
Just checked:

net.ipv4.tcp_l3mdev_accept = 0

With that at 0, local processes bound to the default/global VRF should not be handling requests that arrive through VRF-enslaved interfaces.
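
As a sanity check, iproute2 can run a command inside a VRF context, which makes it easy to compare behaviour from the default VRF versus from inside one (VRF name and target address are from my setup):

Bash:
# Run a one-off lookup from inside the VRF context
ip vrf exec vrf_SDCVPN01 ip route get 192.168.107.10
# Show which VRF, if any, the current shell is bound to
ip vrf identify $$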

1- Want: FRR working with VRFs (see the vtysh checks below)
2- Don't want: pveproxy going out through a VRF interface
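
For the FRR side, these vtysh commands show whether the VRF-scoped BGP instances are up and what FRR is installing into the kernel (VRF name again from my setup):

Bash:
# BGP instances per VRF
vtysh -c 'show bgp vrfs'
vtysh -c 'show bgp vrf vrf_L01VPN01 ipv4 unicast summary'
# Routes FRR pushed into that VRF's kernel table
vtysh -c 'show ip route vrf vrf_L01VPN01'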
 
Anybody?

Today I found something odd. With 2 subnets in the overlay, in the same VRF:

VM1 on subnet1 can reach the Proxmox web portal and SSH on host1 (routing works as it should)
- Traffic is properly routed when the VM initiates the connection?

host1 cannot connect to VM2 running a web service on subnet2
- Traffic is not properly routed when the host initiates it?

Both subnet1 and subnet2 are served by the PVE overlay (EVPN/VXLAN).
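
One way to narrow it down is to try the same connection from the default VRF and from inside the VRF (the address stands in for VM2's web service; it falls in the subnet of ol107001 below):

Bash:
# From the default VRF, i.e. how the host's own daemons reach out
curl -m 3 -sk https://192.168.107.10/ || echo 'default VRF: no reply'
# From inside the VRF, following the EVPN routes
ip vrf exec vrf_SDCVPN01 curl -m 3 -sk https://192.168.107.10/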
 
Bash:
root@pve-01:~/bin# ip addr show dev ol111001
191: ol111001: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vrf_L01VPN01 state UP group default qlen 1000
    link/ether bc:24:11:e6:34:58 brd ff:ff:ff:ff:ff:ff
    inet 192.168.111.1/25 scope global ol111001
       valid_lft forever preferred_lft forever
    inet6 fe80::be24:11ff:fee6:3458/64 scope link
       valid_lft forever preferred_lft forever

root@pve-01:~/bin# ip addr show dev ol107001
63: ol107001: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vrf_SDCVPN01 state UP group default qlen 1000
    link/ether bc:24:11:a9:f9:46 brd ff:ff:ff:ff:ff:ff
    inet 192.168.107.1/27 scope global ol107001
       valid_lft forever preferred_lft forever
    inet6 fe80::be24:11ff:fea9:f946/64 scope link
       valid_lft forever preferred_lft forever

root@dev-sdc-pve-01:~/bin# ip route get 192.168.111.10
192.168.111.10 dev ol111001 src 192.168.111.1 uid 0
    cache

root@dev-sdc-pve-01:~/bin# ip route get 192.168.107.10
192.168.107.10 dev ol107001 src 192.168.107.1 uid 0
    cache

Bash:
root@pve-01:~/bin# sysctl net.ipv4 | grep l3
net.ipv4.raw_l3mdev_accept = 0
net.ipv4.tcp_l3mdev_accept = 0
net.ipv4.udp_l3mdev_accept = 0

Bash:
root@pve-01:~/bin# ip rule
1000:   from all lookup [l3mdev-table]
32765:  from all lookup local
32766:  from all lookup main
32767:  from all lookup default
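
That rule at priority 1000 is what steers lookups for VRF-enslaved interfaces and sockets into the VRF's own table. The per-VRF tables can be inspected directly (VRF name from above):

Bash:
# Show the routes living in a VRF's own table
ip route show vrf vrf_L01VPN01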

Configuration:

"ip r" output: https://pastebin.com/Q9sF8uMv
"frr.conf.local" content: https://pastebin.com/KAqNqKB1
rendered "frr.conf": https://pastebin.com/gUpYnuc0
/etc/pve/sdn/*: https://pastebin.com/U7yjNe5N
"/etc/network/interfaces" for pve-01: https://pastebin.com/smEfYUJw
 
Fixed!

The trick was removing all of the exit nodes from the EVPN zone, since the exit-node role leaks the overlay routes into the global/default table.

Paired with manually defined BGP instances plus a VRF definition for the interface, traffic now flows as expected.
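
To verify the leak is gone, the overlay prefixes should no longer show up in the default table, only inside their VRFs (prefixes are the ones from my setup):

Bash:
# Should print nothing anymore in the default/global table
ip route show | grep -E '192\.168\.(107|111)\.'
# The prefixes still live in their VRF tables
ip route show vrf vrf_L01VPN01
ip route show vrf vrf_SDCVPN01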