Greetings,
Apologies for reviving an old topic, but I would like to share a solution to a similar problem.
We had a similar issue with a five-server Proxmox cluster communicating over a BGP-EVPN fabric: 20 Gbit/s links were reaching only ~4-5 Gbit/s in iperf3 through a VXLAN tunnel between two servers. The cause was that each Proxmox node was not learning the MAC addresses of the other nodes, so traffic pushed into the VXLAN tunnel was flooded to every node.
To debug this, inspect the bridge forwarding database of the VXLAN interface: bridge fdb | grep [vxlan interface]
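A quick sketch of what to look for (the interface name vxlan666 matches the configs below; the MAC and VTEP addresses here are just placeholders):

```shell
# On a live node you would run:
#   bridge fdb show | grep vxlan666
#
# If the node only has all-zero flood entries, every frame is
# head-end replicated to all remote VTEPs. Once EVPN type-2 routes
# are learned, remote host MACs show up as additional entries.
# Sample output (hypothetical):
fdb='00:00:00:00:00:00 dev vxlan666 dst 10.0.0.2 self permanent
aa:bb:cc:dd:ee:ff dev vxlan666 dst 10.0.0.2 self extern_learn'

# Count flood entries vs. learned remote MACs:
echo "$fdb" | grep -c '^00:00:00:00:00:00'   # flood entries
echo "$fdb" | grep -vc '^00:00:00:00:00:00'  # learned MACs
```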
To fix this, add advertise-svi-ip under address-family l2vpn evpn in the FRR BGP configuration.
Here are example configs:
/etc/network/interfaces:
Code:
auto br_ceph
iface br_ceph inet manual
    address [SVI IP]
    bridge-stp off
    bridge-ports none
    bridge-fd 0

auto vxlan666
iface vxlan666 inet manual
    pre-up ip link add vxlan666 type vxlan id 666 dstport 4789 local [LOOPBACK IP] nolearning
    pre-up ip link set dev vxlan666 master br_ceph
    pre-up ip link set up dev vxlan666
    post-up ip link set mtu 9000 dev vxlan666
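After reloading the configuration (ifreload -a with ifupdown2 on Proxmox), the VXLAN parameters can be verified with iproute2; the id, dstport, and nolearning values should match what was configured above:

```shell
# Show detailed VXLAN attributes; expect to see
#   vxlan id 666 ... dstport 4789 ... nolearning
# and mtu 9000 on the interface.
ip -d link show vxlan666

# Confirm vxlan666 is enslaved to the bridge:
ip link show br_ceph
```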
/etc/frr/frr.conf:
Code:
router bgp 65002
 bgp router-id [LOOPBACK IP]
 bgp graceful-restart-disable
 neighbor LEAF peer-group
 neighbor LEAF remote-as 65001
 neighbor LEAF capability dynamic
 neighbor [IP] peer-group LEAF
 neighbor [IP] peer-group LEAF
 !
 address-family ipv4 unicast
  network [LOOPBACK IP]/32
  neighbor LEAF allowas-in
  maximum-paths 8
 exit-address-family
 !
 address-family l2vpn evpn
  neighbor LEAF activate
  neighbor LEAF allowas-in
  advertise-all-vni
  advertise-svi-ip
  advertise ipv4 unicast
 exit-address-family
exit
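To confirm that the SVI IP and remote MACs are actually being advertised and learned, FRR's standard vtysh show commands can be used (output depends on your fabric):

```shell
# MACs known per VNI, local and remote; remote entries should list
# the other nodes' VTEP IPs once EVPN is working:
vtysh -c 'show evpn mac vni all'

# EVPN routes received/advertised, including the type-2 (MAC/IP)
# route carrying the SVI IP after 'advertise-svi-ip':
vtysh -c 'show bgp l2vpn evpn route'
```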