Proxmox SDN EVPN: cross-node VM cannot ping, underlay works, L3 EVPN disabled

yuuki08noah

Feb 12, 2026
Hello,
I'm using Proxmox VE 9.0.3 SDN with EVPN across multiple nodes. Here's the situation:
• The underlay network works perfectly between nodes.
• VMs on the same node can communicate with each other.
• L3 EVPN (Type-5) is disabled.
• “Advertise Subnets” is turned off.
• EVPN is configured with the same VXLAN ID across nodes.

However, VMs on different nodes cannot communicate (ping / ICMP fails). Bridge FDBs show entries for remote MACs and remote VTEPs, but traffic still does not reach the destination VM.

What I tried:
• Checked bridge fdb on each node.
• Verified underlay connectivity.
• Confirmed EVPN interface is up and part of the SDN bridge.
• Disabled L3 EVPN and subnet advertisement.
• Ensured rp_filter is 0 and bridge-nf-call-iptables/ip6tables are 0.
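Concretely, these are roughly the commands I used for the checks above (interface and peer names are from my setup):

```shell
# Remote MACs / VTEPs learned on the VNet's VXLAN device
bridge fdb show dev vxlan_madp

# Underlay reachability to the other node's VTEP IP
ping -c 3 10.129.56.107

# Sysctls that commonly break EVPN traffic (all should print 0 here)
sysctl net.ipv4.conf.all.rp_filter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```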

Question:
What could prevent cross-node VM connectivity in Proxmox SDN EVPN under these conditions? Are there any additional settings required to allow remote VM communication?
 
Do you have the firewall active?

What does your configuration look like?

Code:
cat /etc/network/interfaces
cat /etc/network/interfaces.d/sdn

ip a
ip r

cat /etc/pve/sdn/zones.cfg
cat /etc/pve/sdn/vnets.cfg
cat /etc/pve/sdn/controllers.cfg

cat /etc/frr/frr.conf

vtysh -c 'show bgp summary'
vtysh -c 'show bgp l2vpn evpn'

Also, the respective configuration of a VM would be interesting:

Code:
qm config <VMID>
 
The firewall is inactive:
Code:
ubuntu@k8s-dev-101:~$ sudo ufw status
Status: inactive

Bash:
root@node94:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface nic0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.129.56.94/24
        gateway 10.129.56.1
        bridge-ports nic0
        bridge-stp off
        bridge-fd 0


source /etc/network/interfaces.d/*
# BEGIN ANSIBLE MANAGED BLOCK (SDN LOOPBACK)
auto lo:0
iface lo:0 inet static
    address 10.255.255.94/32
# END ANSIBLE MANAGED BLOCK (SDN LOOPBACK)

root@node94:~# cat /etc/network/interfaces.d/sdn
#version:10

auto madp
iface madp
        address 172.16.0.1/16
        post-up iptables -t nat -A POSTROUTING -s '172.16.0.0/16' -o vmbr0 -j SNAT --to-source 10.129.56.94
        post-down iptables -t nat -D POSTROUTING -s '172.16.0.0/16' -o vmbr0 -j SNAT --to-source 10.129.56.94
        post-up iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
        post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
        hwaddress BC:24:11:4C:BB:1B
        bridge_ports vxlan_madp
        bridge_stp off
        bridge_fd 0
        mtu 1450
        ip-forward on
        arp-accept on
        vrf vrf_madp

auto vrf_madp
iface vrf_madp
        vrf-table auto
        post-up ip route del vrf vrf_madp unreachable default metric 4278198272

auto vrfbr_madp
iface vrfbr_madp
        bridge-ports vrfvx_madp
        bridge_stp off
        bridge_fd 0
        mtu 1450
        vrf vrf_madp

auto vrfvx_madp
iface vrfvx_madp
        vxlan-id 2
        vxlan-local-tunnelip 10.129.56.94
        bridge-learning off
        mtu 1450

auto vxlan_madp
iface vxlan_madp
        vxlan-id 3
        vxlan-local-tunnelip 10.129.56.94
        bridge-learning off
        mtu 1450

root@node94:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 10.255.255.94/32 scope global lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: nic0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 1c:69:7a:92:8b:85 brd ff:ff:ff:ff:ff:ff
    altname enp0s31f6
    altname enx1c697a928b85
13: tap102i0: <BROADCAST,MULTICAST,PROMISC> mtu 1450 qdisc fq_codel state DOWN group default qlen 1000
    link/ether 5a:37:10:21:75:c2 brd ff:ff:ff:ff:ff:ff
16: tap103i0: <BROADCAST,MULTICAST,PROMISC> mtu 1450 qdisc fq_codel state DOWN group default qlen 1000
    link/ether fa:fe:6a:ac:64:a0 brd ff:ff:ff:ff:ff:ff
17: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 1c:69:7a:92:8b:85 brd ff:ff:ff:ff:ff:ff
    inet 10.129.56.94/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::1e69:7aff:fe92:8b85/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
19: madp: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master vrf_madp state UP group default qlen 1000
    link/ether bc:24:11:4c:bb:1b brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.1/16 scope global madp
       valid_lft forever preferred_lft forever
    inet6 fe80::be24:11ff:fe4c:bb1b/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
20: vrf_madp: <NOARP,MASTER,UP,LOWER_UP> mtu 65575 qdisc noqueue state UP group default qlen 1000
    link/ether b2:4a:df:e3:d9:50 brd ff:ff:ff:ff:ff:ff
22: vrfbr_madp: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master vrf_madp state UP group default qlen 1000
    link/ether ae:48:ed:6d:b3:6f brd ff:ff:ff:ff:ff:ff
    inet6 fe80::3845:1fff:fe26:a723/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
23: tap101i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1450 qdisc fq_codel master madp state UNKNOWN group default qlen 1000
    link/ether ce:aa:30:9d:54:6c brd ff:ff:ff:ff:ff:ff
24: vxlan_madp: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master madp state UNKNOWN group default qlen 1000
    link/ether 26:ed:53:65:81:f7 brd ff:ff:ff:ff:ff:ff
25: vrfvx_madp: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master vrfbr_madp state UNKNOWN group default qlen 1000
    link/ether ae:48:ed:6d:b3:6f brd ff:ff:ff:ff:ff:ff

root@node94:~# ip r
default via 10.129.56.1 dev vmbr0 proto kernel onlink
10.129.56.0/24 dev vmbr0 proto kernel scope link src 10.129.56.94
172.16.0.0/16 nhid 53 dev vrf_madp proto bgp metric 20
172.16.0.104 nhid 68 via 10.129.56.107 dev vrfbr_madp proto bgp metric 20 onlink
172.16.0.106 nhid 68 via 10.129.56.107 dev vrfbr_madp proto bgp metric 20 onlink

root@node94:~# cat /etc/pve/sdn/zones.cfg
evpn: madp
        controller madp
        vrf-vxlan 2
        disable-arp-nd-suppression 1
        exitnodes node94,node107
        exitnodes-primary node107
        ipam pve
        mac BC:24:11:4C:BB:1B
        mtu 1450

root@node94:~# cat /etc/pve/sdn/vnets.cfg
vnet: madp
        zone madp
        tag 3

root@node94:~# cat /etc/pve/sdn/controllers.cfg
evpn: madp
        asn 65000
        peers 10.129.56.107,10.129.56.94

root@node94:~# cat /etc/frr/frr.conf
frr version 10.3.1
frr defaults datacenter
hostname node94
log syslog informational
service integrated-vtysh-config
!
!
vrf vrf_madp
 vni 2
exit-vrf
!
router bgp 65000
 bgp router-id 10.129.56.94
 no bgp hard-administrative-reset
 no bgp default ipv4-unicast
 coalesce-time 1000
 no bgp graceful-restart notification
 neighbor VTEP peer-group
 neighbor VTEP remote-as 65000
 neighbor VTEP bfd
 neighbor 10.129.56.107 peer-group VTEP
 !
 address-family ipv4 unicast
  import vrf vrf_madp
 exit-address-family
 !
 address-family ipv6 unicast
  import vrf vrf_madp
 exit-address-family
 !
 address-family l2vpn evpn
  neighbor VTEP activate
  neighbor VTEP route-map MAP_VTEP_IN in
  neighbor VTEP route-map MAP_VTEP_OUT out
  advertise-all-vni
 exit-address-family
exit
!
router bgp 65000 vrf vrf_madp
 bgp router-id 10.129.56.94
 no bgp hard-administrative-reset
 no bgp graceful-restart notification
 !
 address-family ipv4 unicast
  redistribute connected
 exit-address-family
 !
 address-family ipv6 unicast
  redistribute connected
 exit-address-family
 !
 address-family l2vpn evpn
  default-originate ipv4
  default-originate ipv6
 exit-address-family
exit
!
ip prefix-list only_default seq 1 permit 0.0.0.0/0
!
ipv6 prefix-list only_default_v6 seq 1 permit ::/0
!
route-map MAP_VTEP_IN permit 1
exit
!
route-map MAP_VTEP_OUT permit 1
 match ip address prefix-list only_default
 set metric 200
exit
!
route-map MAP_VTEP_OUT permit 2
 match ipv6 address prefix-list only_default_v6
 set metric 200
exit
!
route-map MAP_VTEP_OUT permit 3
exit
!
line vty
!

root@node94:~# vtysh -c 'show bgp summary'
L2VPN EVPN Summary:
BGP router identifier 10.129.56.94, local AS number 65000 VRF default vrf-id 0
BGP table version 0
RIB entries 7, using 896 bytes of memory
Peers 1, using 23 KiB of memory
Peer groups 1, using 64 bytes of memory

Neighbor               V         AS   MsgRcvd   MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd   PfxSnt Desc
node107(10.129.56.107) 4      65000      6853      6773      305    0    0 05:12:56            9        6 FRRouting/10.3.1

Total number of neighbors 1

root@node94:~# vtysh -c 'show bgp l2vpn evpn'
BGP table version is 121, local router ID is 10.129.56.94
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal
Origin codes: i - IGP, e - EGP, ? - incomplete
EVPN type-1 prefix: [1]:[EthTag]:[ESI]:[IPlen]:[VTEP-IP]:[Frag-id]
EVPN type-2 prefix: [2]:[EthTag]:[MAClen]:[MAC]:[IPlen]:[IP]
EVPN type-3 prefix: [3]:[EthTag]:[IPlen]:[OrigIP]
EVPN type-4 prefix: [4]:[ESI]:[IPlen]:[OrigIP]
EVPN type-5 prefix: [5]:[EthTag]:[IPlen]:[IP]

   Network          Next Hop            Metric LocPrf Weight Path
Route Distinguisher: 10.129.56.94:2
 *>  [2]:[0]:[48]:[bc:24:11:fb:56:a3]
                    10.129.56.94(node94)
                                                       32768 i
                    ET:8 RT:65000:3
 *>  [2]:[0]:[48]:[bc:24:11:fb:56:a3]:[32]:[172.16.0.101]
                    10.129.56.94(node94)
                                                       32768 i
                    ET:8 RT:65000:3 RT:65000:2 Rmac:ae:48:ed:6d:b3:6f
 *>  [2]:[0]:[48]:[bc:24:11:fb:56:a3]:[128]:[fe80::be24:11ff:fefb:56a3]
                    10.129.56.94(node94)
                                                       32768 i
                    ET:8 RT:65000:3
 *>  [3]:[0]:[32]:[10.129.56.94]
                    10.129.56.94(node94)
                                                       32768 i
                    ET:8 RT:65000:3
Route Distinguisher: 10.129.56.94:3
 *>  [5]:[0]:[0]:[0.0.0.0]
                    10.129.56.94(node94)
                                                       32768 i
                    ET:8 RT:65000:2 Rmac:ae:48:ed:6d:b3:6f
 *>  [5]:[0]:[0]:[::] 10.129.56.94(node94)
                                                       32768 i
                    ET:8 RT:65000:2 Rmac:ae:48:ed:6d:b3:6f
Route Distinguisher: 10.129.56.107:2
 *>i [2]:[0]:[48]:[bc:24:11:6a:68:77]
                    10.129.56.107(node107)
                                                  100      0 i
                    RT:65000:3 ET:8
 *>i [2]:[0]:[48]:[bc:24:11:6a:68:77]:[32]:[172.16.0.104]
                    10.129.56.107(node107)
                                                  100      0 i
                    RT:65000:2 RT:65000:3 ET:8 Rmac:ea:d2:fa:e8:e2:24
 *>i [2]:[0]:[48]:[bc:24:11:6a:68:77]:[128]:[fe80::be24:11ff:fe6a:6877]
                    10.129.56.107(node107)
                                                  100      0 i
                    RT:65000:3 ET:8
 *>i [2]:[0]:[48]:[bc:24:11:9a:b9:d2]
                    10.129.56.107(node107)
                                                  100      0 i
                    RT:65000:3 ET:8
 *>i [2]:[0]:[48]:[bc:24:11:9a:b9:d2]:[32]:[172.16.0.106]
                    10.129.56.107(node107)
                                                  100      0 i
                    RT:65000:2 RT:65000:3 ET:8 Rmac:ea:d2:fa:e8:e2:24
 *>i [2]:[0]:[48]:[bc:24:11:9a:b9:d2]:[128]:[fe80::be24:11ff:fe9a:b9d2]
                    10.129.56.107(node107)
                                                  100      0 i
                    RT:65000:3 ET:8
 *>i [3]:[0]:[32]:[10.129.56.107]
                    10.129.56.107(node107)
                                                  100      0 i
                    RT:65000:3 ET:8
Route Distinguisher: 10.129.56.107:4
 *>i [5]:[0]:[0]:[0.0.0.0]
                    10.129.56.107(node107)
                                                  100      0 i
                    RT:65000:2 ET:8 Rmac:ea:d2:fa:e8:e2:24
 *>i [5]:[0]:[0]:[::] 10.129.56.107(node107)
                                                  100      0 i
                    RT:65000:2 ET:8 Rmac:ea:d2:fa:e8:e2:24

Displayed 15 out of 15 total prefixes

Bash:
root@node94:~# qm config 101
agent: 1
balloon: 0
bios: seabios
boot: order=scsi0;net0
cicustom: 
cipassword: **********
ciupgrade: 0
ciuser: ubuntu
cores: 2
cpu: host
description: Managed by Terraform.
hotplug: network,disk,usb
ide2: vm-os:vm-101-cloudinit,media=cdrom,size=4M
ipconfig0: ip=172.16.0.101/16,gw=172.16.0.1
kvm: 1
memory: 4096
meta: creation-qemu=10.0.2,ctime=1768878313
name: k8s-dev-101
nameserver: 211.182.233.2
net0: virtio=BC:24:11:FB:56:A3,bridge=madp
numa: 0
onboot: 0
protection: 0
scsi0: vm-os:base-9000-disk-0/vm-101-disk-1,iothread=1,replicate=0,size=32G
scsihw: virtio-scsi-single
serial0: socket
smbios1: uuid=a431251b-ca6e-4b35-a6d4-ccd13365dbf3
sockets: 1
 
The firewall is inactive
On the PVE host as well?

Which VMs (please provide the IPs) are not able to ping each other?
Can you check via tcpdump on both nodes if traffic leaves / arrives there?

Can you also provide the output of the following command from both involved nodes?

Code:
ip r s vrf vrf_madp
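For the tcpdump check, something along these lines should work (interface names taken from the config posted above; the filter expressions are only a suggestion):

```shell
# On the sending node: does the ICMP leave via the VXLAN underlay?
tcpdump -ni vmbr0 'udp port 4789'

# On the receiving node: does it arrive on the VNet bridge and the VM's tap device?
tcpdump -ni madp icmp
tcpdump -ni tap101i0 icmp
```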
 
On the host as well, the firewall is inactive.

node107 -> local VMs (104, 105, 106): working
node107 -> VMs on the other node (101, 102, 103): not working

Likewise:

node94 -> local VMs (101, 102, 103): working
node94 -> VMs on the other node (104, 105, 106): not working

Here is tcpdump output captured on node94 while pinging VM 101 (172.16.0.101) from node107:

Code:
18:37:49.200951 vrfvx_madp In  IP 172.16.255.107 > 172.16.0.101: ICMP echo request, id 41, seq 10, length 64
18:37:49.200952 vrfbr_madp In  IP 172.16.255.107 > 172.16.0.101: ICMP echo request, id 41, seq 10, length 64
18:37:50.224944 vrfvx_madp In  IP 172.16.255.107 > 172.16.0.101: ICMP echo request, id 41, seq 11, length 64
18:37:50.224945 vrfbr_madp In  IP 172.16.255.107 > 172.16.0.101: ICMP echo request, id 41, seq 11, length 64


Bash:
root@node94:~# ip r s vrf vrf_madp
default nhid 68 via 10.129.56.107 dev vrfbr_madp proto bgp metric 20 onlink
172.16.0.0/16 dev madp proto kernel scope link src 172.16.0.1
172.16.0.105 nhid 68 via 10.129.56.107 dev vrfbr_madp proto bgp metric 20 onlink

Bash:
root@node107:~# ip r s vrf vrf_madp
172.16.0.0/16 dev madp proto kernel scope link src 172.16.0.1
 
It looks like there's no route for the particular VM with IP 172.16.0.101. Does it work if you first ping the anycast gateway (172.16.0.1) from both VMs and only then try pinging between them? The host needs to learn the VM's IP/MAC combination (this happens via the neighbor table); otherwise it cannot announce a type-2 route for it.
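To verify whether that IP/MAC combination has actually been learned and announced, the following could be checked on each node (a sketch; interface and VNet names taken from the config posted earlier):

```shell
# From inside each VM: prime the host's neighbor table via the anycast gateway
ping -c 3 172.16.0.1

# On the host: is the VM's IP/MAC pair now in the neighbor table of the VNet bridge?
ip neigh show dev madp

# In FRR: has a MAC-IP (type-2) route been generated for the VM?
vtysh -c 'show evpn mac vni all'
vtysh -c 'show evpn arp-cache vni all'
```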