Proxmox 7.0 SDN beta test

It'll be possible to use the internal IPAM, or an external IPAM (phpIPAM and NetBox currently), to manage automatic IP address assignment.
I was hoping for this! I have already planned out most of the network in NetBox, including IP assignments to NICs.

Personally, if you want to do simple VLANs and don't have a physical gateway, I'd go with a simple pair of VMs as gateway (pfSense or other). Unless you need to route a lot of bandwidth or PPS, VMs are fine.
Yeah, that was the current plan. Bandwidth/PPS-wise, though, it would be nice to do NIDS/IPS over 10G links to monitor the internal networks.

Thank you!
 
I'm back! After having to perform disaster recovery on my lab I have gotten things back up and running.

Right now, I have 3 Proxmox nodes in a cluster running an EVPN zone. There are two vnets; each vnet has a single subnet with its own gateway.

I can ping from a vm on one vnet/subnet to a vm on another vnet/subnet. I can also ping both gateway ip addresses from both vms in both vnets.

Where I am now stuck is how to allow these vms to talk to the outside world. In the documentation, it states that traffic should be directed to an exit node which will then be directed to that node's default gateway. The documentation also states that routes will be needed in the gateway router to direct traffic back to the exit node. I have not been able to get this part of the process to work.

One option I see is to have a VM running pfSense/OPNsense that is connected to both the zone and an externally connected bridge. This will allow that VM to route between the EVPN zone and the outside internet.

The other, more preferable, option would be to either allow proxmox to route the traffic to my gateway router, or to somehow connect that gateway router into the evpn zone, if that is even possible.

Is there any guidance as to which direction I should take things in?
 
You can define one or more Proxmox nodes as exit gateways in the zone configuration.
The traffic will be routed through those nodes.

for example:

from evpn to external world
------------------------------------------
vm(192.168.0.1)----vnet1----192.168.0.254(node1)------(type-5 route 0.0.0.0/0)------(exit gateway 192.168.0.254)---->node2(10.0.0.1)-----default node2 gateway route-----> 10.0.0.254(your external router)

from external world to evpn
-----------------------------------------

your external router(10.0.0.254)------static route (ip route add 192.168.0.0/24 via 10.0.0.1)------>(10.0.0.1)node2------(exit gateway 192.168.0.254)---->192.168.0.254(node1)------vnet1-----vm(192.168.0.1)
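For reference, the exit node is simply part of the zone definition. A minimal sketch of what that looks like in /etc/pve/sdn/zones.cfg (the zone, controller and node names here are placeholders, not taken from this thread):
Code:
evpn: myzone
    controller mycontroller
    vrf-vxlan 10000
    exitnodes node2
    mtu 1450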




One option I see is to have a VM running pfSense/OPNsense that is connected to both the zone and an externally connected bridge. This will allow that VM to route between the EVPN zone and the outside internet.
AFAIK, pfSense/OPNsense (FreeBSD) don't support EVPN yet, so they can't announce the default type-5 route inside the EVPN network. (I know that VyOS has had preliminary support for some months.)

It's on the roadmap to have a special "Proxmox VM gateway" working with all the different zone models, but it'll take some time before it's finished.
 

Here's some extra info just in case it helps.

Physical network layout:
ISP GW (192.168.10.1) <--> (192.168.10.50) Router
Router (192.168.20.1) <--> Switch
Switch <--> (192.168.20.10) Node 1
Switch <--> (192.168.20.11) Node 2
Switch <--> (192.168.20.12) Node 3

Node 2 [Exit Gateway] (192.168.30.1) <--> (192.168.30.10) VM

The VM can ping the exit gateway, but nothing else.
No other device can ping the exit gateway IP, including Node 2.

Node 2 can ping the VM, but nothing else can.
If I add a route to Node 1 (ip route add 192.168.30.0/24 via 192.168.20.11), it cannot reach the subnet or exit gateway.

Additionally, if I add a similar route to my router (FreeBSD), it cannot reach the subnet or exit gateway.

Nothing I try seems to let the VM reach the world outside the vnet.
 
OK, so you have something like this:
e.g. the VM is running on node1, node2 is configured as the exit gateway in the EVPN zone, and the nodes' default gateway is your router, 192.168.20.1.

from evpn to external world (this direction should work out of the box if you define an exit gateway in the zone)
-----------------------------------------
vm(192.168.30.10)---->192.168.30.1(node1)--------------->192.168.30.1(node2)----192.168.20.11---default gw------>192.168.20.1(router)192.168.10.50

So, for the reverse direction, you need to add a route to 192.168.30.0/24 on your router (and only there).


from external world to evpn (add a route in your router )
-----------------------------------------
(router)192.168.20.1----route to 192.168.30.0/24-------->192.168.20.11(node2)-192.168.30.1------------->192.168.30.1(node1)---->vm(192.168.30.10)



You should then be able to ping between the VM (192.168.30.10) and 192.168.20.1 in both directions.
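On the router itself, that static route would be something along these lines (you mentioned the router is FreeBSD; the Linux form is shown for comparison, adjust to your platform):
Code:
# FreeBSD
route add -net 192.168.30.0/24 192.168.20.11

# Linux
ip route add 192.168.30.0/24 via 192.168.20.11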
 
Hi Alexandre,

I'm trying to get my feet wet with SDN; however, there is one thing I don't quite understand about how it is supposed to work now.
After migrating my OVS setup to a Linux bridge setup, I have the following situation:
I'd like to use a VLAN on the Proxmox host and for the VMs. Traffic comes in tagged. The error occurred when doing this with VLAN tag 1, but the same thing happens with another tag, so it isn't because of tag 1.

/etc/network/interfaces:

Code:
auto eno1
iface eno1 inet manual
        mtu 9000


auto eno2
iface eno2 inet manual
        mtu 9000


auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3
        mtu 9000


auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 9000


auto vl47host
iface vl47host inet static
        address 10.47.47.47/24
        mtu 9000
        vlan-id 47
        vlan-raw-device vmbr1

/etc/network/interfaces.d/sdn:

Code:
auto vl47sdn
iface vl47sdn
        bridge_ports vmbr1.47
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        alias VLAN 47 per SDN on vmbr1

auto vmbr1
iface vmbr1
        bridge-vlan-protocol 802.1q

The error that occurs:
Code:
root@X19-5:~# ifreload -a
warning: bond0: attribute bond-min-links is set to '0'
error: netlink: vmbr1.47: cannot enslave link vmbr1.47 to vl47sdn: operation failed with 'No such device' (19)
error: vl47sdn: bridge port vmbr1.47 does not exist

When I "rewrite" the vlan-tag-bridge from
Code:
auto vl47host
iface vl47host inet static
        address 10.47.47.47/24
        mtu 9000
        vlan-id 47
        vlan-raw-device vmbr1
to
Code:
auto vmbr1.47
iface vmbr1.47 inet static
        address 10.47.47.47/24
        mtu 9000

ifreload -a works fine, but:
Code:
root@X19-5:~# ip a s vmbr1.47
35: vmbr1.47@vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master vl47sdn state UP group default qlen 1000
    link/ether ee:97:cf:b7:cc:cf brd ff:ff:ff:ff:ff:ff
the IP of the bridge is gone.

Am I overlooking something stupidly simple here, have I just not understood how it should be configured, or have I created an edge case here that is bad practice? Or is it because of the Proxmox beta?

Code:
proxmox-ve: 7.0-2 (running kernel: 5.11.22-1-pve)
pve-manager: 7.0-5 (running version: 7.0-5/cce9b25f)
pve-kernel-5.11: 7.0-3
pve-kernel-helper: 7.0-3
pve-kernel-5.4: 6.4-4
pve-kernel-5.11.22-1-pve: 5.11.22-1
pve-kernel-5.4.124-1-pve: 5.4.124-1
ceph: 16.2.4-pve1
ceph-fuse: 16.2.4-pve1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.0.0-1+pve5
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 7.0-3
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-4
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-network-perl: 0.6.0
libpve-storage-perl: 7.0-6
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-1
lxcfs: 4.0.8-pve1
novnc-pve: 1.2.0-3
openvswitch-switch: 2.15.0+ds1-2
proxmox-backup-client: 1.1.10-1
proxmox-backup-file-restore: 1.1.10-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.1-4
pve-cluster: 7.0-2
pve-container: 4.0-3
pve-docs: 7.0-3
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.2-2
pve-i18n: 2.3-1
pve-qemu-kvm: 6.0.0-2
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-5
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.4-pve1

Thanks for this great powerful extension to Proxmox!
 
When I "rewrite" the vlan-tag-bridge from
Code:
auto vl47host
iface vl47host inet static
        address 10.47.47.47/24
        mtu 9000
        vlan-id 47
        vlan-raw-device vmbr1
to
Code:
auto vmbr1.47
iface vmbr1.47 inet static
        address 10.47.47.47/24
        mtu 9000
Yes, it should be this, because I think you can't have two differently named interfaces with the same VLAN.


ifreload -a works fine, but:
Code:
root@X19-5:~# ip a s vmbr1.47
35: vmbr1.47@vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue master vl47sdn state UP group default qlen 1000
    link/ether ee:97:cf:b7:cc:cf brd ff:ff:ff:ff:ff:ff
the IP of the bridge is gone.

This is expected, because you can't have an IP address on an interface enslaved in a bridge (so the IP should be moved to the vl47sdn bridge instead).
Currently it's not possible to manage the local IP address of a specific node in SDN, but you can do it manually:



in your /etc/network/interfaces
Code:
auto vl47sdn
iface vl47sdn inet static
      bridge_ports vmbr1.47
      bridge_stp off
      bridge_fd 0
      address 10.47.47.47/24

This should be merged with the sdn config in /etc/network/interfaces.d/sdn
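If it helps, the effective definition of vl47sdn after the merge would roughly be the union of both stanzas, i.e. something like:
Code:
auto vl47sdn
iface vl47sdn inet static
        bridge_ports vmbr1.47
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        address 10.47.47.47/24
        alias VLAN 47 per SDN on vmbr1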
 
Yes, it should be this, because I think you can't have two differently named interfaces with the same VLAN.
And you rely on this naming scheme from the SDN scripts, which makes sense; that's why I tried that version. Thanks.

This is expected, because you can't have an IP address on an interface enslaved in a bridge (so the IP should be moved to the vl47sdn bridge instead).
Currently it's not possible to manage the local IP address of a specific node in SDN, but you can do it manually:
Ah, that was the part I didn't know and that explains it, thanks so much!

in your /etc/network/interfaces
Code:
auto vl47sdn
iface vl47sdn inet static
      bridge_ports vmbr1.47
      bridge_stp off
      bridge_fd 0
      address 10.47.47.47/24

This should be merged with the sdn config in /etc/network/interfaces.d/sdn
Looks good.
So the only thing I have to take care of, then, is not changing/deleting that interface in SDN. But since the only such case in the end will be VLAN tag 1, which will be configured in SDN with local addresses on the nodes, it shouldn't be a problem. All other VLANs are either only for the host or only for virtual machines.

Thanks again, the explanation that an interface enslaved by a bridge can't have an IP clarifies the whole thing.
 
Hello, I have recently installed Proxmox 7.0-10 and am having issues getting a VM in an EVPN network to talk to the external network.
I currently have a 1-node setup with 2 vnets, each with 1 subnet and gateway defined. I am able to ping between the VMs on different subnets, but if I try to ping the external gateway, no traffic is sent out of the NIC. I also have static routes on my external router that route back to the 10.2.0.0/24 vnet subnet.

Would someone be able to help me understand why traffic is never leaving the NIC of my Proxmox node?

`/etc/network/interfaces`:

Code:
source /etc/network/interfaces.d/*
auto lo
iface lo inet loopback
iface eno3 inet manual
iface eno4 inet manual
auto vmbr0
iface vmbr0 inet static
    address 10.10.0.3/24
    gateway 10.10.0.254
    bridge-ports eno3
    bridge-stp off
    bridge-fd 0
    mtu 1500

`/etc/network/interfaces.d/sdn`:
Code:
#version:17
auto myvnet1
iface myvnet1
    address 10.2.0.254/24
    hwaddress 3E:16:3E:9F:AF:8D
    bridge_ports vxlan_myvnet1
    bridge_stp off
    bridge_fd 0
    mtu 1450
    ip-forward on
    arp-accept on
    vrf vrf_evpnzone
auto myvnet2
iface myvnet2
    address 10.3.0.254/24
    post-up iptables -t nat -A POSTROUTING -s '10.3.0.0/24' -o vmbr0 -j SNAT --to-source 10.10.0.3
    post-down iptables -t nat -D POSTROUTING -s '10.3.0.0/24' -o vmbr0 -j SNAT --to-source 10.10.0.3
    post-up iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
    post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
    hwaddress 3E:16:3E:9F:AF:8D
    bridge_ports vxlan_myvnet2
    bridge_stp off
    bridge_fd 0
    mtu 1450
    ip-forward on
    arp-accept on
    vrf vrf_evpnzone
auto vrf_evpnzone
iface vrf_evpnzone
    vrf-table auto
    post-up ip route add vrf vrf_evpnzone unreachable default metric 4278198272
auto vrfbr_evpnzone
iface vrfbr_evpnzone
    bridge-ports vrfvx_evpnzone
    bridge_stp off
    bridge_fd 0
    mtu 1450
    vrf vrf_evpnzone
auto vrfvx_evpnzone
iface vrfvx_evpnzone
    vxlan-id 10000
    vxlan-local-tunnelip 10.10.0.3
    bridge-learning off
    bridge-arp-nd-suppress on
    mtu 1450
auto vxlan_myvnet1
iface vxlan_myvnet1
    vxlan-id 11000
    vxlan-local-tunnelip 10.10.0.3
    bridge-learning off
    bridge-arp-nd-suppress on
    mtu 1450
auto vxlan_myvnet2
iface vxlan_myvnet2
    vxlan-id 12000
    vxlan-local-tunnelip 10.10.0.3
    bridge-learning off
    bridge-arp-nd-suppress on
    mtu 1450

`/etc/pve/sdn/controllers.cfg`:

Code:
evpn: evpnctl
    asn 65000
    peers 10.10.0.3

`/etc/pve/sdn/subnets.cfg`:

Code:
subnet: evpnzone-10.2.0.0-24
    vnet myvnet1
    gateway 10.2.0.254
subnet: evpnzone-10.3.0.0-24
    vnet myvnet2
    gateway 10.3.0.254
    snat 1

`/etc/pve/sdn/vnets.cfg`:

Code:
vnet: myvnet1
    zone evpnzone
    tag 11000
vnet: myvnet2
    zone evpnzone
    tag 12000


`/etc/pve/sdn/zones.cfg`:

Code:
evpn: evpnzone
    controller evpnctl
    vrf-vxlan 10000
    exitnodes pve3
    ipam pve
    mac 3E:16:3E:9F:AF:8D
    mtu 1450
 
@frybin

Yes, can you check what your frr package version is?

"dpkg -l|grep frr" ?

(When Proxmox 7 was released, the frr PVE package version was missing, so the Debian version was the one installed.)

It was fixed last week.




I also have static routes on my external router that route back to the 10.2.0.0/24 vnet subnet.
Have you set up on your external router a route like "route add 10.2.0.0/24 gw 10.10.0.3"?


Does it work for 10.3.0.0/24, where you have enabled SNAT? (It should be NATted to 10.10.0.3.)
 
@spirit

Running `dpkg -l|grep frr` results in
Code:
ii  frr                                  7.5.1-1+pve                    amd64        FRRouting suite of internet protocols (BGP, OSPF, IS-IS, ...)
ii  frr-pythontools                      7.5.1-1+pve                    all          FRRouting suite - Python tools

As for the external route, yes, it's something like you said, and even with SNAT enabled, 10.3.0.0/24 still can't reach out.
It's weird: looking at pcaps and tcpdumps, the node receives external traffic for the 10.2.0.0/24 subnet, but it's never forwarded to the actual network/VM. And from inside the network, 10.2.0.254 receives the traffic from 10.2.0.100, but the traffic never leaves the network, to the point that I can't even ping 10.10.0.3 from 10.2.0.100. But I can easily ping across the 10.2.0.0/24 and 10.3.0.0/24 networks.
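For reference, here is the kind of per-interface capture that shows this (interface names as in the sdn config above; the exact filters are just an example):
Code:
# uplink bridge: does traffic for 10.2.0.0/24 arrive from outside?
tcpdump -ni vmbr0 net 10.2.0.0/24
# vnet bridge: is it forwarded into the vnet?
tcpdump -ni myvnet1 net 10.2.0.0/24
# the zone's VRF bridge
tcpdump -ni vrfbr_evpnzone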
 
@spirit

For some reason, it is showing the default route as unreachable, which is probably why I'm getting this issue.

/etc/frr/frr.conf:
Code:
log syslog informational
ip forwarding
ipv6 forwarding
frr defaults datacenter
service integrated-vtysh-config
hostname pve3
!
!
vrf vrf_evpnzone
 vni 10000
exit-vrf
!
router bgp 65000
 bgp router-id 10.10.0.3
 no bgp default ipv4-unicast
 coalesce-time 1000
 neighbor VTEP peer-group
 neighbor VTEP remote-as 65000
 neighbor VTEP bfd
 !
 address-family ipv4 unicast
  import vrf vrf_evpnzone
 exit-address-family
 !
 address-family ipv6 unicast
  import vrf vrf_evpnzone
 exit-address-family
 !
 address-family l2vpn evpn
  neighbor VTEP activate
  advertise-all-vni
 exit-address-family
!
router bgp 65000 vrf vrf_evpnzone
 !
 address-family ipv4 unicast
  redistribute connected
 exit-address-family
 !
 address-family ipv6 unicast
  redistribute connected
 exit-address-family
 !
 address-family l2vpn evpn
  default-originate ipv4
  default-originate ipv6
 exit-address-family
!
line vty
!

show ip route:
Code:
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       > - selected route, * - FIB route, q - queued, r - rejected, b - backup
B>* 10.2.0.0/24 [200/0] is directly connected, myvnet1 (vrf vrf_evpnzone), weight 1, 1d17h55m
B>* 10.3.0.0/24 [200/0] is directly connected, myvnet2 (vrf vrf_evpnzone), weight 1, 1d17h55m
C>* 10.10.0.0/24 is directly connected, vmbr0, 1d17h55m

show ip route vrf vrf_evpnzone:
Code:
K>* 0.0.0.0/0 [255/8192] unreachable (ICMP unreachable), 1d17h55m
C>* 10.2.0.0/24 is directly connected, myvnet1, 1d17h55m
C>* 10.3.0.0/24 is directly connected, myvnet2, 1d17h55m
 
Hmm, can you try to remove the "post-up ..." line from

"
iface vrf_evpnzone
vrf-table auto
post-up ip route add vrf vrf_evpnzone unreachable default metric 4278198272
"

then run ifreload -a?

+

" ip route del vrf vrf_evpnzone unreachable default metric 4278198272"


Edit:
I have tested on my side, and this rule is indeed blocking routing between the VRF and the default VRF. Without this rule, you should be able to ping 10.10.0.3 from a VM in 10.2.0.0/24 or 10.3.0.0/24.

This rule was added some months ago for security, but it shouldn't be present when a host is an exit node.
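So on the exit node, the stanza in /etc/network/interfaces.d/sdn would end up as just:
Code:
auto vrf_evpnzone
iface vrf_evpnzone
    vrf-table auto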
 
Thank you @spirit for all the help, it seems to be working now. Can't wait to play around more with this feature.

Also, I realized that the post-down rules aren't being executed when the ifreload -a command is run, so the same rule just keeps being appended to the iptables table. Not sure how much of an issue this can be, but I just wanted to bring it up. The screenshot below shows what I am talking about.


[Screenshot attachment: 1626798615726.png, showing the duplicated iptables rules]
 
Thank you @spirit for all the help, it seems to be working now. Can't wait to play around more with this feature.
OK, perfect. (I'm currently on holiday, but I'll try to send a fix soon. Thanks again for reporting this bug!)

You can fix it manually on the exit node by editing
/usr/share/perl5/PVE/Network/SDN/Zones/EvpnPlugin.pm

and removing this line:
Code:
       push @iface_config, "post-up ip route add vrf $vrf_iface unreachable default metric 4278198272";


Also, I realized that the post-down rules aren't being executed when the ifreload -a command is run, so the same rule just keeps being appended to the iptables table. Not sure how much of an issue this can be, but I just wanted to bring it up. The screenshot below shows what I am talking about.
Hmm, I'm not sure how it works on reload (as the interface is not shut down but only reloaded, post-down is maybe not called).
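In the meantime, any duplicates that have already piled up can be removed by hand, something like this (the rule number is just an example):
Code:
# list POSTROUTING rules with their line numbers
iptables -t nat -L POSTROUTING -n --line-numbers
# delete a duplicate entry by its number, repeat as needed
iptables -t nat -D POSTROUTING 3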

 
Hello Again,

I seem to have found another bug where VMs inside an SDN network are unable to connect to the exit node of that network, at least that's what it looks like with my 1-node setup.
So I am able to ping the exit node from the VMs, but if I try to access the web UI or SSH into the PVE node, I immediately get sent TCP resets by the node.

Sincerely,
Fred
 
Hi, do you mean access from the VM to the exit node's SSH or web UI? If yes, this is expected, as node services are running in a different VRF. The exit node is only used as a router.
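A quick way to see that separation on the exit node (standard iproute2 commands):
Code:
# VRF devices present on the node
ip -br link show type vrf
# routes inside the EVPN VRF (where the vnets live)
ip route show vrf vrf_evpnzone
# the default VRF, where sshd / pveproxy listen
ip route show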
 
Hi, do you mean access from the VM to the exit node's SSH or web UI? If yes, this is expected, as node services are running in a different VRF. The exit node is only used as a router.
What I mean is: I have a VM in an EVPN zone with the IP range 10.2.0.0/24 and the gateway being 10.2.0.254. I have a PVE node which is also the exit node for that zone. When I try to ping 10.10.0.3 from, let's say, a VM with the IP 10.2.0.1, it works. But when I try to SSH or connect to 10.10.0.3 from 10.2.0.1, I get an instant TCP reset, which doesn't allow me to connect to 10.10.0.3. This becomes an issue when you have a VPN VM inside the SDN network and can't access the node. Also, as a note, other PVE hosts that are not joined to this cluster but are on the same management network can be accessed just fine.
 
OK, got it. Do you use the Proxmox firewall on these nodes? (I'm not sure where the TCP reset is coming from.) The routing seems to be OK.
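To narrow down where the reset comes from, something like this on the exit node could help (just a sketch):
Code:
# is the Proxmox firewall active on this node?
pve-firewall status
# any rules that reject with a TCP reset?
iptables-save | grep -i tcp-reset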
 
