SDN routes

gomeology

Member
Aug 6, 2022
So I have been messing with SDN, trying to get VMs on different nodes to communicate with each other. After a bunch of failures I got it working. I basically wanted a locked-down subnet internal to Proxmox. This is the last issue I need to solve. Both my nodes are dual-NICed: one NIC for management (vmbr0) and one for VM comms at 2.5 Gb/s (vmbr1). No matter what IPs I use as peers, or whether I use an OSPF fabric, the traffic still goes over vmbr0. Any suggestions? Comms work between VMs; I just want that traffic to go out via vmbr1 on both nodes (which is what the peers are set to in SDN).

Bash:
cat /etc/network/interfaces.d/sdn
#version:30

auto noint
iface noint
        bridge_ports vxlan_noint
        bridge_stp off
        bridge_fd 0
        mtu 1450

auto vxlan_noint
iface vxlan_noint
        vxlan-id 2000
        vxlan_remoteip 192.168.9.49
        mtu 1450
        
cat /etc/pve/sdn/*.cfg
evpn: evpnctrl
        asn 65000
        peers 192.168.9.49, 192.168.9.50

subnet: vxnoint-10.50.0.1-24
        vnet noint
        dhcp-range start-address=10.50.0.50,end-address=10.50.0.100
        gateway 10.50.0.1

vnet: noint
        zone vxnoint
        tag 2000

vxlan: vxnoint
        ipam pve
        peers 192.168.9.50, 192.168.9.49
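One way to verify which path the tunnel is actually taking (standard iproute2 commands; the peer IP and interface name come from the configs above):

Bash:
# Which route and source address the kernel picks to reach the other node's peer IP
ip route get 192.168.9.49

# VXLAN details, including the local (source) address the tunnel is bound to
ip -d link show vxlan_noint

# Forwarding entries pointing at the remote VTEP
bridge fdb show dev vxlan_noint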
 
The VM traffic is going through the VXLAN tunnel (vxlan_remoteip ...), so it uses the peer IP addresses.

If you want to split management and VM traffic across two NICs, you need to set up two different IP addresses on each host, on different subnets (one for management, one for peers). Also, you don't need vmbr0 or vmbr1 at all: since you use EVPN, you can simply set the IP address on the NIC directly and remove vmbrX.
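A minimal sketch of what that could look like on one node, assuming NIC names eno1/eno2 and a separate 192.168.8.0/24 peer subnet (all placeholders, substitute your own):

Bash:
# /etc/network/interfaces (sketch; names and addresses are placeholders)

# Management address directly on the first NIC
auto eno1
iface eno1 inet static
        address 192.168.9.50/24
        gateway 192.168.9.1

# Peer/VXLAN address directly on the 2.5G NIC, on its own subnet;
# no vmbrX needed since EVPN handles the VM side
auto eno2
iface eno2 inet static
        address 192.168.8.50/24

The SDN peers would then be set to the 192.168.8.x addresses, so the tunnel can only source from eno2.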
 
I currently have two IPs on the same subnet: one is management, the other is used specifically for the VMs' NIC. Same setup on both nodes, obviously with different IPs. The peers you see above are the IPs of the VM NICs on node 1 and node 2.

The issue is not setting up management vs. VM networks; it's telling the VXLAN tunnel to use the vmbr1 IPs. Even though they are used in the SDN setup, it is defaulting to the management IPs on vmbr0.
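Since both addresses sit on the same subnet, the kernel picks one of the two connected routes for 192.168.9.0/24 (typically the first one configured), which is why it keeps falling back to vmbr0. As a test only (this is an assumption on my part, and /etc/network/interfaces.d/sdn is regenerated by Proxmox SDN, so manual edits won't survive a reload), ifupdown2 has a vxlan-local-tunnelip attribute that pins the tunnel's source address:

Bash:
# Sketch: pin the VXLAN source to the VM-comms address on this node.
# vxlan-local-tunnelip is an ifupdown2 attribute; /etc/network/interfaces.d/sdn
# is auto-generated, so this is only useful to confirm the diagnosis.
auto vxlan_noint
iface vxlan_noint
        vxlan-id 2000
        vxlan_remoteip 192.168.9.49
        vxlan-local-tunnelip 192.168.9.50
        mtu 1450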
 
Funny enough, a reboot of each node fixed the routing. Interesting...

Edit: I lied, it reverted back. I'm lost.
 