Proxmox 7.0 SDN beta test

Hi all,

I've just started playing around with the SDN software, and after reading this thread I'm not sure whether this is a bug or operator error.

However, I ran into the following.

I have two nodes in a cluster:
hv01
hv02

I tried setting up a VXLAN using the same controller; this all worked, and I saw VNET1 appear on all nodes.
I subsequently tried setting up BGP EVPN, which I could push with no issues, and I saw VNET02 appear on all nodes. However, when I then tried to bind the new vnet to a container, I got an error:


run_buffer: 314 Script exited with status 2
lxc_create_network_priv: 3068 No such device - Failed to create network device
lxc_spawn: 1786 Failed to create the network
__lxc_start: 1999 Failed to spawn container "147"
TASK ERROR: startup for container '147' failed

So I figured that, as this is a beta, it might be a bug, and I reverted the configuration back to VXLAN only, using VNET1, but the above error remains. I can now only use the default bridges on both my nodes.

I have removed both Open vSwitch and the Perl script, but the error remains. Is this a known issue, or is this just me being daft somehow?

Happy to help push this functionality further, as it is indeed pretty brilliant.

Thanks
Can you do a "brctl show" on the node where the LXC container does not start? Do you have the same problem with a VM? (This looks like the vnet bridge is not created on the host, but you should have seen an error when applying the SDN configuration.)

Are you running Proxmox 6.4 with the latest updates?
 
Code:
bridge name     bridge id           STP enabled     interfaces
vmbr0           8000.00fd45fcf384   no              eno1
vmbr10          8000.00fd45fcf384   no              eno1.10
vmbr30          8000.00fd45fcf384   no              eno1.30
vmbr70          8000.00fd45fcf384   no              eno1.70
vnet001         8000.16d801586bf7   no              veth147i0
                                                    veth159i0
                                                    vxlan_vnet001


So, I also updated Proxmox to the latest and greatest, and the issue disappeared with that, so maybe something there? As you can see from the show command, the interface is actually present at the moment.
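For anyone hitting the same startup error, a few commands help confirm whether the vnet bridge actually exists on the affected node (a sketch only; interface names are taken from the output above, and `ifquery` assumes ifupdown2 is installed, as on a stock SDN-enabled node):

```shell
brctl show                       # list all bridges and their member ports
ip -d link show vxlan_vnet001    # VXLAN details: VNI, remote IP, dstport
ifquery --check vnet001          # compare running state against the configuration
journalctl -u networking -n 50   # recent errors from applying the network config
```

If the vnet bridge is missing here, the container start will fail exactly as shown above, since LXC cannot attach the veth to a non-existent bridge.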
 
Oh yes, that's possible; the latest Proxmox 6.4 release has a lot of fixes and new features.
 
One thing I was looking into, and I'm not sure if I understand this correctly: does Proxmox allow bridging a VXLAN to a normal VLAN, similar to what I believe NSX calls a VTEP? The reason is that I have physical hardware I'd like to communicate over L2 with VMs that run on a VXLAN.
 
One thing I was looking into, and I'm not sure if I understand this correctly: does Proxmox allow bridging a VXLAN to a normal VLAN?
Technically it's possible, but I haven't implemented it yet. (It can be done simply with "bridge_ports eth.X" in the vnet where the VXLAN is running.)

Similar to what I believe NSX calls a VTEP? The reason is that I have physical hardware I'd like to communicate over L2 with VMs that run on a VXLAN.
The VTEP is the VXLAN interface.

Currently, in /etc/network/interfaces.d/sdn, we generate something like:

Code:
auto vxlan1000
iface vxlan1000
    ...

auto vnetxxx
iface vnetxxx
    bridge_ports vxlan1000

So we would need to add "bridge_ports vxlan1000 eth0.X".
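As a sketch, with a hypothetical VLAN 30 sub-interface on eth0 as the physical uplink, the vnet stanza would then become:

```
auto vnetxxx
iface vnetxxx
    bridge_ports vxlan1000 eth0.30
```

With that in place, frames from the physical VLAN 30 and from the VXLAN share the same L2 segment through the vnet bridge.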


Note that you can't do this on multiple hosts, because you'd create an L2 loop.
(With physical switches supporting VXLAN, it can be done with stacked/MLAG switches, but AFAIK there is no open-source MLAG daemon for Linux currently.)
(Personally, I'm doing it with an MLAG pair of Arista switches.)


Another way, without MLAG, could be to have an HA VM with two interfaces doing the bridging (maybe pfSense).
I think VMware NSX does it with a VM:
https://docs.vmware.com/en/VMware-N...UID-ECE2893A-A1A6-4D43-93DA-AE4A97ABBF44.html
"The L2 bridge runs on the host that has the NSX DLR control virtual machine"

So I don't know yet what the best way to implement it is. I could implement it directly on the Proxmox host, but without any failover, or implement it in a VM, but that would take more time.
(Or you can do it yourself with a VM, e.g. TNSR: https://docs.netgate.com/tnsr/en/latest/interfaces/types-vxlan.html)


Edit:
https://communities.vmware.com/t5/VMware-NSX-Discussions/NSX-VXLAN-Bridge-questions/td-p/460951

It seems VMware does it at the bridge level, but only on one host at a time. We would need a special daemon to manage the config failover; not so easy.
 
Yeah, I was wondering about the hardware support. I currently only have a Cisco Catalyst 3560-CX, which annoyingly doesn't support VTEP. Maybe I need to look at Arista.

I actually had a look at TNSR for this, like you proposed. It looks promising on the tin, but I didn't quite get it to work. I do have a lab I can use for extensive testing, so if you need to test this with multiple devices, feel free to reach out and I'll be happy to assist.
 
If you want to do it with a VM, it can be done easily with a small Debian VM, for example:
add one interface in the vnet
add one interface in a vmbrX with an eth.X

and in the Debian VM, create a bridge:

Code:
auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet manual

auto bridge
iface bridge inet manual
    bridge-ports eth0 eth1
    bridge-stp off
    bridge-fd 0
 
@XelNaha
Thanks for the suggestion about TNSR, I didn't know about it.
I have a future project to implement some kind of gateway VM in the SDN; I'm currently looking at VyOS, and maybe pfSense (though it lacks an API).
https://bugzilla.proxmox.com/show_bug.cgi?id=3382

The main idea is to be able to auto-configure them from the SDN config, adding NAT/DNS/DHCP/gateway/VXLAN bridge/load balancer/... through the API, as VMs or physical appliances.

But I don't think it'll be ready before next year.
 
Interestingly, I updated the SDN config to include one of my other interfaces:

Code:
cat /etc/network/interfaces.d/sdn 
#version:10

auto vnet001
iface vnet001
        bridge_ports vxlan_vnet001 enp0s31f6.30
        bridge_stp off
        bridge_fd 0
        mtu 1450

auto vxlan_vnet001
iface vxlan_vnet001
        vxlan-id 1000
        vxlan_remoteip 10.20.10.254
        mtu 1450

but after making a network change on my host and applying it, the change was overwritten:

Code:
cat /etc/network/interfaces.d/sdn 
#version:10

auto vnet001
iface vnet001
        bridge_ports vxlan_vnet001 
        bridge_stp off
        bridge_fd 0
        mtu 1450

auto vxlan_vnet001
iface vxlan_vnet001
        vxlan-id 1000
        vxlan_remoteip 10.20.10.254
        mtu 1450

I wasn't expecting the SDN config to be regenerated and overwritten. Is there a way to make this persistent?
 
Forgive my total noob question, but would this allow me to connect 4 nodes split across 2 locations over a WAN? If so, what kind of tunnel or setup do you recommend?

I have:
- 1 datacenter with a private 192.168.0.1 (mgmt) network and a VLAN that has publicly routed IPs
- 1 home Proxmox setup behind NAT, with multiple VLANs internally at home

I'm curious whether I can finally get layer 2 connectivity between home and datacenter Proxmox VMs/nodes.
 
Hi Giovanni,

This is pretty much what VXLAN was made for (ish): carrying L2 over an L3 underlay like the one you describe. Using the SDN in Proxmox to create a VXLAN vnet should let you do this. Mind you, this is L2 only, so all L3 traffic still needs routing, possibly via the node in your DC.

It's a bit hard to see how traffic would flow without a diagram and an understanding of your setup, but in general terms it shouldn't be a problem; in fact, I'm doing this right now with my cluster.
 
I wasn't expecting the SDN config to be regenerated and overwritten. Is there a way to make this persistent?
Yes, it is currently always regenerated. (That's why I should add some kind of option to manage this.)


But with ifupdown2, you can define the same vnet in another file, like

/etc/network/interfaces.d/custom

Code:
auto vnet001
iface vnet001
        bridge_ports vxlan_vnet001 enp0s31f6.30
        bridge_stp off
        bridge_fd 0
        mtu 1450

or maybe only:

Code:
auto vnet001
iface vnet001
        bridge_ports enp0s31f6.30
        bridge_stp off
        bridge_fd 0

It should be merged by ifupdown2 without conflict.
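A quick way to check the merge (a sketch assuming ifupdown2 and the file names used above):

```shell
ifquery vnet001          # print the merged stanza; bridge_ports should list both members
ifreload -a              # apply the configuration; only changed interfaces are reloaded
ifquery --check vnet001  # verify the running state now matches the merged config
```

Because ifreload only touches interfaces whose config changed, this also survives an SDN "apply" regenerating the sdn file.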
 
Forgive my total noob question, but would this allow me to connect 4 nodes split across 2 locations over a WAN? If so, what kind of tunnel or setup do you recommend?

I have:
- 1 datacenter with a private 192.168.0.1 (mgmt) network and a VLAN that has publicly routed IPs
- 1 home Proxmox setup behind NAT, with multiple VLANs internally at home

I'm curious whether I can finally get layer 2 connectivity between home and datacenter Proxmox VMs/nodes.
It should work with VXLAN, but you need at least one public IP on each host to establish the tunnels.
(For NAT, I don't know if it works; VXLAN uses port udp/4789.)
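One way to see whether the tunnel traffic actually gets through (VXLAN uses udp/4789 by default; behind NAT you would also need that port forwarded to the home node, and the remote side must peer with the NAT'd public IP):

```shell
# run on each node while generating traffic across the vnet;
# encapsulated frames should appear in both directions
tcpdump -ni any udp port 4789
```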
 
So I'm doing some testing with this now. What I haven't been able to figure out is how I can configure my EVPN zone to allow subnets to reach the internet. I tried to set up BGP between my Proxmox cluster and my HA OPNsense router so that OPNsense could see and route between the SDN and the internet, but I'm new to all of this and could use some direction.
 
Hi, you need an EVPN exit gateway somewhere.
pfSense/OPNsense can't do EVPN currently (personally I'm doing it with a physical Arista router or Cumulus Linux switches), but you can define Proxmox nodes as exit gateways in the zone configuration.

If you define a Proxmox node as an exit gateway, it'll route traffic between the EVPN network and its default gateway (which could be your OPNsense, with a static route on OPNsense back to the Proxmox node in the reverse direction).

But it should also be possible to use BGP; I added an option in the latest SDN to define a BGP controller for a specific Proxmox node.

Code:
vm ------> vm local proxmox node (anycast gateway) ------> proxmox exit node ------ bgp or default gw ------> opnsense
opnsense --- bgp or static route to evpn network ------> proxmox exit node ------> vm local proxmox node (anycast gateway) ------> vm
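For the static-route leg of the diagram, the idea on the OPNsense side is just a route for the EVPN subnet pointing at the exit node (shown with plain Linux `ip route` syntax for illustration; on OPNsense itself this would be a static route configured in the GUI):

```
# send traffic for the EVPN subnet back via the proxmox exit node
ip route add 192.168.0.0/24 via <ip of proxmoxnode1>
```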


Can you send your /etc/pve/sdn/* config files?



Something like this should work:

Code:
zones.cfg
---------
evpn: evpn
    controller evpnctl
    vrf-vxlan 1000
    exitnodes proxmoxnode1
    ipam pve
    mac 5E:56:FA:59:CC:CB

controllers.cfg
----------------

evpn: evpnctl
    asn 65000
    peers <ip of proxmoxnode1>,<ip of proxmoxnode2>,<ip of proxmoxnode3>

bgp: bgpproxmoxnode1
    asn 65000
    node proxmoxnode1
    peers <ip of pfsense>



vnets.cfg
---------
vnet: vnet1
      zone evpn
      
subnets.cfg
-------
subnet: vxlanzon-192.168.0.0-24
    vnet vnet1
    gateway 192.168.0.1
 
I was looking at deploying BGP EVPN on MS SONiC loaded onto white-box switches. Then I came across this notice, and then this thread on SDN for Proxmox. I'm a newbie to SDN. How would one go about tying a Proxmox SDN deployment in with physical switches running MS SONiC and BGP EVPN? I'm looking more for guidance/mechanisms than actual device configuration; I'm pretty sure I can figure out the actual device config pretty quickly. That said, actual config examples wouldn't be turned down.
 
Hi, SONiC seems to use FRR for managing EVPN, so it should work without any problem.
https://dc-networks.net/category/vxlan-evpn/sonic/

To do EVPN, you don't need any EVPN support on your physical switches/routers; you can do it between Proxmox nodes only, over simple switches.

But it can be great to use a physical switch as the exit gateway from EVPN to the physical network (if you need to route the EVPN network outside, of course).

It really depends on your use case and needs.
 
Thanks spirit.

Here are my config files.
Code:
zones.cfg
---------
evpn: public
        controller c0
        vrf-vxlan 10000
        exitnodes pve02
        ipam pve
        mac E6:F3:66:52:BB:51
        mtu 8900

controllers.cfg
----------------
evpn: c0
        asn 65000
        peers 192.168.128.1 192.168.128.2 192.168.128.3

bgp: bgppve02
        asn 65000
        node pve02
        peers 192.168.10.250
        ebgp 1

vnets.cfg
---------
vnet: customer
        zone public
        tag 12000
     
subnets.cfg
-------
subnet: public-192.168.130.0-24
        vnet customer
        gateway 192.168.130.1

On OPNsense, I have BGP set up with the AS number 65100, and I have added bgppve02 as a neighbor. Neither Proxmox nor OPNsense will exchange routes unless I tick "ebgp".

How would I set this up with static routes, and how might I otherwise set up BGP?
 
That seems normal: you use AS 65000 for Proxmox and AS 65100 for OPNsense, so it's eBGP.

Your SDN config seems correct; can you send the generated /etc/frr/frr.conf file from pve02?
Maybe something is missing to announce the routes. (I haven't tested this part much; I don't remember whether it was working fine.)
 
Code:
log syslog informational
ip forwarding
ipv6 forwarding
frr defaults datacenter
service integrated-vtysh-config
hostname pve02
!
!
vrf vrf_public
 vni 10000
exit-vrf
!
router bgp 65000
 bgp router-id 192.168.10.2
 no bgp default ipv4-unicast
 coalesce-time 1000
 bgp network import-check
 no bgp ebgp-requires-policy
 neighbor BGP peer-group
 neighbor BGP remote-as external
 neighbor BGP bfd
 neighbor 192.168.10.250 peer-group BGP
 neighbor VTEP peer-group
 neighbor VTEP remote-as 65000
 neighbor VTEP bfd
 neighbor 192.168.128.1 peer-group VTEP
 neighbor 192.168.128.3 peer-group VTEP
 !
 address-family ipv4 unicast
  neighbor BGP activate
  neighbor BGP soft-reconfiguration inbound
  import vrf vrf_public
 exit-address-family
 !
 address-family ipv6 unicast
  import vrf vrf_public
 exit-address-family
 !
 address-family l2vpn evpn
  neighbor VTEP activate
  advertise-all-vni
 exit-address-family
!
router bgp 65000 vrf vrf_public
 !
 address-family ipv4 unicast
  redistribute connected
 exit-address-family
 !
 address-family ipv6 unicast
  redistribute connected
 exit-address-family
 !
 address-family l2vpn evpn
  default-originate ipv4
  default-originate ipv6
 exit-address-family
!
line vty
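To debug from here, the usual FRR checks on pve02 would be (a sketch; `vtysh` ships with the frr package that Proxmox installs for EVPN):

```shell
vtysh -c 'show bgp summary'                      # state of the BGP and VTEP peer groups
vtysh -c 'show bgp ipv4 unicast'                 # routes in the default VRF
vtysh -c 'show bgp vrf vrf_public ipv4 unicast'  # routes inside the EVPN VRF
vtysh -c 'show bgp l2vpn evpn route'             # EVPN type-2/type-5 routes
```

If the OPNsense neighbor shows Established in the summary but no prefixes are exchanged, the problem is in the address-family configuration rather than the session itself.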
 
