Proxmox 7.0 SDN beta test

spirit

Code:
Updated for Proxmox 7.0:
Minimum package versions (available in the no-subscription repo)


libpve-network-perl_0.6.1
ifupdown2_3.1.0-1+pmx3
pve-manager_7.0-10


- a lot of fixes everywhere, so please update before reporting bugs ;)


Hi,

Proxmox 7.0 includes a new SDN (software defined network) feature; it's in beta for now.

I'm the main author of this feature, and I would like to have some feedback from the community to improve it.

Doc is here:

https://pve.proxmox.com/pve-docs/chapter-pvesdn.html


The main idea is to define virtual networks at the datacenter level.
The simplest example is a VLAN network. Instead of defining the VLAN tag on the VM NIC, we define the network at the datacenter level.
This allows defining permissions on the network (like for a storage).
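To make the idea concrete, here is a rough sketch of what such a datacenter-level definition could look like (the file paths, zone/vnet names and field names are my assumptions based on the beta, not an authoritative format):

```
# /etc/pve/sdn/zones.cfg (hypothetical example)
vlan: myzone
        bridge vmbr0

# /etc/pve/sdn/vnets.cfg (hypothetical example)
vnet: vnet10
        zone myzone
        tag 10
```

A VM/CT NIC would then simply select "vnet10" as its bridge, without any per-NIC VLAN tag.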

The SDN feature uses plugins, so it can be extended easily.

Currently, it supports:

layer2 network
---------------------
vlan, qinq (stacked VLANs), vxlan

layer3 network
---------------------
vxlan+bgp-evpn, simple bridge


The bgp-evpn network is the most complex, and a true SDN network. It needs a controller (using the frr routing software) to manage the flows of the bridge.
It allows anycast routing across different VXLAN networks (each Proxmox host has the same IP for each bgp-evpn network, and is the gateway for the VMs/CTs).
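As a rough illustration, the controller drives frr configuration along these lines (the AS number and neighbor address are placeholders; the actual config is generated by the SDN controller plugin, not written by hand):

```
router bgp 65000
 neighbor 192.168.0.2 remote-as 65000
 address-family l2vpn evpn
  neighbor 192.168.0.2 activate
  advertise-all-vni
 exit-address-family
```

`advertise-all-vni` is what lets frr announce the local VXLAN network identifiers to the BGP EVPN peers.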

I think it could also help users on public servers like OVH or Hetzner with different public subnets/failover IPs. (You could easily define one virtual network per subnet.)


If users need some other SDN plugins, I could look at implementing them in the future. (But first, I would like to have zero bugs in the current plugins.)

If you have time to test it and give some feedback in this thread, that would be wonderful.

Thanks !


Alexandre


You can also contact me directly by email : aderumier@odiso.com



Feedback/Need to be fixed:

GUI: the VLAN field on the VM/CT NIC should be greyed out when an SDN vnet is chosen. (Keep it empty for now.)
 
Bonjour Alexandre and thanks for your work.

Two questions:
1. How does this work with already configured OVS Ports and VLANs? I guess it's not recommended to activate this SDN over OVS, or what's your opinion?

2. In the past I had some troubles when installing ifupdown2, but it's too long ago to remember. Is there something "bad" to expect? :)

regards
 
Bonjour Alexandre and thanks for your work.

Two questions:
1. How does this work with already configured OVS Ports and VLANs? I guess it's not recommended to activate this SDN over OVS, or what's your opinion?

Yes, it should work. I have thought about this.

For the vlan plugin, you need to define an existing local OVS or Linux bridge.
It's more like an abstraction, to avoid defining the VLAN on the VM/CT NIC.


(Other plugins like vxlan really do create new bridges, as no existing implementation can be reused.)

2. In the past I had some troubles when installing ifupdown2, but it's too long ago to remember. Is there something "bad" to expect? :)

It depends when "in the past" was ;) Specifically, for OVS, I wrote a native ifupdown2 OVS plugin some months ago, so it should work without any problem.
Also, a lot of fixes have been done, so I think it's OK now. (I haven't seen a bug report on the forum for two months.)

But of course, please test it first on a non-production server.
 
@spirit @t.lamprecht ifupdown2: ok, thanks.

Alexandre, could you elaborate a bit further on how this would happen? Or shall we rather wait for the documentation to cover this part? Unfortunately I haven't got a test-cluster right now to test SDN "over" (or in parallel to) already configured OVS VLANs. Don't ask why I went with OVS for VLANs - at that time I thought it was a good idea ;)

regards
 
I haven't got a test-cluster right now to test SDN "over" (or in parallel to) already configured OVS VLANs. Don't ask why I got on with OVS for VLANs - at that time I thought it was a good idea

A lot of that could also be tested in a virtual test cluster, as a first step. That's how we avoid giving every dev their own full-blown >= 3 node physical cluster here all the time :) Nested virtualization makes even level-2 guests possible.
 
A lot of that could also be tested in a virtual test cluster, as a first step. That's how we avoid giving every dev their own full-blown >= 3 node physical cluster here all the time :) Nested virtualization makes even level-2 guests possible.

I know, I know... But it's not only setting up the virtual cluster (which has been on my bucket list for quite a long time), it's also the configuration of a plausible OVS VLAN environment, on top of which I could test the SDN implementation :)

edit: just found an old virtual cluster (5.x)
 
@spirit @t.lamprecht ifupdown2: ok, thanks.

Alexandre, could you elaborate a bit further on how this would happen? Or shall we rather wait for the documentation to cover this part? Unfortunately I haven't got a test-cluster right now to test SDN "over" (or in parallel to) already configured OVS VLANs. Don't ask why I went with OVS for VLANs - at that time I thought it was a good idea ;)

regards

Well, this is pretty simple:

SDN vlan + OVS (or a Linux bridge with vlan-aware enabled) doesn't create any new bridge and doesn't change the /etc/network/interfaces configuration.
(Currently, ifupdown2 is mandatory because other plugins need a reload, but not here with vlan+ovs.)

Proxmox's classic VLAN management is to set the VLAN on the VM/CT interface.
When the VM is started, Proxmox looks at the VLAN tag in the VM config and sets the tag on the OVS switch port.

Now, with SDN, this is exactly the same, but the VLAN is defined on the virtual network.
When the VM is started, Proxmox looks at the VLAN tag in the virtual network config and sets the tag on the OVS switch port.
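The tag lookup described above can be sketched like this (hypothetical data structures and function names for illustration only; the real code lives in Proxmox's Perl codebase):

```python
# Sketch of the VLAN tag resolution at VM start (hypothetical).

def resolve_vlan_tag(nic, vnets):
    """Return the VLAN tag to set on the OVS switch port for a VM NIC.

    Classic setup: the tag is read from the NIC config itself.
    SDN setup: the NIC references a vnet, and the tag comes from the
    datacenter-level virtual network definition instead.
    """
    if nic.get("bridge") in vnets:
        # SDN: the tag is defined once on the vnet, not per NIC
        return vnets[nic["bridge"]]["tag"]
    # Classic: the tag is defined directly on the VM/CT NIC
    return nic.get("tag")

# Datacenter-level vnet definitions (hypothetical example)
vnets = {"vnet10": {"zone": "zone1", "tag": 10}}

print(resolve_vlan_tag({"bridge": "vnet10"}, vnets))            # -> 10 (SDN vnet)
print(resolve_vlan_tag({"bridge": "vmbr0", "tag": 42}, vnets))  # -> 42 (classic)
```

Either way, the end result on the OVS switch port is identical; only where the tag is stored changes.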
 
@spirit Thank you, this makes kind of sense - so configuration "in parallel" to existing VLAN configs should work, if I understand it right.

Hopefully I'll find some time to upgrade my old virtual 3-node cluster from 5.x to 6.x (@t.lamprecht :) ) , so I'll be able to test.
 
Thanks @spirit for your work on the "Software Defined Network" module.

On the current proposed plugin I would suggest:
- At least for the VXLAN part, autogenerating internal IDs for the zones and networks, and adding a "friendly" name only visible to the tenant/user that has access to them, probably hiding regular users' VXLANs from each other (global admin should see them all). The idea would be to support multitenancy (I have a VXLAN named LAN, and you can too!).
- Having multitenancy at the L3 layer would need support for different VRFs.

Future feature wishlist :D :
- IPSEC VPN (L3/L2)
- L4 load balancing
- MPLS VPN (L3/L2)
- SNAT

Hope to find the resources to install it and report back!
 
- At least for the VXLAN part, autogenerating internal IDs for the zones and networks, and adding a "friendly" name only visible to the tenant/user that has access to them, probably hiding regular users' VXLANs from each other (global admin should see them all). The idea would be to support multitenancy (I have a VXLAN named LAN, and you can too!).
Yes. For multitenancy, it's currently possible to add permissions on a zone, to restrict the view of a zone's vnets to specific users.
Currently I only display the vnet ID in the VM/CT NIC bridge list, but I already have a friendly-name field (not limited in character length); I could add it to the GUI.
I think display filtering at the datacenter level is currently missing in the GUI, so all users will see the SDN section. (I need to polish that.)

What I'm not sure about is whether we want to give a user permission to create his own VXLAN network in a zone (because this needs a network configuration reload on all hosts),
or whether only the superadmin should create the VXLAN for the user.


- Having multitenancy at the L3 layer would need different VRFs support.

This is already done with the bgp-evpn plugin: we have a different VRF for each zone.
(The VRF name is the zone name.)

- L4 load balancing
Maybe with keepalived? (for L3 routed networks only)

- IPSEC VPN (L3/L2)
- MPLS VPN (L3/L2)
For L2, I was planning to add more tunneling options. (ifupdown2 supports gre, ipip, ....)
For MPLS, I need to look at the frr documentation. (I have never set up an MPLS network; I need to read up on that.)
For IPsec, maybe with strongSwan?

For L3, currently I only support bgp-evpn, because it works across multiple nodes.
But maybe I could implement a simple L3 plugin restricted to one node?

It could be easy to implement, and it could be enabled/disabled for each subnet/vnet too.
(for L3 routed networks only)

Hope to find the resources to install it and report back!

These features will come later, after the main network plugins are stable.
(I would like to add DHCP too.)


Thanks !
 
I'm the main author of this feature, and I would like to have some feedback from the community to improve it.

Thank you very much for your contribution. I am waiting to test this out this week.

Question: Can you mix different zone technologies? I mean can I keep a default setup for the machines that are already working and add a zone with BGP to test it out without impacting the rest of the VMs? Can you have VLAN and VxLAN simultaneously?

Thank you,
Rares
 
Question: Can you mix different zone technologies?
yes sure
I mean can I keep a default setup for the machines that are already working and add a zone with BGP to test it out without impacting the rest of the VMs?
Yes, no problem.
Can you have VLAN and VxLAN simultaneously?
Yes; vxlan creates a new bridge, without any physical interfaces in it (only a vxlan interface).


I think the only two kinds of zones you can't use together are vlan and qinq on the same bridge (because qinq is stacked VLANs).
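For illustration, the kind of /etc/network/interfaces configuration the vxlan plugin generates looks roughly like this (the names, VXLAN id and peer addresses are placeholders, not the exact generated output); note that the bridge contains only the vxlan interface, no physical port:

```
auto vxlan_vnet20
iface vxlan_vnet20 inet manual
        vxlan-id 20
        vxlan-remoteip 10.0.0.2
        vxlan-remoteip 10.0.0.3

auto vnet20
iface vnet20 inet manual
        bridge-ports vxlan_vnet20
        bridge-stp off
        bridge-fd 0
```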
 
It depends when "in the past" was ;) Specifically, for OVS, I wrote a native ifupdown2 OVS plugin some months ago, so it should work without any problem.
Hi Spirit
Can you explain more about the ifupdown2 OVS plugin? I also make use of OVS with bond interfaces, and if I install ifupdown2 and apply the config, my running VMs lose network and I have to shut them down and boot them up again before they have network.
 
Hi Spirit
Can you explain more about the ifupdown2 OVS plugin? I also make use of OVS with bond interfaces, and if I install ifupdown2 and apply the config, my running VMs lose network and I have to shut them down and boot them up again before they have network.
#pveversion -v ?

It should be OK with ifupdown2 2.0.1-1+pve8, and /etc/network/interfaces changes without "allow ..." interfaces. (The last ifupdown2 package makes the change for you.)
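For illustration, an OVS bond + bridge in /etc/network/interfaces that the new ifupdown2 can reload natively might look like this (interface names and options are placeholders; the point is the `auto` stanzas replacing the old `allow-ovs`/`allow-vmbr1` ones):

```
auto bond0
iface bond0 inet manual
        ovs_bonds eno1 eno2
        ovs_type OVSBond
        ovs_bridge vmbr1
        ovs_options bond_mode=balance-tcp lacp=active

auto vmbr1
iface vmbr1 inet manual
        ovs_type OVSBridge
        ovs_ports bond0
```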

The problem with previous ifupdown2 versions is that, for OVS, they were still using the old bash ifupdown1 network scripts, and there were race conditions.
I wrote a native OVS plugin for ifupdown2 in Python (it's now upstream: https://github.com/CumulusNetworks/ifupdown2/commit/213d8a409d7b00107621f1ad646481f106feb7b3) to avoid these problems with OVS.

If you still have problems with ifupdown2 2.0.1-1+pve8 + OVS, tell me!
 
#pveversion -v ?

If you still have problems with ifupdown2 2.0.1-1+pve8 + OVS, tell me!

Thanks Alexandre, ifupdown2 2.0.1-1+pve8 seems to work as intended - I just installed it and made some changes, applying without rebooting works.

How big are the risks of bringing down the cluster, if trying out SDN in a lab environment?
 
Thanks Alexandre, ifupdown2 2.0.1-1+pve8 seems to work as intended - I just installed it and made some changes, applying without rebooting works.

How big are the risks of bringing down the cluster, if trying out SDN in a lab environment?

Chances of bringing down the full cluster are really minimal. (I have been working on SDN for a year now.)
(But Murphy's law....)

You just need to test first that ifupdown2 + reload works fine with your local network configuration.
If that's OK, I don't see a case where you could bring down the cluster with the SDN configuration.
 
Chances of bringing down the full cluster are really minimal. (I have been working on SDN for a year now.)
(But Murphy's law....)

Thanks, it works. Three LXCs on three cluster-nodes on a new SDN virtual VLAN-bridge in parallel to a "traditionally" configured OVS VLAN device, really easy as you said - and no Murphy yet, great job!
(*EDIT*: I had to reboot all already running VMs and LXCs to regain net connectivity for those)

Would it make sense to be able to configure multiple VLANs on a single Vnet, or would this create chaos if not properly managed? :)
 
Some feature request for the UI:

- Do not allow a Vnet to be deleted if it is in use, analogous to not being able to delete Zones that are in use by a Vnet.
 
Hi Spirit

thanks for the reply, see below
#pveversion -v ?
pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.34-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-1
pve-kernel-helper: 6.2-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 2.0.1-1+pve8
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.3
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-network-perl: 0.4-4
libpve-storage-perl: 6.1-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-5
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1


The problem with previous ifupdown2 versions is that, for OVS, they were still using the old bash ifupdown1 network scripts, and there were race conditions.
I wrote a native OVS plugin for ifupdown2 in Python (it's now upstream: https://github.com/CumulusNetworks/ifupdown2/commit/213d8a409d7b00107621f1ad646481f106feb7b3) to avoid these problems with OVS.
Do I need to install the python script? ifupdown2/addons/openvswitch.py

What I see is that if I add or make changes and use "apply configuration" in the web GUI, all running VMs lose network connection, and I need to shut down and restart each VM to restore its network.
 
