Networking SDN Roadmap

gsmitheidw

We have been using a mixture of traditional Linux bridges and OpenVSwitch.

Looking at the roadmap, I can understand the desire for stability, but OpenVSwitch is now well established; it's even available in the GUI (and usable once you run apt install openvswitch-switch). It's also "under the hood" of a lot of off-the-shelf managed physical switches these days, so it's pretty stable.

OpenVSwitch has many more features and I can't see any significant negatives, so my question is: why not make it the default for future versions of Proxmox?
Or maybe have that somewhere in the roadmap?

It would be nice to be able to set it up directly from the installer too. Obviously anybody running larger-scale standardised deployments will have kickstart scripts, Ansible, etc.
 
OpenVSwitch has many more features and I can't see any significant negatives
Well, we have had some bad experiences with it over the years, and using OVS often brought very few benefits. Could you point out some of these features? That might help with understanding where you are coming from.
 
OpenFlow support for SDN is a significant one. LACP bonding that can utilise the full bandwidth of multiple links (better hashing options, basically), tunnelling and encapsulation with VXLAN, QoS capabilities, and more advanced ACLs and security groups.
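To make the VXLAN point concrete, here is a minimal sketch of an overlay towards a peer host using plain ovs-vsctl (the bridge name vmbr1, the peer address 192.0.2.11 and the VNI 100 are just illustrative placeholders, not a recommended layout):

    # create an OVS bridge and add a VXLAN tunnel port towards a peer host
    ovs-vsctl add-br vmbr1
    ovs-vsctl add-port vmbr1 vxlan0 -- set interface vxlan0 type=vxlan \
        options:remote_ip=192.0.2.11 options:key=100

(add-br and add-port are standard Open vSwitch commands; options:key sets the VNI for the tunnel.)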

Also I think OpenVSwitch is a better choice because:

1. If people are using the GUI, they're not going to be doing anything very advanced, so it shouldn't add to any potential support issues.

2. If it's the default, it's easier for support because everybody is using one networking method, regardless of how complex their environment is.

3. If people need to move from a simple setup to something more advanced, or to scale up, having OpenVSwitch already in place gives them an easier migration path.

We've not had any significant issues, but maybe that's because we're later to this than some, and some of the earlier issues in OVS have been resolved by now?
 
I'm working with a Proxmox user to add support for OVS/OVN.


But personally, I think OpenFlow already died 5-8 years ago. (Almost every commercial OpenFlow controller is dead.)

Do you use a specific OpenFlow controller in production?
 
I'm working with a Proxmox user to add support for OVS/OVN.


But personally, I think OpenFlow already died 5-8 years ago. (Almost every commercial OpenFlow controller is dead.)

Do you use a specific OpenFlow controller in production?
We'll probably use Faucet or something open/free. A lack of commercial vendors isn't something that fazes us. Our needs with OpenFlow are somewhat niche; we're in education and we're more "build" than "buy" - so I probably shouldn't have led with OpenFlow, it's probably more significant for us than for others. The design ideas are great even if uptake has been a bit disappointing.
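For what it's worth, attaching an existing OVS bridge to a controller such as Faucet is only a couple of commands (sketch only - the bridge name and controller address below are placeholders; 6653 is the usual OpenFlow port):

    # point vmbr1 at an OpenFlow controller running on a management host
    ovs-vsctl set-controller vmbr1 tcp:192.0.2.5:6653
    # keep forwarding as a normal learning switch if the controller becomes unreachable
    ovs-vsctl set-fail-mode vmbr1 standalone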

But there are lots of other new networking things happening: many physical switches are ONIE-based, SONiC is starting to gain a footing, and there's p4.org - all this SDN stuff isn't going away, and a traditional Linux bridge just doesn't really do enough these days.

The other OpenVSwitch features are probably more significant - in terms of performance it's more efficient at load balancing and traffic aggregation (especially in active-active mode across multiple parallel links) and handles failover faster. It's also more scalable. These are things you'd particularly want in a cluster environment, especially where high availability is important, for latency-critical services, or for shared storage (I'm not too knowledgeable on Ceph, but I'd say it would hugely benefit from these things in a cluster).
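To put the active-active case in concrete terms, an OVS LACP bond with per-flow (L4) hashing looks roughly like this - the bridge and interface names are placeholders, and the switch side obviously has to be configured to match:

    # bond two NICs into an OVS bridge with LACP and balance-tcp hashing
    ovs-vsctl add-bond vmbr1 bond0 eno1 eno2 lacp=active bond_mode=balance-tcp
    # faster LACP timers so link failures are detected sooner
    ovs-vsctl set port bond0 other_config:lacp-time=fast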
 
The main problem currently is that IPAM is not fully implemented yet, and an OpenFlow controller like Faucet needs IPs to be registered manually at VM start.

That's why EVPN is what's implemented currently: dynamic learning of IP/MAC, standard interoperability with physical switches (Arista, Cisco, ...), anycast gateway, distributed controllers. (Most importantly, it's a standard for the control plane, unlike OpenFlow where every controller is different and incompatible.)
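To illustrate why that standardisation matters: the EVPN control plane is just BGP carrying the l2vpn evpn address family, so a physical switch and a hypervisor can peer directly. A minimal FRR-style sketch (the ASN and neighbour address are purely illustrative):

    router bgp 65000
     neighbor 192.0.2.21 remote-as 65000
     address-family l2vpn evpn
      neighbor 192.0.2.21 activate
      advertise-all-vni
     exit-address-family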

An L3 network with ECMP balancing is working too (I'm using it in production), and it could be possible to add QoS/link priority in the future.
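For context, ECMP at the Linux level is just a multipath route - flows get hashed across the next hops. A sketch with placeholder addresses and interfaces:

    # one prefix, two equal-cost next hops; the kernel hashes flows across them
    ip route add 203.0.113.0/24 \
        nexthop via 10.0.0.1 dev eno1 weight 1 \
        nexthop via 10.0.1.1 dev eno2 weight 1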
 
