Linux bridge vs OVS bridge

I bet that in 99.9% of cases you will be happy with the regular Linux bridge.

Can you tell a bit more what you actually need?
 
Reason for using Open vSwitch: fancy VLAN support :)

i.e. I have VLAN 666 as the native-untagged VLAN on the vRack/backend. The VMs that need Internet access I bind to VLAN 666, while the rest communicate on their own VLANs.

I also needed to map another interface to another VLAN on another host. That physical interface was originally bound to a Linux bridge and I didn't want to disturb it yet, so I used a veth link between the Linux bridge and the Open vSwitch. The port on the Open vSwitch was assigned a VLAN that was trunked, via the native-untagged setup above, over another VLAN to the remote host's VLAN.

*Lately* Linux bridges have gained better 802.1Q VLAN tagging support, but that wasn't the case a long time ago, when Open vSwitch was the only "real" alternative for VLANs and trunking in the same virtual switch.

I still have some setups I haven't migrated to VLANs yet, where I use multiple Linux bridges to simulate VLANs.
 
*Lately* Linux bridges have gained better 802.1Q VLAN tagging support, but that wasn't the case a long time ago, when Open vSwitch was the only "real" alternative for VLANs and trunking in the same virtual switch.

Well, VLAN support has existed since kernel 3.8, so since 2013 ;)
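For reference, the kernel feature in question is the bridge's built-in VLAN filtering, driven with plain iproute2. A minimal sketch (the bridge name vmbr0 and port eth0 are assumptions, and this needs root on the host):

```shell
# Enable 802.1Q filtering on an existing bridge (kernel >= 3.8)
ip link set dev vmbr0 type bridge vlan_filtering 1
# Allow VLAN 10 tagged on a member port
bridge vlan add dev eth0 vid 10
# Make VLAN 666 the native-untagged (PVID) VLAN on that port
bridge vlan add dev eth0 vid 666 pvid untagged
# Inspect the per-port VLAN table
bridge vlan show
```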


The only advantage of OVS could be DPDK, but that's not supported by Proxmox currently.
Maybe NetFlow/sFlow support too (but that can be done with an external daemon on a Linux bridge as well).

You can do VLAN, QinQ, VXLAN, BGP EVPN, GRE tunnels, IPIP tunnels, ... with a Linux bridge without any problem.
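As an illustration of two of those, here is a sketch with plain iproute2 (interface names, the VNI and the addresses are placeholders, and this needs root):

```shell
# QinQ: outer 802.1ad S-tag 100 carrying an inner 802.1Q C-tag 10 on eth0
ip link add link eth0 name eth0.100 type vlan protocol 802.1ad id 100
ip link add link eth0.100 name eth0.100.10 type vlan id 10

# VXLAN: VNI 42 between two hosts, enslaved to an ordinary Linux bridge
ip link add vxlan42 type vxlan id 42 dstport 4789 local 192.0.2.1 remote 192.0.2.2
ip link set dev vxlan42 master vmbr1 up
```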

(Cumulus Linux has done a great job implementing all these things over the last years, as they use the Linux bridge for their switch OS.)
 
For somebody who has worked on/with Linux since kernel 0.92 and used bridges since ~2000, 3.8 is quite "new" ;)
Yes, me too ;) But OVS is not that old either (~2010), if you compare the VLAN feature.


Personally, I still prefer the Linux bridge over OVS. (If the OVS daemon is killed or OOM-killed, you don't have a network anymore :/ And I have seen a lot of bug reports recently with packet drops and strange stack traces.)
 
I have been thinking about moving from Linux bridges to OVS. The reason I wanted to move to OVS is that I have decided to virtualize pfSense, I make use of about 12 VLANs, and it seems easier to work with VLANs under OVS than with Linux bridges. Also, since I am using Proxmox with a single interface (laptop), I have even more need of VLANs.
 
I have been thinking about moving from Linux bridges to OVS. The reason I wanted to move to OVS is that I have decided to virtualize pfSense, I make use of about 12 VLANs, and it seems easier to work with VLANs under OVS than with Linux bridges. Also, since I am using Proxmox with a single interface (laptop), I have even more need of VLANs.
Just enable the vlan-aware option on the bridge?
 
By default, the Linux bridge doesn't support VLANs.
If you enable the "VLAN aware" option in the GUI (host -> System -> Network -> your bridge's options),
the bridge will transport the VLANs (like OVS), and then you can create the VLAN interfaces in your pfSense.
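In /etc/network/interfaces terms, that option looks roughly like this (a sketch; the NIC name and addresses are placeholders):

```
auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

With this in place, pfSense can define its VLANs on its own virtual NIC and no per-VLAN bridges are needed on the Proxmox side.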
 
By default, the Linux bridge doesn't support VLANs.
If you enable the "VLAN aware" option in the GUI (host -> System -> Network -> your bridge's options),
the bridge will transport the VLANs (like OVS), and then you can create the VLAN interfaces in your pfSense.
So would I still have to create entries in the /etc/network/interfaces file? For instance, I am pretty clear that I need to create an interface for each pfSense bridge (vmbr1 [WAN], vmbr2 [LAN] and vmbr3 [OPT]), so I would create those in /etc/network/interfaces. Now, under pfSense I create VLANs 5, 10, 20, 30, 40, etc. under the OPT interface. Do I need to add entries to /etc/network/interfaces, or would I be good to go just creating the VLANs in pfSense?
 
Yes, me too ;) But OVS is not that old either (~2010), if you compare the VLAN feature.
Still younger than OVS's VLAN support then ;)
Personally, I still prefer the Linux bridge over OVS. (If the OVS daemon is killed or OOM-killed, you don't have a network anymore :/ And I have seen a lot of bug reports recently with packet drops and strange stack traces.)
If the OOM killer kicked in, then you are screwed on the hypervisor in any case. The OOM killer is, well... let's say I have a preference for proper OSes like Solaris, and let's not debate that part.

That said: whichever rocks your dinghy :)

For me, OVS just "worked" way back, since before Proxmox moved away from the Red Hat/CentOS kernels when they still had OpenVZ (yet another NIH issue w.r.t. the Linux kernel devs, before the move to LXC). As I recall that was a 2.4/2.6 kernel, thus still before 3.8, when I had already started to use OVS extensively.


I have been thinking about moving from Linux bridges to OVS. The reason I wanted to move to OVS is that I have decided to virtualize pfSense, I make use of about 12 VLANs, and it seems easier to work with VLANs under OVS than with Linux bridges. Also, since I am using Proxmox with a single interface (laptop), I have even more need of VLANs.

So yes, the typical setup would be something like:

Create the OVS bridge, "delete" the physical interface's config, and attach that physical as an OVSPort to the bridge.
Then you add an OVSIntPort, put the Proxmox host's IP on that, and attach it to the bridge. (I have a slightly more complex setup on my "production" servers, where the physical is native-untagged with a VLAN tag like 666, and then I attach the OVSIntPort to that.)
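In /etc/network/interfaces terms, those steps might look roughly like this (a sketch; the NIC name, addresses and the 666 tag are assumptions):

```
auto eno1
iface eno1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0
    ovs_options vlan_mode=native-untagged tag=666

auto vlan666
iface vlan666 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=666
    address 192.0.2.10/24
    gateway 192.0.2.1

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eno1 vlan666
```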


For OPNsense/pfSense:
1) use VirtIO and not e1000 - I've had lock-ups with e1000 in FreeBSD ;(
2) disable the hardware acceleration - it breaks UDP and thus DHCP

Then, when you attach the OPNsense/pfSense interface, attach it tagless; OVS will then "default" to trunk-capable, and you can create the VLAN interfaces inside OPNsense/pfSense.
For the other VMs, you just add the relevant VLAN tag.
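In the VM config that distinction is just the presence or absence of a tag (a sketch; the VMID file name and MAC addresses are placeholders):

```
# /etc/pve/qemu-server/<vmid>.conf
# pfSense guest: tagless port, so it sees the trunk and handles its own VLANs
net1: virtio=AA:BB:CC:DD:EE:01,bridge=vmbr0
# ordinary guest: access port on VLAN 20
net0: virtio=AA:BB:CC:DD:EE:02,bridge=vmbr0,tag=20
```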

A screenshot of my lab/test server's network setup inside Proxmox: Screenshot 2020-03-05 23.46.27.png
 
I'm just in the middle of trying to implement OVS after reinstalling my 4-node test cluster. The production cluster will be for a small ISP that has a bunch of VLANs. I'm used to working with Cisco switches and was hoping to create a 10G trunk through all the nodes, starting/finishing on one of the big edge switches. The regular bridge mode (I'm probably reading old tutorials and forum posts) seemed to require that every VLAN be explicitly defined on a port, so when I found out about OVS I thought it made sense to set up the cluster using OVS on each node to form the 10G loop; then the virtual machines can all pick up a connection off that loop once it's in place, with all VLANs allowed to flow over it.

Prior to my reinstall I did get a VLAN working on one container, but I could see that implementing different VLANs for all the different services that would someday be required was going to be a pain, IMO.

Now that I have OVS installed on the new nodes, I'm just seeing a new field, "OVS options"... I was hoping for some GUI implementation that would let you list the VLANs and make one the native VLAN if required.

I've been looking for an ideal tutorial of some sort covering Proxmox, OVS and Ceph. Any recommendations?
 
The regular bridge mode (I'm probably reading old tutorials and forum posts) seemed to require that every VLAN be explicitly defined on a port

It seems that with the "new, improved" Linux bridge, that is not needed anymore...

Now that I have OVS installed on the new nodes, I'm just seeing a new field, "OVS options"... I was hoping for some GUI implementation that would let you list the VLANs and make one the native VLAN if required.

I've been looking for an ideal tutorial of some sort covering Proxmox, OVS and Ceph. Any recommendations?

The only things I've been putting in the "OVS options" field are things like vlan_mode=native-untagged.
For the "trunk" ports, the native VLAN goes in the Tag field, and to make it a trunk you'll need to add a vlan_mode; otherwise a tagged interface is an access interface (mapped to that VLAN).
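Spelled out as ovs-vsctl calls (the port names here are assumptions), that logic is roughly:

```shell
# Access port: guest sees untagged frames, VLAN 30 on the wire
ovs-vsctl set port tap101i0 tag=30
# Trunk port with VLAN 666 as native-untagged
ovs-vsctl set port eno1 tag=666 vlan_mode=native-untagged
# Optionally restrict which tagged VLANs the trunk carries
ovs-vsctl set port eno1 trunks=10,20,30
# Inspect the resulting Port record
ovs-vsctl list port eno1
```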
 
Just wondering: would OVS allow for port isolation, something which as far as I can tell is not possible with standard bridges (if I understand this topic correctly - https://forum.proxmox.com/threads/vm-client-isolation.49025/)?

The reason "port isolation" may be preferable to enabling a firewall at the Proxmox level, as I see it, is that this way traffic would flow through the "main" firewall and policies would be set in a single location. (Though I also totally see the argument that all devices should have their own firewall running; on the other hand, that greatly complicates troubleshooting when a firewall issue can be inside the VM, outside it, or at the network level.)

/Edit - (And sorry for rebooting this old discussion, I was looking for exactly this topic and figured I'd ask about this distinction)
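For what it's worth, newer kernels (4.18+) do have a per-port isolation flag on the plain Linux bridge, which may cover this use case without OVS. A sketch, assuming Proxmox-style tap device names:

```shell
# Isolated ports cannot talk to other isolated ports,
# only to non-isolated ones (e.g. the uplink toward the main firewall).
bridge link set dev tap100i0 isolated on
bridge link set dev tap101i0 isolated on
# The uplink port stays non-isolated, so all guest-to-guest
# traffic has to cross the firewall.
```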
 
My "port isolation" is to put each client/stack in its own separate VLAN, and then have a single firewall managing the traffic accordingly. The Open vSwitch then shares/connects to an interface that is 802.1Q-trunked to the other Open vSwitches on the other cluster members, and that way I can "seamlessly" migrate VMs between Proxmox cluster members without worrying about network issues.
 
That said, the OVS world has SDN (Software Defined Networking) features where you can (using a "controller" of sorts) create rules on how to switch traffic based on various criteria, so the notion of port isolation could well be implemented. But not with the same ease as simple VLANs (and here "ease" includes the initial setup, configuration and learning to get it working... especially across Open vSwitch bridges, etc.), unless you have many more than 4000 clients/VMs to isolate.
 
The Proxmox VE SDN technology preview, which was developed by @spirit, also supports setting up zones and, for more complex networks, (BGP) controllers:

https://pve.proxmox.com/pve-docs/chapter-pvesdn.html

As said, it's still a tech preview, but it has been available for a while now, and we know some (experienced) admins who are already using it in production.
 
