Proxmox + Open vSwitch VLANs + ebtables

zervin | New Member | Dec 17, 2012
I have been experimenting with Proxmox and Open vSwitch, and I think it is worth a second look from the dev team. The biggest tripping point at the moment is that there is no persistent name for the vif that gets activated. Without persistence, it is a bit more cumbersome to spin up prebuilt bridges for VLANs. That said, by utilizing Open vSwitch, Proxmox could achieve VLAN network isolation without ever needing to touch or sync a switch configuration. The VLANs can easily be synced across devices, and it would also be possible to build pseudo-distributed switches spanning multiple devices.
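To make that concrete, here is roughly the kind of thing I have been doing by hand (just a sketch; the bridge, uplink, and tap names are placeholders, and the tap name is exactly the part that is not persistent today):

    # Create an OVS bridge and attach the physical uplink to it
    ovs-vsctl add-br vmbr1
    ovs-vsctl add-port vmbr1 eth1

    # Attach a guest's tap interface as a VLAN 10 access port; the
    # isolation happens on the host, with no switch configuration at all
    ovs-vsctl add-port vmbr1 tap100i0 tag=10

The problem is that last line: without a persistent vif name, there is nothing stable to pre-build it against.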

We have been using Proxmox for a couple of years now, and of all the solutions available, nothing comes close to being as easy to deploy and maintain as Proxmox. The one thing we find ourselves lacking is a consistent method of network isolation. Blind bridges are OK, but they only allow traffic within a single device, and that is fine only until you have a failure or heavy load and everything goes down. We want to be able to manage the isolation from the Proxmox servers themselves, rather than needing to manage the switch side and do the VLAN tango. Open vSwitch, and to some degree ebtables, would be a nice way to tackle network isolation within Proxmox.
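By ebtables I mean layer-2 filtering along these lines (a sketch only; the tap+ interface glob is my assumption about how the guest interfaces are named):

    # Refuse to forward frames between guest interfaces on the same
    # bridge, so guests can reach the uplink but not each other
    ebtables -A FORWARD -i tap+ -o tap+ -j DROP

    # Drop 802.1Q-tagged frames arriving from guests, so a guest
    # cannot inject its own tags and hop VLANs
    ebtables -A FORWARD -p 802_1Q -i tap+ -j DROP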

It would be nice to hear what the dev team has in mind, and whether they have considered this type of configuration. It would really be a special feather in the Proxmox cap to handle these types of configurations.
 

You can already manage VLANs in the GUI, so what do you miss exactly? Second, Open vSwitch does not work well with iptables, so that is a showstopper for me.
 

"You can manage VLANs in the GUI after setting up the VLANs on a managed switch" is how I assumed it worked. Does the Proxmox VLAN tagging work for isolation without a complementing hardware setup? With Open vSwitch, you could eliminate the need for a hardware VLAN setup at all. It also means that a switch replacement would not mean a configuration replacement as well; any dumb switch would work fine.
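To make the contrast concrete, my understanding is that the stock Linux-bridge approach looks roughly like this in /etc/network/interfaces (a sketch; the names are examples), and the physical switch port still has to be configured as a trunk carrying that VLAN:

    # Stock approach: a VLAN subinterface bridged per VLAN; the
    # managed switch must carry VLAN 5 on the trunk to this host
    auto vmbr0v5
    iface vmbr0v5 inet manual
        bridge_ports eth0.5
        bridge_stp off
        bridge_fd 0

With Open vSwitch the tagging stays on the hosts, and an unmanaged switch just passes the tagged frames through, so there is no switch-side configuration to keep in sync.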

As for iptables, I think that sends us back to "please give us a KVM-only kernel". I assume the iptables support is there for managing OpenVZ, which seems like a perpetual hack and sore spot. I think the reality of the situation is that the current path for OpenVZ is not healthy or sustainable, but the alternatives aren't much better. A split kernel needs to be an option. For the record, I was able to get Open vSwitch working with the 3.2.0 packages in Debian Squeeze backports.
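Something along these lines should reproduce it (a sketch; the exact package names may differ, and the source build of Open vSwitch is my assumption, since I am not sure prebuilt packages exist for Squeeze):

    # Pull the 3.2.0 kernel from squeeze-backports
    echo "deb http://backports.debian.org/debian-backports squeeze-backports main" \
        > /etc/apt/sources.list.d/backports.list
    apt-get update
    apt-get -t squeeze-backports install linux-image-amd64

    # Then build Open vSwitch from source against that kernel so the
    # datapath module matches
    ./configure --with-linux=/lib/modules/$(uname -r)/build
    make && make install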
 

I am shiny new to serious consideration of using Proxmox in production, but networking flexibility is what I am still trying to fully understand. I have installed Proxmox and can see it comes out of the box with a Linux bridge, and I have also heard of folks touting vde2 as possible with Proxmox.

But my present thinking/preference would be to use Open vSwitch with Proxmox, so I have been researching how possible this would be. This forum post hits the nail on the head for me, and immediately helps me understand that there is no current straightforward support for Open vSwitch in Proxmox.


Part of the appeal Proxmox holds for me now (over a straight KVM solution, or VirtualBox, or even OpenStack) is the OpenVZ support. So, while I support the idea of Open vSwitch integration, I would still want to be able to use OpenVZ containers. Is this a contradiction? Does a split kernel mean being able to support both sides of the coin via configuration options? If so, I would vote for that.
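For what it's worth, on paper the combination does not look impossible: an OpenVZ veth device is just another host interface, so I would naively expect something like this to work (an untested sketch; the container ID and interface names are made up):

    # Give container 101 a veth pair; by default the host end is
    # named veth101.0
    vzctl set 101 --netif_add eth0 --save

    # Attach the host end to an OVS bridge as a VLAN 10 access port
    ovs-vsctl add-port vmbr1 veth101.0 tag=10

Whether that actually behaves is exactly what I would like to hear about.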

Zervin, I would really like to learn more about how you got Open vSwitch working in Proxmox. I would also like to understand whether the only way to get it working is by avoiding the use of OpenVZ containers.

Thanks in advance for any information you can add here. You have made a logical case for consideration, and I hope the Proxmox management/developers give it some thought.
 
