Proxmox 5.0 and OVS with 'dot1q-tunnel'

I'm extremely happy to see Proxmox using Debian and that Debian Stretch will feature kernel 4.10. This includes recently merged QinQ VLAN support which should accelerate OVS substantially but, more importantly to us, provides the necessary kernel support for 'dot1q-tunneling'.

Cisco switches have a feature to encapsulate both untagged and tagged frames within another VLAN on ingress; this is accomplished by setting a port to 'dot1q-tunnel'. Other network gear vendors have a similar feature but call it by different names (D-Link refers to it as QinQ VLANs, Netgear as Double VLANs). Eric Garver has been driving this integration: the kernel code was merged in 4.9, and he has subsequently been hard at work getting his code merged into OVS to complete the feature.

It would be immensely useful to be able to utilise this within the Proxmox GUI, so that one can set a guest network interface port as either 'access', 'trunk' or 'dot1q-tunnel'.

Sample OVS command:
ovs-vsctl set port ovs-p0 vlan_mode=dot1q-tunnel tag=4094
 
Please file a feature request at bugzilla.proxmox.com
 
We make extensive use of QinQ VLANs and were limited to the Linux bridge. The Linux bridge is, however, not VLAN aware, so this caused other frustrations that we constantly had to work around.

I've updated Bugzilla as well but we're happy to report that 'dot1q-tunnel' support in OVS works beautifully...

Eric Garver, the person primarily responsible for this ultimately landing in OVS and the kernel, has published some very interesting articles regarding performance (software QinQ is on par with hardware-accelerated VXLAN) and usage:
https://developers.redhat.com/blog/2017/06/06/open-vswitch-overview-of-802-1ad-qinq-support/
https://developers.redhat.com/blog/2017/06/27/open-vswitch-qinq-performance/


We've completed testing and are planning to move the first couple of production virtual routers and firewalls on to this.


Herewith our notes:
Code:
--- /root/Network.pm    2018-02-19 12:41:12.000000000 +0200
+++ /usr/share/perl5/PVE/Network.pm     2018-04-05 11:42:12.904719562 +0200
@@ -251,9 +251,9 @@
     $trunks =~ s/;/,/g if $trunks;

     my $cmd = "/usr/bin/ovs-vsctl add-port $bridge $iface";
-    $cmd .= " tag=$tag" if $tag;
-    $cmd .= " trunks=". join(',', $trunks) if $trunks;
-    $cmd .= " vlan_mode=native-untagged" if $tag && $trunks;
+    $cmd .= " vlan_mode=dot1q-tunnel tag=$tag other-config:qinq-ethtype=802.1q" if $tag;
+    $cmd .= " cvlans=". join(',', $trunks) if $trunks && $tag;
+    $cmd .= " trunks=". join(',', $trunks) if $trunks && !$tag;

     $cmd .= " -- set Interface $iface type=internal" if $internal;
     system($cmd) == 0 ||


This sets the VM's network adapter port as a 'dot1q-tunnel' port, which will subsequently encapsulate all packets on ingress and pop the outer tag on egress. Voilà: router on a stick, firewall on a stick, etc.
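
For reference, with this patch applied a VM network card configured with a VLAN tag on an OvS bridge would generate a command roughly like the following (bridge and tap interface names are illustrative):
Code:
/usr/bin/ovs-vsctl add-port vmbr0 tap100i0 vlan_mode=dot1q-tunnel tag=950 other-config:qinq-ethtype=802.1q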
 
You could check whether a VLAN-aware Linux bridge with `/sys/class/net/vmbr0/bridge/vlan_protocol` set to 0x88a8 would work. We currently only expose the VLAN filtering flag in the network settings; adding the VLAN protocol might be useful for some. At the moment this has to be added manually with post-up scripts in /etc/network/interfaces.
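
For anyone wanting to try that suggestion, a minimal sketch of such a configuration, assuming a VLAN-aware bridge named vmbr0 on physical port eno1 (names are illustrative), could look like this in /etc/network/interfaces:
Code:
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
    # switch the bridge to 802.1ad (0x88a8) outer tags; echoing 0x88a8 into the
    # sysfs path mentioned above should achieve the same
    post-up ip link set dev vmbr0 type bridge vlan_protocol 802.1ad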
 
Running virtual firewalls or routers, without requiring many interfaces, is possible by using QinQ VLANs. The different sites, zones or services are delivered to the VLAN aware virtual guest using an outer customer delivery VLAN. This essentially requires Proxmox to pop the outer delivery VLAN tag on egress to the virtual and to wrap all frames originating from the virtual in the same VLAN tag.

A virtual router example:
Code:
vlan950 - Customer A
  vlan10 - internet
  vlan11 - hosting
  vlan12 - Site A
  vlan13 - Site B
  vlan14 - Site C

The packet would subsequently leave the VM host with an outer tag of 950 and an inner tag of 10 when egressing the customer's virtual router towards the internet.
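
As a rough sketch of the guest side of this (interface names are illustrative), the customer's virtual router simply runs ordinary 802.1Q sub-interfaces, while the Proxmox host supplies the outer delivery tag of 950:
Code:
# Inside the customer's virtual router: one 802.1Q sub-interface per service
ip link add link eth0 name eth0.10 type vlan id 10   # internet
ip link add link eth0 name eth0.12 type vlan id 12   # Site A
# On the Proxmox host, the VM's network card is attached with the outer tag:
#   net0: virtio=<MAC>,bridge=vmbr0,tag=950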

This can be achieved either by running Proxmox with the non-VLAN-aware Linux bridge or by using a newer Open vSwitch package to take advantage of the QinQ VLAN capabilities in kernel 4.9 or later. The non-VLAN-aware Linux bridge is not ideal, as hosting virtual routers and virtual firewalls confuses it: the source MAC address appears to originate from different directions.

Setting the 'vlan_protocol' option on the VLAN-aware Linux bridge to 802.1ad (0x88a8), instead of the default 802.1Q (0x8100), may work, but all switches and routers would then need to handle the outer tag being a service provider VLAN tag.


Eric Garver headed up the development of the QinQ VLAN patches for OVS as well as the kernel changes. He has additionally posted about the performance of this (see the links above), showing it to perform better than hardware-assisted VXLAN or GENEVE.

Proxmox 5.0 and later have the required kernel to support this natively and subsequently also support later versions of OVS. Debian Sid's openvswitch 2.8.1+dfsg1-6 packages are stable and run on Debian Stretch without problems.
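
A quick sanity check, should you want to confirm a host has new enough components, is to compare the running kernel against 4.9 and the OvS userspace against 2.8:
Code:
uname -r              # kernel 4.9 or later for QinQ support
ovs-vsctl --version   # Open vSwitch 2.8 or later for dot1q-tunnel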


Applying the patch to '/usr/share/perl5/PVE/Network.pm' (detailed above) simply makes the VM interface an encapsulation port and provides additional security, in that it is no longer possible to hop VLANs as there is no native VLAN on the port. It's perfectly compatible with simple hosts (e.g. Windows guests) but provides the ability to QinQ VLANs when required. The VLAN trunking option remains functional the same way it is right now...
 
Hi David

Thanks for coming back to me, and thanks for sharing your knowledge. I have one more question: we also make extensive use of QinQ and QinQinQ.

After upgrading to PVE 6 and making the changes to /usr/share/perl5/PVE/Network.pm, can the options for the use cases below be configured in the GUI? Is the ovs-vsctl command run automatically? And how is this affected in an HA environment or when migrating VMs between hosts?

- Attaching a virtual's network adapter to the bridge and specifying a VLAN ID:
VM's network configuration line:
net0: virtio=E4:8D:8C:82:94:97,bridge=vmbr0,tag=1
Generated command:
/usr/bin/ovs-vsctl add-port vmbr0 tap101i0 vlan_mode=dot1q-tunnel tag=1 other-config:qinq-ethtype=802.1q
Result:
Virtual router can communicate with all other network devices perfectly, herewith examples:
Interface - VM - Network = Testing
ether1 - Untagged - VLAN 1 = OK
ether1-vlan50 - 802.1Q:1 - VLAN 1 with QinQ 50 = OK
ether1-vlan50-vlan10 - 802.1Q:1_802.1Q:50 - VLAN 1 with QinQinQ 50:10 = OK
ether1-vlan60 - 802.1Q:1_802.1Q:60 - VLAN 1 with QinQ 60 = OK
 
Everything works as expected: live migration, GUI management, etc.

Tested and working in production with untagged (trunks all VLANs), selective trunking or specifying a tag, in which case the OvS port encapsulates all packets received from the VM. This is the same as connecting a firewall or router to a Cisco switch where the port has been configured as 'dot1q-tunnel'.
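
To illustrate the three cases, the patched helper generates commands along these lines (tap interface name taken from the example above, VLAN IDs illustrative):
Code:
# No tag and no trunks: plain port, all VLANs trunked (behaviour unchanged)
/usr/bin/ovs-vsctl add-port vmbr0 tap101i0
# Trunks without a tag: selective trunking (behaviour unchanged)
/usr/bin/ovs-vsctl add-port vmbr0 tap101i0 trunks=10,11,12
# Tag specified: dot1q-tunnel, every frame from the VM is encapsulated in VLAN 1
/usr/bin/ovs-vsctl add-port vmbr0 tap101i0 vlan_mode=dot1q-tunnel tag=1 other-config:qinq-ethtype=802.1q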
 
Hi David

Thanks for your response. I have added this support and it seems to work very well. I do have another question: do you know if it is possible to do double QinQ tagging, i.e. a double S-tag? Something like this:

Interface - VM - Network
ether1-vlan50-vlan10 - Untagged - QinQinQ 50:10:untagged

In Linux it can be done like below:

ip link add link eth0 name eth0.50 mtu 9000 type vlan id 50
ip link add link eth0.50 name eth0.50.10 mtu 8996 type vlan id 10
 
Probably not without customising the GUI and the script you've already edited. OvS does support having the host handle the double tag injection. We however haven't made patches for this, as we treat OvS like any other standard switch.

We trunk all or selective VLANs to a few virtual routers and attach servers, tenant routers and firewalls using the dot1q-tunnel mode.
The virtuals are then free to stack as many tags as they wish using the maximum system MTU size (typically 64KiB) and then still hand the packet off for hardware offloading, which can either transmit the packet as is or add a tag.

i.e. virtual routers, firewalls and servers can run their own VLANs and a dot1q-tunnel port would encapsulate the lot.

VM would run VLAN 10, Proxmox would add VLAN 50.
 
Hi,

Eric Garver's post details examples on configuring QinQ ports on the OvS side so that the VM simply interacts with untagged packets. This is not something we currently have a requirement for and requires two rules to specifically configure ingress and egress operations for each VM interface.

The post is linked earlier in this thread (the 802.1ad QinQ overview article).

As an overview:
ovs-vsctl set Open_vSwitch . other_config:vlan-limit=2
ovs-appctl revalidator/purge
ovs-ofctl add-flow vmbr0 in_port=1,action=push_vlan:0x88a8,mod_vlan_vid=1000,output:2
ovs-ofctl add-flow vmbr0 in_port=2,action=pop_vlan,output:1

PS: The first two commands globally configure OvS to match on two VLAN tags and should probably be in a system initialisation script. The 3rd and 4th commands would need to be run by the /usr/share/perl5/PVE/Network.pm script, where you would probably need additional logic to look up the OvS port numbers for your actions.
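
Should you script this, the OpenFlow port number for a given tap interface can for instance be read from the OvS database (interface name illustrative):
Code:
ovs-vsctl get Interface tap101i0 ofport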

As a comparison, the dot1q-tunnel mode automates this to a certain extent by requiring only a single command. The following QinQ-encapsulates packets arriving tagged with either VLAN 100 or 200:
ovs-vsctl set port ovs-p0 vlan_mode=dot1q-tunnel tag=1000 cvlans=100,200


As I've tried to explain above, we primarily run virtual routers and firewalls which run VLANs and then simply wrap all packets egressing the VM in another VLAN. The resulting packet is QinQ tagged but the VM handles the inner tags and OvS simply pushes and pops the outer tags.
 
...
This sets the VM's network adapter port as a 'dot1q-tunnel' port, which will subsequently encapsulate all packets on ingress and pop the outer tag on egress. Voilà: router on a stick, firewall on a stick, etc.

This is exactly the behavior I'm currently looking to achieve. I am implementing the suggested change in our test system, but I'm hesitant to use the configuration before it is natively implemented, as the transition back may be a challenge.

I haven't seen any updates on actually implementing this feature natively. Has there been any progress regarding OVS in PVE 6 using dot1q-tunneling?
 
libpve-common-perl 6.1 has a new format; herewith the appropriate patch to provide dot1q-tunnel mode on the OvS bridge port that the virtual machine's network card attaches to:
Code:
--- /usr/share/perl5/PVE/Network.pm.orig        2020-05-08 16:54:14.734230861 +0200
+++ /usr/share/perl5/PVE/Network.pm     2020-05-08 16:55:14.739249932 +0200
@@ -249,8 +249,10 @@
     # first command
     push @$cmd, '--', 'add-port', $bridge, $iface;
     push @$cmd, "tag=$tag" if $tag;
-    push @$cmd, "trunks=". join(',', $trunks) if $trunks;
-    push @$cmd, "vlan_mode=native-untagged" if $tag && $trunks;
+    push @$cmd, "vlan_mode=dot1q-tunnel" if $tag;
+    push @$cmd, "other-config:qinq-ethtype=802.1q" if $tag;
+    push @$cmd, "cvlans=". join(',', $trunks) if $trunks && $tag;
+    push @$cmd, "trunks=". join(',', $trunks) if $trunks && !$tag;

     if ($internal) {
        # second command

You don't need to change anything else, besides already running OvS. Proxmox 6.0+ contains all the components needed to support this.

This essentially turns the port into a UNI QinQ port, which wraps both untagged and tagged packets arriving from the VM in another VLAN and strips this outer VLAN on delivery. This provides protection against native VLAN hopping security risks and allows virtuals to run VLANs to segregate traffic.

If you don't specify a VLAN ID on the VM's network interface, the resulting OvS port will essentially function as a trunk port with access to all VLANs flowing through OvS (this behaviour is unchanged by our patch).

If you specify a VLAN ID, OvS will pop the tag on delivery and wrap everything ingressing in a VLAN tag.

The optional 'trunks' parameter also works, if you wish to limit which VLANs to allow.
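
Once a VM has started with the patched helper, the resulting port settings can be inspected to confirm the mode (tap interface name illustrative):
Code:
# expect vlan_mode=dot1q-tunnel and your configured tag when a VLAN ID is set
ovs-vsctl list port tap100i0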
 

Attachments

  • vm_router_no_vlan.png
