Proxmox 5.0 and OVS with 'dot1q-tunnel'

Discussion in 'Proxmox VE: Networking and Firewall' started by David Herselman, Apr 10, 2017.

  1. David Herselman

    David Herselman Active Member
    Proxmox Subscriber

    Joined:
    Jun 8, 2016
    Messages:
    198
    Likes Received:
    39
    I'm extremely happy to see Proxmox using Debian and that Debian Stretch will feature kernel 4.10. This includes the recently merged QinQ VLAN support, which should accelerate OVS substantially but, more importantly to us, provides the necessary kernel support for 'dot1q-tunneling'.

    Cisco switches have a feature to encapsulate both untagged and tagged frames within another VLAN on ingress; this is accomplished by setting a port to 'dot1q-tunnel'. Other network gear vendors have a similar feature but call it by different names (D-Link refers to it as QinQ VLANs, Netgear as Double VLANs). Eric Garver has been driving this integration: the kernel code was merged in 4.9, and he has subsequently been hard at work getting his code merged into OVS to complete the feature.

    It would be immensely useful to be able to utilise this within the Proxmox GUI, so that one can set a guest network interface port as either 'access', 'trunk' or 'dot1q-tunnel'.

    Sample OVS command:
    ovs-vsctl set port ovs-p0 vlan_mode=dot1q-tunnel tag=4094
     
  2. fabian

    fabian Proxmox Staff Member
    Staff Member

    Joined:
    Jan 7, 2016
    Messages:
    3,390
    Likes Received:
    523
    please file a feature request at bugzilla.proxmox.com
     
  3. David Herselman

    David Herselman Active Member
    Proxmox Subscriber

    Joined:
    Jun 8, 2016
    Messages:
    198
    Likes Received:
    39
  4. David Herselman

    David Herselman Active Member
    Proxmox Subscriber

    Joined:
    Jun 8, 2016
    Messages:
    198
    Likes Received:
    39
    We make extensive use of QinQ VLANs and were limited to the Linux bridge. The Linux bridge is, however, not VLAN aware, which caused other frustrations that we constantly had to work around.

    I've updated Bugzilla as well but we're happy to report that 'dot1q-tunnel' support in OVS works beautifully...

    Eric Garver, the person primarily responsible for this ultimately landing in OVS and the kernel, published some very interesting articles regarding performance (software QinQ is on par with hardware-accelerated VXLAN) and usage:
    https://developers.redhat.com/blog/2017/06/06/open-vswitch-overview-of-802-1ad-qinq-support/
    https://developers.redhat.com/blog/2017/06/27/open-vswitch-qinq-performance/


    We've completed testing and are planning to move the first couple of production virtual routers and firewalls on to this.


    Herewith our notes:
    Code:
    --- /root/Network.pm    2018-02-19 12:41:12.000000000 +0200
    +++ /usr/share/perl5/PVE/Network.pm     2018-04-05 11:42:12.904719562 +0200
    @@ -251,9 +251,9 @@
         $trunks =~ s/;/,/g if $trunks;
    
         my $cmd = "/usr/bin/ovs-vsctl add-port $bridge $iface";
    -    $cmd .= " tag=$tag" if $tag;
    -    $cmd .= " trunks=". join(',', $trunks) if $trunks;
    -    $cmd .= " vlan_mode=native-untagged" if $tag && $trunks;
    +    $cmd .= " vlan_mode=dot1q-tunnel tag=$tag other-config:qinq-ethtype=802.1q" if $tag;
    +    $cmd .= " cvlans=". join(',', $trunks) if $trunks && $tag;
    +    $cmd .= " trunks=". join(',', $trunks) if $trunks && !$tag;
    
         $cmd .= " -- set Interface $iface type=internal" if $internal;
         system($cmd) == 0 ||

    This sets the VM's network adapter port as a 'dot1q-tunnel' port, which subsequently encapsulates all packets on ingress and pops the outer tag on egress. Voilà: router on a stick, firewall on a stick, etc...
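The three-way logic of the patched add-port path can be sketched as a small shell helper that builds the same command strings the diff above produces (bridge, interface and VLAN values are illustrative):

```shell
#!/bin/sh
# Sketch of the ovs-vsctl command the patched Network.pm would build.
build_add_port() {
    bridge=$1; iface=$2; tag=$3; trunks=$4
    cmd="/usr/bin/ovs-vsctl add-port $bridge $iface"
    if [ -n "$tag" ]; then
        # tag set: dot1q-tunnel port, wrap everything in the outer tag
        cmd="$cmd vlan_mode=dot1q-tunnel tag=$tag other-config:qinq-ethtype=802.1q"
        # tag plus trunks: limit which customer VLANs are carried
        [ -n "$trunks" ] && cmd="$cmd cvlans=$trunks"
    elif [ -n "$trunks" ]; then
        # no tag: plain selective trunking, behaviour unchanged from stock
        cmd="$cmd trunks=$trunks"
    fi
    printf '%s\n' "$cmd"
}

build_add_port vmbr0 tap101i0 950 ""
```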
     
  5. wbumiller

    wbumiller Proxmox Staff Member
    Staff Member

    Joined:
    Jun 23, 2015
    Messages:
    645
    Likes Received:
    84
    You could check whether a VLAN-aware Linux bridge with `/sys/class/net/vmbr0/bridge/vlan_protocol` set to 0x88a8 would work. We currently only expose the VLAN filtering flag in the network settings; adding the VLAN protocol might be useful for some. For now this has to be added manually with post-up scripts in /etc/network/interfaces.
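A sketch of the post-up approach described here, assuming vmbr0 is the VLAN-aware bridge and eno1 its physical port (both names, and the exact bridge options, are illustrative):

```
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
    # switch the bridge's outer tag protocol to 802.1ad (S-tag);
    # not exposed in the GUI, so set it once the bridge is up
    post-up echo 0x88a8 > /sys/class/net/vmbr0/bridge/vlan_protocol
```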
     
  6. David Herselman

    David Herselman Active Member
    Proxmox Subscriber

    Joined:
    Jun 8, 2016
    Messages:
    198
    Likes Received:
    39
    dot1q-tunnel port setting is working perfectly, I would urge Proxmox to consider standardizing on OVS and merging the relatively trivial patch.
     
  7. jonatan

    jonatan New Member

    Joined:
    May 10, 2018
    Messages:
    1
    Likes Received:
    0
    please add native support for qinq
    +1 for this
     
  8. David Herselman

    David Herselman Active Member
    Proxmox Subscriber

    Joined:
    Jun 8, 2016
    Messages:
    198
    Likes Received:
    39
    Running virtual firewalls or routers without requiring many interfaces is possible by using QinQ VLANs. The different sites, zones or services are delivered to the VLAN-aware virtual guest using an outer customer delivery VLAN. This essentially requires Proxmox to pop the outer delivery VLAN tag on egress to the virtual and to wrap all frames originating from the virtual in the same VLAN tag.

    A virtual router example:
    Code:
    vlan950 - Customer A
      vlan10 - internet
      vlan11 - hosting
      vlan12 - Site A
      vlan13 - Site B
      vlan14 - Site C
    The packet would subsequently leave the VM host with an outer tag of 950 and an inner tag of 10, when egressing the customer's virtual router towards the internet.

    This can be achieved either by running Proxmox with the non-VLAN-aware Linux bridge or by using a newer openvswitch package to take advantage of the QinQ VLAN capabilities in kernel 4.9 or later. The non-VLAN-aware Linux bridge is not ideal, as hosting virtual routers and virtual firewalls confuses it: the same source MAC address appears to originate from different directions.

    Setting the 'vlan_protocol' option on the VLAN aware Linux bridge to 802.1ad (0x88a8), instead of the default 802.1Q (0x8100), may work but all switches and routers would then need to handle the outer tag being a service provider VLAN tag.


    Eric Garver headed up the development of the QinQ VLAN patches for OVS as well as the kernel changes. He has additionally posted about the performance of this, showing it to perform better than hardware-assisted VXLAN or GENEVE:

    Proxmox 5.0 and later has the required kernel to support this natively and subsequently also supports later versions of OVS. Debian Sid's openvswitch 2.8.1+dfsg1-6 packages are stable and run on Debian Stretch without problems.


    Applying the patch to '/usr/share/perl5/PVE/Network.pm' (detailed above) simply makes the VM interface an encapsulation port and provides additional security, in that it is no longer possible to hop VLANs, as there subsequently isn't a native VLAN on the port. It's perfectly compatible with simple hosts (e.g. Windows guests) but provides the ability to QinQ VLANs when required. The VLAN trunking option remains functional the same way it is right now...
     
  9. pieteras.meyer

    pieteras.meyer New Member
    Proxmox Subscriber

    Joined:
    Aug 13, 2014
    Messages:
    27
    Likes Received:
    1
    Hi Proxmox

    We also require these features; can we please get this implemented natively in Proxmox?
     
    jonatan likes this.
  10. David Herselman

    David Herselman Active Member
    Proxmox Subscriber

    Joined:
    Jun 8, 2016
    Messages:
    198
    Likes Received:
    39
    Proxmox 6 now includes OvS 2.10; you simply need to patch /usr/share/perl5/PVE/Network.pm as detailed above (modify 3 lines).

    Thanks for upvoting though!
     
  11. pieteras.meyer

    pieteras.meyer New Member
    Proxmox Subscriber

    Joined:
    Aug 13, 2014
    Messages:
    27
    Likes Received:
    1
    Hi David

    Thanks for coming back to me, and thanks for sharing your knowledge. I have one more question: we also make extensive use of QinQ and QinQinQ.

    After upgrading to PVE 6 and making the changes to /usr/share/perl5/PVE/Network.pm, can the options for the use case below be set in the GUI, and is the ovs-vsctl command run automatically? And how is this affected in an HA environment, or when migrating VMs between hosts?

    - Attaching a virtual's network adapter to the bridge and specifying a VLAN ID:
    VM's network configuration line:
    net0: virtio=E4:8D:8C:82:94:97,bridge=vmbr0,tag=1
    Generated command:
    /usr/bin/ovs-vsctl add-port vmbr0 tap101i0 vlan_mode=dot1q-tunnel tag=1 other-config:qinq-ethtype=802.1q
    Result:
    Virtual router can communicate with all other network devices perfectly, herewith examples:
    Interface - VM - Network = Testing
    ether1 - Untagged - VLAN 1 = OK
    ether1-vlan50 - 802.1Q:1 - VLAN 1 with QinQ 50 = OK
    ether1-vlan50-vlan10 - 802.1Q:1_802.1Q:50 - VLAN 1 with QinQinQ 50:10 = OK
    ether1-vlan60 - 802.1Q:1_802.1Q:60 - VLAN 1 with QinQ 60 = OK
     
  12. David Herselman

    David Herselman Active Member
    Proxmox Subscriber

    Joined:
    Jun 8, 2016
    Messages:
    198
    Likes Received:
    39
    Everything works as expected, live migration, GUI management, etc...

    Tested and working in production with untagged (trunks all VLANs), selective trunking or specifying a tag, in which case the OvS port encapsulates all packets received from the VM. This is the same as connecting a firewall or router to a Cisco switch where the port has been configured as 'dot1q-tunnel'.
     
  13. pieteras.meyer

    pieteras.meyer New Member
    Proxmox Subscriber

    Joined:
    Aug 13, 2014
    Messages:
    27
    Likes Received:
    1
    Hi David

    Thanks for your response. I have added this support and it seems to work very well. I do have another question: do you know if it is possible to do double QinQ tagging, i.e. a double S-tag? Something like this:

    Interface - VM - Network
    ether1-vlan50-vlan10 - Untagged - QinQinQ 50:10:untagged

    In Linux it can be done as below:

    ip link add link eth0 name eth0.50 mtu 9000 type vlan id 50
    ip link add link eth0.50 name eth0.50.10 mtu 8996 type vlan id 10
     
  14. David Herselman

    David Herselman Active Member
    Proxmox Subscriber

    Joined:
    Jun 8, 2016
    Messages:
    198
    Likes Received:
    39
    Probably not without customising the GUI and the script you've already edited. OvS does support having the host handle the double tag injection; we haven't made patches for that, however, as we treat OvS like any other standard switch.

    We trunk all or selective VLANs to a few virtual routers and attach servers, tenant routers and firewalls using the dot1q-tunnel mode.
    The virtuals are then free to stack as many tags as they wish using the maximum system MTU size (typically 64KiB) and then still hand the packet off for hardware offloading, which can either transmit the packet as is or add a tag.

    i.e. virtual routers, firewalls and servers can run their own VLANs, and a dot1q-tunnel port would encapsulate the lot.

    VM would run VLAN 10, Proxmox would add VLAN 50.
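A sketch of the guest side of this arrangement (interface name and addressing are illustrative): the VM stacks its own tag, and the host's dot1q-tunnel port wraps everything in the outer tag without any guest cooperation.

```
# Inside the guest: tag VLAN 10 ourselves; the host's dot1q-tunnel
# port on vmbr0 adds the outer VLAN 50 on ingress to the fabric.
ip link add link eth0 name eth0.10 type vlan id 10
ip addr add 192.0.2.1/24 dev eth0.10
ip link set eth0.10 up
```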
     
    #14 David Herselman, Aug 9, 2019
    Last edited: Aug 9, 2019
  15. pieteras.meyer

    pieteras.meyer New Member
    Proxmox Subscriber

    Joined:
    Aug 13, 2014
    Messages:
    27
    Likes Received:
    1
    Hi David

    Thanks, can you give me an example of the ovs-vsctl command to add the double tag in OVS to simulate the below in Linux?

    ip link add link eth0 name eth0.50 mtu 9000 type vlan id 50
    ip link add link eth0.50 name eth0.50.10 mtu 8996 type vlan id 10
     
  16. David Herselman

    David Herselman Active Member
    Proxmox Subscriber

    Joined:
    Jun 8, 2016
    Messages:
    198
    Likes Received:
    39
    Hi,

    Eric Garver's post details examples on configuring QinQ ports on the OvS side so that the VM simply interacts with untagged packets. This is not something we currently have a requirement for and requires two rules to specifically configure ingress and egress operations for each VM interface.

    The post is available here:

    As an overview (bridge name and port numbers illustrative):
    ovs-vsctl set Open_vSwitch . other_config:vlan-limit=2
    ovs-appctl revalidator/purge
    ovs-ofctl add-flow vmbr0 in_port=1,actions=push_vlan:0x88a8,mod_vlan_vid=1000,output:2
    ovs-ofctl add-flow vmbr0 in_port=2,actions=pop_vlan,output:1

    PS: The first two commands globally configure OvS to match on two VLAN tags and should probably be in a system initialisation script. The 3rd and 4th commands would need to be run by the /usr/share/perl5/PVE/Network.pm script, where you would probably need additional logic to look up the OvS port numbers for your actions.

    By comparison, the dot1q-tunnel mode automates this to a certain extent by requiring only a single command. The following QinQ-encapsulates packets received tagged with either VLAN 100 or 200:
    ovs-vsctl set port ovs-p0 vlan_mode=dot1q-tunnel tag=1000 cvlans=100,200


    As I've tried to explain above, we primarily run virtual routers and firewalls which run VLANs and then simply wrap all packets egressing the VM in another VLAN. The resulting packet is QinQ tagged but the VM handles the inner tags and OvS simply pushes and pops the outer tags.
     
  17. pieteras.meyer

    pieteras.meyer New Member
    Proxmox Subscriber

    Joined:
    Aug 13, 2014
    Messages:
    27
    Likes Received:
    1
    Thanks David.

    Appreciate your help
     