QinQ on Hetzner vSwitch

reflect

Hello there,

I am currently struggling to set up QinQ on my Proxmox Cluster.
My servers are running on the Hetzner infrastructure and are connected via the Hetzner vSwitch feature.

A maximum of five vSwitches can be connected to a server, but I want to isolate more than five DMZ networks from each other. That's why I'm trying to set up VLANs on top of the Hetzner vSwitch VLAN (QinQ).

According to this thread, my current configuration should theoretically work, but the VMs are unable to ping each other, so it seems I'm missing something.


Code:
iface enp35s0 inet manual

...

iface enp35s0.4040 inet manual
  vlan-raw-device enp35s0
  mtu 1400

...

auto vmbr40
iface vmbr40 inet manual
  bridge_ports enp35s0.4040
  bridge_stp off
  bridge_fd 0
#DMZ

The VMs are then connected to vmbr40, and the inner VLAN tag is configured directly in the Proxmox GUI.
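For reference, setting the inner tag from the CLI instead of the GUI would look roughly like this (VM ID 100 and tag 10 are made-up placeholders):

Code:
qm set 100 -net0 virtio,bridge=vmbr40,tag=10

With a non-VLAN-aware bridge like vmbr40, Proxmox should then stack the inner tag on top of the 4040 outer tag, which is the QinQ stacking I'm after.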

I've connected two virtual machines this way and assigned them static IPs. Unfortunately, the machines are unable to communicate with each other on this interface.
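One way to narrow this down (a standard debugging step, using the interface name from my config above) is to watch on the host whether the double-tagged frames actually leave the physical NIC:

Code:
# show link-level headers so both the outer tag (4040) and any inner tag are visible
tcpdump -e -n -i enp35s0 vlan 4040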

I'd be thankful for any hint or comment.
Wish you all a happy holiday!
 
The issue may be that Hetzner's vSwitch feature is already making use of some form of QinQ itself (they would hit the maximum number of VLANs on their switches very quickly otherwise), so you will probably need to open a ticket with their support to find out whether this is actually supported.
 
After reaching out to Hetzner Support, they confirmed that neither QinQ nor VXLAN is possible on top of the Hetzner vSwitches. Given the maximum of five vSwitches that can be connected to a dedicated server and the lack of VLAN support on the physical private switches, it seems impossible to create more than six isolated networks between multiple Hetzner servers (five via vSwitches, plus one via the single physical private network).

If anyone has an idea how to further isolate the various virtual machines in the DMZ from each other, I'd appreciate some feedback.

For now, I will be limited to two separate DMZ networks, because the others are already occupied by Corosync, Ceph, LAN, and WAN.
 
I think the Hetzner vSwitch already uses VXLAN internally, which could explain why you can't do VXLAN or QinQ on top of it.

I'm currently working on the SDN feature for Proxmox (VXLAN routing with an anycast gateway). I'm targeting Proxmox 6.2 to get it ready.
Once it's done, you'll be able to manage VXLAN easily without vSwitches.
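As a rough illustration of the anycast-gateway idea only (made-up address, not the actual SDN implementation; a real setup also needs a shared gateway MAC and route distribution): every node carries the same gateway IP on the VXLAN-backed bridge, so VMs always reach their gateway on the local host:

Code:
# hypothetical: run on every node with the identical address,
# so VMs on any host reach 10.0.40.1 without leaving the node
ip addr add 10.0.40.1/24 dev vmbr1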
 
Thanks for the feedback. I have no experience with SDN whatsoever, so I'm not quite sure how this works. Would this open up the possibility of creating multiple separated networks on a single private interface that doesn't support VLANs? Do you have any information on a possible release date for that functionality? It sounds really great. Thanks a lot for your work.

I reached out to Hetzner Support again, asking for advice on how I could achieve my goal on their infrastructure. It turns out the 5-port 1 GBit/s switch (2€ excl. VAT) I ordered doesn't support VLANs. However, the 12-port 10 GBit/s switch (43€ excl. VAT) they offer is a Ubiquiti ES-16-XG (SFP+), which does in fact support VLANs (see Root Server Hardware for more information).

I am unsure whether the additional 41€ is worth it. It would, however, open up the possibility of upgrading our internal network from 1 GBit/s to 10 GBit/s, which might be a good idea, since there will probably be a lot of traffic from Ceph, Corosync, pfSync, etc.
 
>>Would this open up the possibility of creating multiple separated networks on a single private interface that doesn't support VLANs?
Yes. It'll be using VXLAN (either a simple layer-2 tunnel, or layer 3 with routing, where each Proxmox host carries the same IP as the anycast gateway for the VMs).
So it should work on any network, as long as VXLAN is not already used by the underlay network.
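For the simple layer-2 case, the mechanism can be sketched with plain iproute2 (IDs and addresses are made-up placeholders, not from this thread):

Code:
# hypothetical point-to-point VXLAN, VNI 100; run on host A, swap local/remote on host B
ip link add vxlan100 type vxlan id 100 local 192.0.2.10 remote 192.0.2.20 dstport 4789
ip link set vxlan100 up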

>>Do you have any information on a possible release date for that functionality? It sounds really great. Thanks a lot for your work.
It's almost ready; it should land in Proxmox 6.2 at the latest, maybe earlier (2-3 months max, I think).
 
I actually got this working using VXLAN on Hetzner.
Does this mean that it actually does work and the Hetzner employee was wrong, or that it might cause issues at any time, and I should stop doing this and find another solution?


This is the setup, in /etc/network/interfaces:

The vmbr0.4002 interface is the Hetzner vSwitch VLAN in which all three Proxmox nodes are present.

Proxmox node 1:
Code:
auto vmbr0.4002
iface vmbr0.4002 inet static
        address 192.168.111.14/24
        mtu 1400

auto vmbr1
iface vmbr1 inet manual
        bridge-ports vxlan2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 1400

auto vxlan2
iface vxlan2 inet manual
        vxlan-id 2
        vxlan-remoteip 192.168.111.15
        vxlan-remoteip 192.168.111.16
        mtu 1400

Proxmox node 2:
Code:
auto vmbr0.4002
iface vmbr0.4002 inet static
        address 192.168.111.15/24
        mtu 1400

auto vmbr1
iface vmbr1 inet manual
        bridge-ports vxlan2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 1400

auto vxlan2
iface vxlan2 inet manual
        vxlan-id 2
        vxlan-remoteip 192.168.111.14
        vxlan-remoteip 192.168.111.16
        mtu 1400

Proxmox node 3:
Code:
auto vmbr0.4002
iface vmbr0.4002 inet static
        address 192.168.111.16/24
        mtu 1400

auto vmbr1
iface vmbr1 inet manual
        bridge-ports vxlan2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 1400

auto vxlan2
iface vxlan2 inet manual
        vxlan-id 2
        vxlan-remoteip 192.168.111.14
        vxlan-remoteip 192.168.111.15
        mtu 1400
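To check that the tunnel came up, the flood entries for the other nodes should be visible in the forwarding database (standard iproute2 commands):

Code:
ip -d link show vxlan2
bridge fdb show dev vxlan2

The all-zeros MAC entries in the fdb output are the flood list pointing at the other two VTEPs.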

You can then create VMs and assign them vmbr1 with a VLAN tag of your choice.
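From the CLI that would look something like this (VM ID 101 and tag 20 are placeholders):

Code:
qm set 101 -net0 virtio,bridge=vmbr1,tag=20

Then set the MTU inside the guest, as described below.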

The MTU in your VMs will need to be lower than 1400: the VXLAN encapsulation overhead is 50 bytes, so all your VMs will need an MTU of 1350 or less. Can someone confirm that 50 bytes is enough / safe? I have not had any problems (so far...).
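For what it's worth, the 50 bytes break down as outer Ethernet (14) + outer IPv4 (20) + UDP (8) + VXLAN header (8) = 50, so 1350 fits exactly into the 1400-byte underlay. Strictly, an inner 802.1Q tag adds another 4 bytes, so 1346 would be the conservative choice for tagged frames crossing the tunnel. A quick check from inside a VM (peer address is a placeholder): with a 1350 MTU, the largest non-fragmenting ICMP payload is 1350 - 20 (IP) - 8 (ICMP) = 1322:

Code:
# should succeed at -s 1322 and fail with "message too long" above it
ping -M do -s 1322 10.0.0.2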

If you have a gateway like pfSense, assign vmbr1 without a tag; you can then define the VLANs in pfSense (don't forget to set the MTU to 1350 there as well).
 