Proxmox VE and VLAN tag stacking

skraw

Well-Known Member
Aug 13, 2019
Hello all,

can anyone confirm that VLAN tag stacking works with Proxmox VE and bridging configured as "vlan-aware"?
We are trying to make the following work:
Many VLANs arrive at the host encapsulated in an outer VLAN 1008.
The host bridge is vlan-aware. The guest gets an interface with tag=1008, so the outer VLAN tag should be stripped away. The guest handles all the inner VLANs with separate vlan interfaces. Outgoing traffic (of all guest vlan interfaces) is again encapsulated in VLAN 1008 and switched to its destination (outside the host).
Does this work as expected?
--
Regards,
Stephan

PS: Let me explain this in more detail. It looks to us like the way in to the guest does in fact work: the (outer) tag is stripped and the vlans seem to be visible to the guest. But on the way out from the guest the outer tag does not seem to be added again - and this is exactly the feature we need. A tag has to be added back on outgoing packets no matter whether they are already tagged or not. This is kind of a question about the tap device. We found no valid information on the net about tag stacking with tap devices ...
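Expressed with plain iproute2 commands, the stacking we are after would look roughly like this (a sketch only; the guest-side interface name is hypothetical and the commands need root):

```shell
# On the host: the outer 802.1q tag 1008 on the physical NIC (hypothetical eth0)
ip link add link eth0 name eth0.1008 type vlan id 1008
ip link set eth0.1008 up

# Inside the guest: each inner vlan is then just a plain 802.1q sub-interface
# (hypothetical guest NIC name ens18)
ip link add link ens18 name ens18.25 type vlan id 25
ip link set ens18.25 up
```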
 
With a vlan-aware bridge it's a little bit more complex, because you need two bridges for each vlan.

I think this works with ifupdown2:

Code:
auto vmbr0
iface vmbr0 inet manual
       bridge-ports eth0
       bridge-stp off
       bridge-fd 0
       bridge-vlan-aware yes

auto vmbr1
iface vmbr1 inet manual
       bridge-ports vmbr0.408
       bridge-stp off
       bridge-fd 0
       bridge-vlan-aware yes


But it can also work with a non-vlan-aware bridge:

Code:
auto vmbr0
iface vmbr0 inet manual
       bridge-ports eth0.1008
       bridge-stp off
       bridge-fd 0

Then, when you add vlan "X" to a VM, Proxmox will automatically create a bridge vmbr0vX with port eth0.1008.X.
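As a sketch, assuming a VM tagged with vlan 25 in the GUI: Proxmox creates this bridge at runtime rather than in /etc/network/interfaces, but written as an ifupdown stanza for illustration it would be roughly:

Code:
auto vmbr0v25
iface vmbr0v25 inet manual
       bridge-ports eth0.1008.25
       bridge-stp off
       bridge-fd 0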
 
Ok, I can confirm that your second code part works. I tried that.
Unfortunately it does not solve the problem. It is the details that are complex:
- I need one vlan-aware bridge for guests that should be attached to a single "inner" vlan, basically needing an interface like eth0.1008.25.
- On the other hand I need a bridge for the outer vlan to attach guests that should handle the inner vlans themselves, meaning an interface like eth0.1008 - and I mean _for the same outer vlan_.
Since you cannot create two bridges with the same bridge-port, it is vital that one bridge can handle both: the vlan-awareness and the simple bridging of all vlans to one (other) guest.
Is it possible to attach a guest to a vlan-aware bridge without tagging a certain vlan for it (so it simply gets all vlans)?

PS: what do you mean by "ifupdown2"?
 
I think this should work

without a vlan-aware bridge:

Code:
auto vmbr0
iface vmbr0 inet manual
       bridge-ports eth0.1008
       bridge-stp off
       bridge-fd 0

auto vmbr0v25 
iface vmbr0v25 inet manual
       bridge-ports eth0.1008.25
       bridge-stp off
       bridge-fd 0



PS: what do you mean by "ifupdown2"?
ifupdown2 is a new package which replaces ifupdown; it is the software that manages /etc/network/interfaces to create and manage interfaces.
ifupdown2 supports more syntax (apt install ifupdown2).
 
Hm, which means I would have to hand-configure all inner vlans needed by guests as bridges on every host of the cluster...
I guess that's what "vlan-aware" was intended to replace :)

Just checked: I have installed:
ifupdown/stable,now 0.8.35 amd64 [installed]
 
Hm, which means I would have to hand-configure all inner vlans needed by guests as bridges on every host of the cluster...
I guess that's what "vlan-aware" was intended to replace :)

How/where do your guests configure the inner vlan? In the Proxmox GUI, or inside the VM, tagging the interface directly?

For the Proxmox GUI my solution should work:
users can use vmbr0 for the VM interface and choose the inner vlan tag,
or they can choose vmbr0v25, where the inner tag is already defined.
 
There are two use-cases by the guests:
1) One type of guest uses the bridged outer vlan and tags the inner vlan itself inside the guest linux
2) Another type of guest uses the inner-vlan host bridges as simple interfaces to communicate with the type-1 guests' vlan interfaces (defined inside the guest)

The host bridges from 2) are your vmbr0v25 example equivalent.
The bridged outer vlan from 1) is your vmbr0

If there are 100 guests of type 2, and you have 10 hosts, there are 1000 additional bridges to define in /etc/network/interfaces (100 per host). Working but not very nice to look at.
You think that will work with standard ifupdown package?
 
Ok, the double vlan tagged interface definition does not work:

# ifup enp5s0f0.1007.8
Error: 8021q: VLAN device already exists.
ifup: ignoring unknown interface enp5s0f0.1007.8=enp5s0f0.1007.8
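For the record, the stacked (QinQ) interface can be created by hand with iproute2, which sidesteps ifupdown's handling of the double-dot name; a sketch using our interface names (needs root):

```shell
# create the outer tag first, then stack the inner tag on top of it
ip link add link enp5s0f0 name enp5s0f0.1007 type vlan id 1007
ip link add link enp5s0f0.1007 name enp5s0f0.1007.8 type vlan id 8
ip link set enp5s0f0.1007 up
ip link set enp5s0f0.1007.8 up
```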
 
PS: I tried to install your ifupdown2 package but shot down a cluster node by doing that. I do not quite understand why it takes the whole node down and restarts it just because of this installation ...

PS2: After rebooting the node the dual tagged interface came up, which probably means ifupdown2 works.
Interestingly your bridge definition as "vmbr0v25" (or the like) shows "unknown" in the PMX GUI as Interface type. So I renamed it to "vmbr5" and then it shows "linux bridge".
 
Now we encountered other problems:

1) ARP between the external inner vlan and the newly created bridge does not work; in fact it seems no ARP at all works. The guest with its own vlan config (on the other bridge) cannot arp the guest on the inner-vlan host bridge, and neither of them can arp an external switch.

2) The interface naming is a problem. Interface names are required to be 15 chars or shorter. We tried to sidestep problem 1 by configuring a new interface enp5s0f0.1007.28 (instead of .8) and got a name-length error on ifup.
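The 15-character limit comes from the kernel's IFNAMSIZ constant (16 bytes including the terminating NUL, so 15 visible characters). A quick sanity check for candidate names:

```shell
# IFNAMSIZ is 16 bytes including the trailing NUL, so at most 15 visible chars
printf '%s' enp5s0f0.1007.8 | wc -c    # 15 -> fits
printf '%s' enp5s0f0.1007.28 | wc -c   # 16 -> rejected by the kernel
```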
 
PS: I tried to install your ifupdown2 package but shot down a cluster node by doing that. I do not quite understand why it takes the whole node down and restarts it just because of this installation ...
It needs to restart the network, but the whole node? I'm not sure ...


Interestingly your bridge definition as "vmbr0v25" (or the like) shows "unknown" in the PMX GUI as Interface type. So I renamed it to "vmbr5" and then it shows "linux bridge".
Be careful: if you define vlan 25 on a VM attached to vmbr0, Proxmox will create vmbr0v25, and it could break your vmbr5.

1) ARP between the external inner vlan and the newly created bridge does not work; in fact it seems no ARP at all works. The guest with its own vlan config (on the other bridge) cannot arp the guest on the inner-vlan host bridge, and neither of them can arp an external switch.
Hmm, isn't that the correct behaviour? ARP and other network traffic can only work within the same inner+outer vlan.

2) The interface naming is a problem. Interface names are required to be 15 chars or shorter. We tried to sidestep problem 1 by configuring a new interface enp5s0f0.1007.28 (instead of .8) and got a name-length error on ifup.
Maybe try the old ethX naming, adding "net.ifnames=0 biosdevname=0" to the GRUB options?




BTW, I'm working on a new Proxmox network management (SDN feature) at the datacenter level (define once, and it's applied to all nodes).
It's not ready yet, but it should help a lot in your case (it supports vxlan, vlan stacking, ifupdown2 + dynamic reload, ...).
I think it should be available soon (2-3 months).
 
Hmm, isn't that the correct behaviour? ARP and other network traffic can only work within the same inner+outer vlan.

No it is not. The guest attached to the outer vlan does of course create several vlans itself (the inner ones). And inside these vlans only those arps work that are not used in the host bridge attached to the inner vlan. Believe me, I know a thing or two about networking, and I have already debugged an equivalent setup based on qemu and bridges years ago.
ARP is really completely broken in this use case. The ARP requests are not even reaching the guest with its self-created inner vlan interface. I thought at first it worked, but the arp tables were left over from the old setup with singular qemus and bridges on another host inside the same vlan. Since we did not change the MACs while transferring the guest to Proxmox, it did work. But today the ifupdown2 packages made all cluster nodes reboot during installation, and after that ARP was dead for this case. As soon as we detached the host bridge based on the double-tagged interface, everything worked again with ARP inside this very vlan.
 
I think the main problem with classic interface double tagging is that the bridge doesn't use vlans at all
(vlans are simply tagged/untagged when going through the physical tagged interface).
This is the main difference from a vlan-aware bridge. But a vlan-aware bridge can only tag one vlan, which is why you need to stack two vlan-aware bridges, one for the inner and one for the outer vlan.

But today the ifupdown2 packages made all cluster nodes reboot during installation, and after that ARP was dead for this case.
Do you mean that installing ifupdown2 on only one node made all the cluster nodes reboot??? (Maybe you use HA and corosync broke? If yes, I recommend you disable HA.)

No it is not. The guest attached to the outer vlan does of course create several vlans itself (the inner ones). And inside these vlans only those arps work that are not used in the host bridge attached to the inner vlan. Believe me, I know a thing or two about networking, and I have already debugged an equivalent setup based on qemu and bridges years ago.
Maybe I don't understand your setup correctly ;) Can you make a small example schema with bridges, vlans, tags, VM IPs and the communication flow (which VM needs to access which VMs, etc.)?
 
Do you mean that installing ifupdown2 on only one node made all the cluster nodes reboot??? (Maybe you use HA and corosync broke? If yes, I recommend you disable HA.)

No, sorry, that was probably too poor an explanation on my side. Of course I had to install ifupdown2 on every cluster node to be able to migrate the VM in question all over the cluster. And on every single installation the corresponding node rebooted right after apt-get install. In fact, since I ssh'd to the nodes, the session died right after ifenslave got deinstalled. Obviously ifupdown2 was correctly installed, as everything looked normal after the reboots.
And these reboots cleared all arp tables, and then it showed that the double-tagged interface(s)/bridges had problems finding connected hosts at all. Whereas the cluster-external equipment connected to the inner vlans (which are indeed single switch ports) all worked, except the one double-tagged and bridged on the host.
I'll try to make a drawing for you, but it will not be that easy.
 
Got an idea. Maybe everything is a lot more transparent if I describe a potential setup for this whole story.
Let's assume you have 5 boxes with one LAN port each.
Let's assume you have 2 switches.
Let's assume you have a Proxmox cluster.

Now connect the 5 boxes to switch 1 and tag each switch port with a vlan id, call them iv1-5.
Now connect switch 1's trunk port carrying the combined iv1-5 to a port on switch 2, and tag this port with another vlan, call it ov1.

Assume switch 2 has other vlans that it should combine with ov1 to one trunk port for the proxmox cluster.
The proxmox cluster contains a vm used as router. This router should be able to handle the lans from switch 1. So it needs access to iv1-5.
So far everything can work if the Proxmox host uses a tagged interface as bridge port (not vlan-aware) and the router vm gets this bridge to set up its own interfaces for iv1-5. This works.
First problem: it would be a lot easier to use a vlan-aware bridge and be able to strip ov1 on the router interface / vm host bridge. But that does not work.

Now let's assume a special case: on switch 1 you have 6 boxes, not 5, where 2 boxes are connected to the same vlan (say, simply two access ports configured for iv1). Still everything works.

Now go and virtualise one of these boxes from iv1 as vm on the proxmox cluster.
Now you _should_ be able to:
1) Have iv1 as non-tagged interface for the vm (you remember eth0.1008.25 from your example)
2) Have this vm talk to the physical box connected to switch 1 iv1

And exactly this does not work at all. As soon as this vm is up it cannot arp the router vm, and it cannot arp the box on iv1.
The router vm can arp all the other physical boxes (and they can arp back, of course).
The router vm cannot arp the vm/box and cannot arp the physical box on iv1, and of course they cannot arp back either. So nothing connected in any way to iv1 can see anything, whereas iv2-5 go on working as expected.

To me this setup looks trivial. But in terms of bridge/vlan/Proxmox it seems it is not.
The problem is: it should work with the described config, but try it yourself. It does not.
 
Hi,

I found some kind of workaround, not sure it will work for your setup:

Code:
auto vmbr0
iface vmbr0 inet manual
       bridge-ports eth0
       bridge-stp off
       bridge-fd 0
       bridge-vlan-aware yes
       bridge-vids 2-4000

auto vmbr1
iface vmbr1 inet manual
       bridge-ports vmbr0.25
       bridge-stp off
       bridge-fd 0
       bridge-vlan-aware yes
       bridge-vids 2-4000
       bridge-pvid 1008

with a vm1 on vmbr0, with tag=1008 on its interface,
and a vm2 on vmbr1, with tag=25 on its vm interface.


using ifupdown2.
(bridge-vids is mandatory with ifupdown2 to allow the vlans; it was set automatically to 2-4096 with ifupdown1.)

"bridge-pvid 1008" is a trick: it removes the tag=25 when a packet goes from vmbr1 to vmbr0, so you only see tag=1008 in vmbr0 (instead of a double tag), and the packet can reach vm1.
That works inside Proxmox, but if the packet needs to go outside through eth0, I think you'll only have the tag=1008.

Without this trick, the packet from vm1 is double tagged (1008,25) when going to vmbr0; then, when going to vm2, the tag 1008 is stripped and the packet in vm2 is still tagged 25 (or you need to handle tag=25 inside vm2 to accept the packet).
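For reference, the pvid behaviour can also be set or inspected at runtime with the bridge(8) tool from iproute2; a hedged sketch of roughly what ifupdown2 programs for bridge-pvid (needs root, and assumes vmbr1 is up with vlan filtering enabled):

```shell
# make vlan 1008 the PVID (and untagged on egress) on vmbr1's port vmbr0.25
bridge vlan add dev vmbr0.25 vid 1008 pvid untagged master
# inspect the per-port vlan table to verify
bridge vlan show
```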
 
Ok, I understand your idea. But since the tagging is needed outside the Proxmox cluster, this is not an option.
But I found a solution that you already mentioned elsewhere. There is only one additional bridge needed:

Code:
auto vmbr4
iface vmbr4 inet manual
       bridge-ports enp5s0f0.1007
       bridge-stp off
       bridge-fd 0
#Bridge HousingLAN

auto vmbr5
iface vmbr5 inet manual
       bridge-ports vmbr4.8
       bridge-stp off
       bridge-fd 0
#Bridge HousingVLAN8

There is no vlan-awareness, but it is not needed as long as the hosts connected to vmbr4 handle the vlans themselves.
The problem with the last idea was that the bridge-port enp5s0f0.1007.8 cannot broadcast into vmbr4 and is therefore more or less isolated.
But a bridge as port can do that - and it works! (And the name-length problem is gone implicitly :)
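To verify the stacked tags on the wire, a capture on the physical NIC should show both vlan headers; a sketch (needs root; interface name from our setup):

```shell
# print link-level headers; the filter matches outer tag 1007 with inner tag 8
# (each 'vlan' keyword shifts the filter offsets to the next encapsulated tag)
tcpdump -e -n -i enp5s0f0 'vlan 1007 and vlan 8'
```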

PS: Thanks for your input and suggestions
 
I think that with your solution the vlan 8 tag will be dropped when a packet goes from vmbr5 to vmbr4 (because vmbr4 is not vlan-aware),
so when the packet goes outside through enp5s0f0.1007, you'll only see the 1007 tag.
 
That's wrong. It _does_ work in every direction.
A packet from vmbr5 to vmbr4 is tagged because the bridge port of vmbr5 is named vmbr4.8, so on an outgoing packet the vlan id 8 will be tagged on.
The same goes one step deeper: as vmbr4 has enp5s0f0.1007 as bridge port, all packets going out there will be tagged with 1007 (and I can confirm it is _additionally_ tagged (stacked)). Believe me, this works. I now have a vlan of 3 boxes running, two inside the Proxmox cluster and one outside on switch 1 on an access port of vlan 8. They all see each other, and they see the router with its own configured vlan 8 hanging on bridge vmbr4.
 
I'm having a bit of trouble following this. I speak Cisco... any chance someone has done a writeup comparing these configs to what they would look like in a managed switch? I checked the 'vlan aware' box hoping the port would be treated as a trunk and I would be able to pass tagged data from the switch to the host machine's port.

If I've understood correctly, if I wanted to trunk ALL the vlans through from a switch to the host machine, I would have to do it individually for each vlan?
 
