Multiple VLANs on Bond Bridge

linuxfreak

Hi everyone,

I'm currently getting familiar with Proxmox and want to replace my old ESXi host with it.
So far the migration has been going really well. There's just one small thing I'm having trouble with.

My network looks like this:

I have a bond0 across two NICs, with my Linux bridges and their VLAN IDs attached to it.
That works great.
But when I try to pass all VLANs on "vmbr0" over the bond to my firewall, they don't arrive there.
What am I doing wrong?

Bash:
auto lo
iface lo inet loopback

auto ens1f0
iface ens1f0 inet manual

auto ens1f1
iface ens1f1 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves ens1f0 ens1f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

auto vmbr200
iface vmbr200 inet manual
        bridge-ports bond0.200
        bridge-stp off
        bridge-fd 0

auto vmbr100
iface vmbr100 inet static
        address 192.168.10.21/24
        gateway 192.168.10.1
        bridge-ports bond0.100
        bridge-stp off
        bridge-fd 0
 
Did you also tick the "VLAN aware" checkbox on vmbr0, so that the Linux bridge can handle tagged traffic?
 
Switch to Open vSwitch; with the "traditional" Linux bridges you can't pass all VLANs through...
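
A minimal sketch of what such an OVS setup could look like, assuming an LACP bond over the two NICs from the config above (requires the openvswitch-switch package; the bond mode and options are assumptions, adjust them to your switch):
Code:
auto ens1f0
iface ens1f0 inet manual

auto ens1f1
iface ens1f1 inet manual

# OVS bond over both NICs, attached to the OVS bridge
auto bond0
iface bond0 inet manual
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_bonds ens1f0 ens1f1
        ovs_options bond_mode=balance-tcp lacp=active

# OVS bridge; a VM NIC attached without a tag sees all VLANs as a trunk
auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0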
 
Thanks, with OVS the firewall now gets all VLANs over a single bridge.
By "mixing", do you mean that I shouldn't attach a Linux bridge to this OVS bridge now?
 
Exactly. It does work in principle, but it isn't recommended. I have to admit I don't know exactly why, either.
Instead of assigning a VM to a bridge with a specific VLAN tag, you enter the VLAN tag directly in the VM's network configuration.
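
On the CLI that could look like the following (VM ID 100 and tag 100 are just placeholders; the same thing can be done via the "VLAN Tag" field of the VM's network device in the GUI, and note that rewriting net0 without the existing MAC address generates a new one):
Code:
qm set 100 --net0 virtio,bridge=vmbr0,tag=100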
 
Hi,
what is the recommendation now for multiple VLANs on a single bridge?
The OVS documentation says:
"[...] supports multiple vlans on a single bridge."

As I understand it, though, this config should also work without OVS:
Code:
auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.11/24
        gateway 192.168.0.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

THX
 
In principle you're right... with that you can then, for example, create a bridge "vmbr100" that contains "vmbr0.100" as its interface. The dot signals that what follows is used as the VLAN ID, and all hosts that use vmbr100 are then in your VLAN 100.
However, the bridge's VLANs are not passed through to VMs: if you connect a VM to vmbr0, you only get the untagged VLAN of vmbr0, not the tagged VLANs of the bridge.
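
A minimal sketch of such a vmbr100 on top of the VLAN-aware vmbr0 (no address needed unless the host itself should be reachable in VLAN 100; VLAN 100 is already allowed on vmbr0 via bridge-vids 2-4094):
Code:
auto vmbr100
iface vmbr100 inet manual
        bridge-ports vmbr0.100
        bridge-stp off
        bridge-fd 0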
 
"However, the bridge's VLANs are not passed through to VMs: if you connect a VM to vmbr0, you only get the untagged VLAN of vmbr0, not the tagged VLANs of the bridge."
I don't quite understand that.
If I connect a VM to the bridge vmbr0, then that VM's network is the same one, i.e. 192.168.0.0/24.

The additional bridge for a dedicated VLAN 100 would then be:
Code:
auto vmbr100
iface vmbr100 inet static
        address 192.168.100.21/24
        gateway 192.168.100.1
        bridge-ports bond0.100
        bridge-stp off
        bridge-fd 0

Where is the limitation here?
 
With PVE Linux bridges you have two scenarios that I will outline below.

1. VLAN-aware bridge - You don't tag in the VM configuration inside of PVE, but apply the tag inside the guest OS networking (see the sketch after this list).
2. Standard bridge - You tag in the VM configuration inside of PVE. This is the same way you can with OVS.
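
To illustrate scenario 1, tagging inside the guest could look roughly like this on a Debian-based VM (the interface name ens18 and the addresses are assumptions; 802.1Q VLAN support must be available in the guest):
Code:
# inside the guest, not on the PVE host
auto ens18.100
iface ens18.100 inet static
        address 192.168.100.50/24
        gateway 192.168.100.1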

I came from OVS to Linux bridging, and the VLAN-aware bridge bit me: I couldn't figure out why my tagged NIC inside of PVE wasn't working. I had to disable the VLAN-aware bridge, and then all traffic started working properly.

Bridge 0 is for management, and I don't really need the VLAN-aware bridge there; it was just left over, and since I don't run any VLANs on the standalone management interface it doesn't matter.

Bond0 and bridge 1 are where all the VMs live.

Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

iface eno4 inet manual

auto enp4s0
iface enp4s0 inet manual
mtu 9000
post-up ifconfig enp4s0 mtu 9000

auto enp4s0d1
iface enp4s0d1 inet manual
mtu 9000
post-up ifconfig enp4s0d1 mtu 9000

auto bond0
iface bond0 inet manual
bond-slaves enp4s0 enp4s0d1
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer3+4
mtu 9000

auto vmbr0
iface vmbr0 inet static
address 10.X.X.21/24
gateway 10.X.X.1
bridge-ports eno1
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094

auto vmbr1
iface vmbr1 inet manual
bridge-ports bond0
bridge-stp off
bridge-fd 0
mtu 9000

auto nfsv3000
iface nfsv3000 inet static
address 10.X.X.4/24
mtu 9000
vlan-id 3000
vlan-raw-device bond0

auto cephv3100
iface cephv3100 inet static
address 10.X.X.50/24
mtu 9000
vlan-id 3100
vlan-raw-device bond0

auto prxv4000
iface prxv4000 inet static
address 10.X.X.4/24
mtu 9000
vlan-id 4000
vlan-raw-device bond0
 
Thanks for your reply.
However, I'm not sure if there's an inconsistency.
You say that you "[...] had to disable the VLAN-aware bridge and all traffic started working properly", but according to the config vmbr0 is VLAN-aware.
Or is this your management network connection?

Why did you migrate from OVS to Linux bridging?
 
vmbr0 is my management interface, and no VM is assigned to it. vmbr1 is where all the VMs attach.

I never had any issue with OVS, it performed fine, but one day I got a wild hair and decided: why have an extra level of complication in my setup? So I removed it.

But this is your configuration for vmbr0, including the VLAN setup:
Code:
auto vmbr0
iface vmbr0 inet static
address 10.X.X.21/24
gateway 10.X.X.1
bridge-ports eno1
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 2-4094

I assume your management network is configured on vmbr1.
That bridge includes a bond, which would make sense as redundancy for the management network.

I'm just not sure whether it makes sense to have network redundancy, implemented with an 802.3ad LACP bond, for the PVE guest network if you run critical services there, e.g. DNS (pihole), router+firewall (pfSense or OPNsense), etc.

What is your opinion on this?
 
I think that is only useful for commercial use where downtime really matters. If you only have one managed switch and one PVE node, it is far more likely that the whole server or the switch goes down, as they are much more complex (hardware and software) than a simple NIC. And in that case a bonded NIC won't help you.
Personally I'm not using bonds, and I only have a single switch at home. If something causes trouble it is most likely one of the home servers, so I'm running the critical VMs (pihole, OPNsense in HA mode, PBS) on two servers. If one NIC or a complete server goes down, the other server takes its place within a second, so at least the network and internet keep working... or at least as long as my single switch is working too.
 
Right... the setup depends on this question:
What is the single point of failure?

In my case it's the network components, i.e.:
- router
- (managed) switch

But if there are enough ports available on the switch and the server (NICs), it could make sense to configure a bond with two cables.
That way I would get network cable redundancy (and it won't harm the setup).

So, if running an HA setup for critical VMs like pihole and OPNsense, and in addition two PVE nodes (cluster), it could make sense to use a bond for the management and Corosync network.
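
A minimal sketch of such a two-cable bond for a management/Corosync bridge, assuming active-backup mode so that it also works without LACP on the switch (interface names and addresses are placeholders):
Code:
auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

# active-backup: no switch-side LACP configuration required
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode active-backup
        bond-primary eno1

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.10/24
        gateway 192.168.0.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0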

What is your opinion on this?

THX
 
Yes, but if you don't have pets or small children it is very unlikely that you unplug or break a cable by accident, because they all have some kind of retention mechanism (though the SFP+ one is very annoying if you have cats... the cable gets pulled out when they tug on the loop, which looks like a great cat toy ;)).

But a LACP bond would be very useful for more bandwidth, or if you have two managed switches, so that each port of a dual NIC can be plugged into a different switch and the switch is no longer a single point of failure.
 
Right... the throughput can be increased with LACP.
Considering this, it would make sense to define a bond for:
- PVE guest network
- PVE migration network
 