How to: bond+bridge+vlan in Proxmox 4.x and share a VLAN with your VM guests

x307

New Member
May 22, 2016
This configuration (perhaps you'll call it a work-around) took me a while to sort out, so hopefully it will save you some time.

The problem:
When you bond two interfaces and then put a "vmbr" (for example vmbr0) over them, you'll find that the moment a VM starts with the same VLAN tag as your Proxmox machine, bad things will happen: the VM will not start, or your Proxmox machine will lose its connection. This is because Linux bridges don't play nice with VLANs by default.

Newer versions of the Linux kernel support a "bridge-vlan-aware yes" option, which allows the bridge to pass VLANs properly. This means you can have one vmbr0 bridge for all your VMs to share, and still be able to specify the VLAN tag in the Proxmox GUI. In my experience this works great until a VM is started using the same tag as your Proxmox machine (in my case that is VLAN 5), as described above. The workaround I've come up with is to specify a sub-interface with a high offset, such as "vmbr0.5:256". In case it's not obvious to some, this translates to "use vmbr0, on VLAN 5, on the sub-interface identified by 256".
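In classic ifupdown/net-tools notation, the dot introduces the 802.1Q tag and the colon introduces an interface alias (a label for a secondary address). A throwaway shell sketch of how the name decomposes, purely for illustration (the name itself comes from the config below):

```shell
# Decompose an ifupdown interface name like "vmbr0.5:256":
name="vmbr0.5:256"
alias_idx="${name##*:}"   # "256"     - alias index after the colon
dev_vlan="${name%%:*}"    # "vmbr0.5" - device plus VLAN tag
vlan="${dev_vlan##*.}"    # "5"       - the 802.1Q tag
dev="${dev_vlan%%.*}"     # "vmbr0"   - the underlying bridge
echo "$dev / VLAN $vlan / alias $alias_idx"   # -> vmbr0 / VLAN 5 / alias 256
```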

Note: sharing a VLAN between Proxmox and your VM guests may not be the best idea for everyone. If you can have your hypervisors on their own VLAN, then you won't encounter the issue I have described here.

Hopefully someone will find this useful! If you do, please let me know :D


root@kvm1:~# cat /etc/network/interfaces

# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

iface eth2 inet manual

iface eth3 inet manual

iface eth91 inet manual

iface eth92 inet manual

auto bond0
iface bond0 inet manual
        slaves eth0 eth1 eth2 eth3
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet manual
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
        bridge-vlan-aware yes

auto vmbr0.5:256
iface vmbr0.5:256 inet static
        address 10.1.5.101
        netmask 255.255.255.0
        gateway 10.1.5.1
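Once the config is in place (after "ifreload -a" or a reboot), you can sanity-check that the bond and the VLAN-aware bridge actually came up. A sketch using standard procfs/sysfs paths; bond0 and vmbr0 are the names from the config above, so adjust them if yours differ:

```shell
# Check LACP state of the bond and VLAN filtering on the bridge.
# These files only exist on a host where bond0 / vmbr0 are up.
for f in /proc/net/bonding/bond0 /sys/class/net/vmbr0/bridge/vlan_filtering; do
    if [ -r "$f" ]; then
        echo "== $f =="
        cat "$f"
    else
        echo "not found: $f"
    fi
done
# "bridge vlan show" (iproute2) also lists which VLANs each port will pass.
```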
 

x307

Hi. Maybe I've got the same problem ... but can you describe what you mean by
"until a VM is started using the same tag as your proxmox machine"
What I mean is: in a scenario where your Proxmox node uses the same VLAN tag as a VM, once the VM boots up it seems to "steal" the VLAN from the Proxmox host. Using the method above with the predefined sub-interface (vmbr0.5:256), you can avoid this.
 

valeech

Member
May 4, 2016
Matt,

Thank you so much for this post! I ran into this issue recently and this was exactly the fix I needed.
 

42n4

New Member
Jun 28, 2016
Yes, this is a really good tip for VLANs on VLAN-aware bridges; there is no documentation about it in the Proxmox wiki. Thank you very much!
It is also a very good setup for nested Proxmox machines (Proxmox VMs inside Proxmox) running Ceph monitors, since they sit on the same VLAN.
 

gdi2k

Member
Aug 13, 2016
I'm also trying to get this working, but I can't get the Proxmox host talking to the VMs on the VLAN (the VMs can talk to each other on the same VLAN just fine though).

In my case, I want to have the proxmox server on a Management VLAN with VLAN ID 6. On the server I have this:
Code:
auto lo
iface lo inet loopback

iface enp4s0 inet manual

iface enp3s0 inet manual

iface enp1s0 inet manual

iface enp1s0d1 inet manual

auto bond0
iface bond0 inet manual
        slaves enp1s0 enp1s0d1
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2

auto vmbr0
iface vmbr0 inet static
        address  10.32.0.4
        netmask  255.255.240.0
        gateway  10.32.0.20
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
        bridge_vlan_aware yes


auto vmbr0.6:256
iface vmbr0.6:256 inet static
        address 10.32.24.10
        netmask 255.255.248.0
        gateway 10.32.24.20
On the Guests I have:

Code:
# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

auto ens18
iface ens18 inet static
address  10.32.0.70
netmask  255.255.240.0
gateway  10.32.0.20


auto ens18.6
iface ens18.6 inet static
address  10.32.24.70
netmask  255.255.240.0
gateway  10.32.24.20
Not sure where I'm going wrong. Any ideas?
 

gdi2k

bond_xmit_hash_policy just sets how the slave device is selected in a LACP link as far as I know. I changed it to layer2+3 as in the OP's first post just in case, but it makes no difference.
 

x307

Are you using Proxmox 4 or 5?

Also, are you specifying the VLAN tag in the Proxmox GUI for each guest? If you are, then you should not have the ".6" on the end of each interface on the guests.
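For example, with the tag set on the VM's virtual NIC in the GUI, the guest's /etc/network/interfaces only needs the plain interface; the bridge adds and strips the 802.1Q tag, so the guest sees untagged traffic. A minimal sketch, assuming the guest should live only on the VLAN 6 subnet (addresses from gdi2k's post above; the netmask here is changed to match the host's 255.255.248.0 for that subnet):

```
auto lo
iface lo inet loopback

# No ens18.6 sub-interface: the bridge handles the tagging.
auto ens18
iface ens18 inet static
        address 10.32.24.70
        netmask 255.255.248.0
        gateway 10.32.24.20
```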
 

gdi2k

This is on Proxmox 5.

I wasn't specifying the VLAN tag on the Proxmox GUI for the VMs, but I have just tried that too (after disabling VLAN stuff on the VM). The result is the same - I can ping other VMs on VLAN 6, but not the Proxmox host (and vice versa).
 

meichthys

New Member
Sep 25, 2019
@x307 Thank you so much! I was banging my head against this issue for a few weeks. This should definitely be better documented, but for now I've saved this thread in the Wayback Machine for future reference!
 
