LACP with vlan-aware

popey

Jul 24, 2020
The configuration below didn't work for me.

What I wanted to achieve was to set up the management IP on vmbr0, so I could access the GUI through 10.162.242.85. After the reboot, when I tried to ping the gateway I got "network unreachable".

Where is the problem?

Code:
auto lo
iface lo inet loopback

iface eno1 inet manual
iface eno2 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3
   
auto vmbr0
iface vmbr0 inet static
    address 10.162.242.85
    netmask 255.255.255.0
    gateway 10.162.242.254
    bridge-pvid 242
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
 
So, you want the default VLAN to be 242?
If yes, have you installed the ifupdown2 package? (The bridge-pvid syntax only works with ifupdown2.)
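
If it isn't installed yet, something along these lines should be enough (a rough sketch, assuming a standard Proxmox VE node; the ifreload command ships with ifupdown2):

Code:
apt update
apt install ifupdown2
ifreload -a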

Another way, without changing the default VLAN:


Code:
auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes

auto vmbr0.242
iface vmbr0.242 inet static
    address 10.162.242.85
    netmask 255.255.255.0
    gateway 10.162.242.254

This will keep the default VLAN at 1, but use VLAN 242 for your Proxmox IP.
 
I can't help with this way of networking, but bonding + VLANs work fine for me with Open vSwitch installed.

4x gigabit Ethernet ports bonded together, connected to two stacked Cisco switches, with LACP spanning different units in the stack and VLAN tagging configured.

After that, I can set any VLAN ID on any LXC or VM interface and it works as an untagged adapter connected to that VLAN. Or I can leave it empty and handle the VLAN tags inside the VM with the guest system. There is no point providing a trunk to containers, because PVE controls all the network configuration and it is almost impossible to build VLAN sub-interfaces inside them.

All of the above lets me stop thinking about the network config: it is the same on all nodes, so once configured, a VM or LXC works the same way on any node.

And finally, I use Open vSwitch just because three years ago the Debian networking stack for this use case simply did not work for me. Open vSwitch was up fast and easy.
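
For anyone wanting to try the same approach, here is a rough sketch of what such an Open vSwitch config in /etc/network/interfaces could look like (interface names are placeholders, the VLAN tag and addresses are reused from this thread, and it assumes the openvswitch-switch package is installed):

Code:
auto bond0
iface bond0 inet manual
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_bonds eno1 eno2 eno3 eno4
    ovs_options bond_mode=balance-tcp lacp=active
    # LACP bond spread across the stacked switches

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0 mgmt0

# Internal port carrying the Proxmox management IP on VLAN 242
auto mgmt0
iface mgmt0 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=242
    address 10.162.242.85
    netmask 255.255.255.0
    gateway 10.162.242.254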
 
