LACP with VLAN 802.1Q

popey

New Member
Jul 24, 2020
My Cisco switch has two Ethernet ports configured as a Port-Channel with VLAN trunking.

I want to use VLAN 1 for the management network (also for the Proxmox GUI), VLAN 2 for Windows, and VLAN 3 for Linux. Is my configuration correct?

Code:
iface eno1 inet manual
iface eno2 inet manual

auto bond0
iface bond0 inet manual
      slaves eno1 eno2
      bond-miimon 100
      bond-mode 802.3ad
      bond-xmit-hash-policy layer2+3

auto vmbr0.1
iface vmbr0.1 inet static
        address  10.10.10.2
        netmask  255.255.255.0
        gateway  10.10.10.1

auto vmbr0.2
iface vmbr0.2 inet static

auto vmbr0.3
iface vmbr0.3 inet static

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
 
Here's my actual running config, using bonding with 802.3ad, an 802.1Q VLAN 5 interface, and jumbo frames. I had to apt install ifenslave, but that may not be necessary now.

Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual
        mtu 9000

auto enxd03745bf33e4
iface enxd03745bf33e4 inet manual
        mtu 9000

auto enxd03745bf8620
iface enxd03745bf8620 inet manual
        mtu 9000

auto bond0
iface bond0 inet manual
        bond-slaves eno1 enxd03745bf33e4 enxd03745bf8620
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.0.1.11/16
        gateway 10.0.0.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        mtu 9000

auto vlan5
iface vlan5 inet static
        address 10.10.10.11/24
        mtu 9000
        vlan-raw-device bond0


And this is the result:

# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master bond0 state UP group default qlen 1000
link/ether 00:23:24:94:9e:6c brd ff:ff:ff:ff:ff:ff
3: enxd03745bf33e4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master bond0 state UP group default qlen 1000
link/ether 00:23:24:94:9e:6c brd ff:ff:ff:ff:ff:ff
4: enxd03745bf8620: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast master bond0 state UP group default qlen 1000
link/ether 00:23:24:94:9e:6c brd ff:ff:ff:ff:ff:ff
5: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether 00:23:24:94:9e:6c brd ff:ff:ff:ff:ff:ff
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
link/ether 00:23:24:94:9e:6c brd ff:ff:ff:ff:ff:ff
inet 10.0.1.11/16 brd 10.0.255.255 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::223:24ff:fe94:9e6c/64 scope link
valid_lft forever preferred_lft forever
7: vlan5@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
link/ether 00:23:24:94:9e:6c brd ff:ff:ff:ff:ff:ff
inet 10.10.10.11/24 brd 10.10.10.255 scope global vlan5
valid_lft forever preferred_lft forever
inet6 fe80::223:24ff:fe94:9e6c/64 scope link
valid_lft forever preferred_lft forever
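
If you want to confirm the LACP negotiation and the tagged interface yourself, something along these lines should work (the exact output varies with kernel and NIC drivers):

Code:
# bond mode, LACP partner details and per-slave state
cat /proc/net/bonding/bond0

# 802.1Q details of the VLAN interface (should report "vlan protocol 802.1Q id 5")
ip -d link show vlan5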
 
I should also mention that the configuration below used to work but doesn't anymore, and I don't have the faintest clue why. This morning my cluster nodes weren't seeing each other, and the VLAN interface shown above is what fixed it. The only hint I noticed was that in the GUI the example shown for the VLAN parameters changed from interfaceXY to interfaceX.1, so I'm guessing there was an update, but I'm not sure.

Code:
...

auto bond0
iface bond0 inet manual
        bond-slaves eno1 enxd03745bf33e4 enxd03745bf8620
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto bond0.5
iface bond0.5 inet static
        address 10.10.10.11/24
        mtu 9000
        vlan-id 5

auto vmbr0
iface vmbr0 inet static
        address 10.0.1.11/16
        gateway 10.0.0.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        mtu 9000
 
As I understand it, this would be your management IP address for the Proxmox GUI, right?

Code:
address 10.0.1.11/16

What about this one?

Code:
address 10.10.10.11/24
 
The 10.10.10.0/24 subnet is used for cluster network traffic between Proxmox nodes. It's what I do to isolate it from front-end traffic on the 10.0.0.0/16 subnet.
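
For context, the cluster (corosync) traffic ends up on that subnet because of the node address in corosync.conf; a rough sketch of the relevant entry (node name and id are placeholders):

Code:
# /etc/pve/corosync.conf (excerpt)
nodelist {
  node {
    name: pve1                # placeholder node name
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.11   # keeps cluster traffic on 10.10.10.0/24
  }
}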

The Proxmox GUI is front-facing, so I use https://10.0.1.11:8006 to access it.

I have also used additional VLANs within Proxmox to create isolation of traffic on the front end as well. For example, I have a firewall VM from Sophos that has 2 vNICs in it. One is set up for the regular front-end bridge network (10.0.0.1) and the other vNIC is set up for VLAN 10. I also added my cable modem to VLAN 10 so I could route all my internet traffic through the firewall VM. This way, I get a firewall that has Proxmox HA capabilities with online migration and such, while still isolating the traffic as you would expect. Good stuff. And SOOO much easier than ESXi vswitch/dswitch/whatever crazy town.
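
As a rough sketch, attaching those two vNICs from the CLI looks like this (VM id 100 and the virtio model are just placeholders; the GUI does the same thing):

Code:
# first vNIC on the plain front-end bridge
qm set 100 --net0 virtio,bridge=vmbr0
# second vNIC on the same bridge, tagged with VLAN 10
qm set 100 --net1 virtio,bridge=vmbr0,tag=10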
 
I found it was the installation of ifupdown2 that caused grief. I only installed it since the GUI suggested it is needed to apply networking configurations. After apt remove ifupdown2 and verifying that ifenslave is already installed, the VLAN directives work.
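
The steps described above boil down to roughly this (a reboot instead of the restart also works):

Code:
apt remove ifupdown2          # removing this made the vlan directives work again
apt install ifenslave         # already present here, shown for completeness
systemctl restart networking  # re-read /etc/network/interfaces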

I believe this is what you were looking for:
Code:
iface eno1 inet manual
iface eno2 inet manual

auto bond0
iface bond0 inet manual
      bond-slaves eno1 eno2
      bond-miimon 100
      bond-mode 802.3ad
      bond-xmit-hash-policy layer2+3

# General Traffic with VLAN 1 untagged (typically)
auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
# or
# explicit VLAN 1
#auto vlan1
#iface vlan1 inet static
#        address 10.10.10.2/24
#        gateway 10.10.10.1
#        vlan-raw-device bond0

auto vlan2
iface vlan2 inet static
        address {IP/CIDR for vlan2}
        vlan-raw-device bond0

auto vlan3
iface vlan3 inet static
        address {IP/CIDR for vlan3}
        vlan-raw-device bond0
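
Once the interfaces are up, a quick sanity check that each VLAN sub-interface was created on top of the bond (names as above):

Code:
# each should report "vlan protocol 802.1Q id <n>" with bond0 as parent (e.g. vlan2@bond0)
ip -d link show vlan2
ip -d link show vlan3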

Hope that helps and Good Luck!
 
Code:
...


auto bond0.5
iface bond0.5 inet static
        address 10.10.10.11/24
        mtu 9000
        vlan-id 5
Hi,

I think there has recently been a bug in the GUI: we shouldn't have "vlan-id 5" when the interface is already named "bond0.5".
I need to check, but I think it will apply a double VLAN tag, i.e. "bond0.5.5".
vlan-id is ifupdown2 syntax, so rolling back to ifupdown1 could fix it, since ifupdown1 doesn't parse it.
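
To make that concrete, here is a sketch of the combination being questioned next to the two forms that appear elsewhere in this thread:

Code:
# Suspected problem: the name already implies VLAN 5 *and* vlan-id is given,
# which ifupdown2 may turn into a double tag (bond0.5.5)
#auto bond0.5
#iface bond0.5 inet static
#        address 10.10.10.11/24
#        vlan-id 5

# Either rely on the dotted name alone ...
auto bond0.5
iface bond0.5 inet static
        address 10.10.10.11/24

# ... or use a free-form name with vlan-raw-device, as in the working config above
auto vlan5
iface vlan5 inet static
        address 10.10.10.11/24
        vlan-raw-device bond0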
 
