LAG for VLAN trunk not working??!

MrPete

Having a spare set of 1 GbE ports and a switch that supports LACP (GS748Tv5), I thought I'd bump the performance of my VLAN trunk.

What I have had fully working for over a year:

(switch) <---> (ProxMox Host enXXX physical) -> (Host vmbr2, VLAN-aware) -> pfSense VM

For a different purpose, I successfully created a bonded link between the switch and a NAS... so I'm confident the switch is working ok.

For this, I created bond0 with a couple of slaves, and pointed vmbr2 at the bond. It seemed like it ought to work... but it did not. The VM could not see anything through the trunk, and nothing could see the VM. I pulled one slave out of the bond and made it the bridge port of vmbr2 again, and all was well.

Any hints on diagnosing this?

Thanks!
 
Sorry for the delay. Real Life intruded ;)

Two cases: first, my almost-bonded trunk, working just fine; then the fully bonded trunk, which doesn't work. I'll post the first for now. I can't interrupt a huge backup process until tomorrow a.m...

Context Notes:
vmbr0 is my host access
vmbr1 is used in pfSense VM for WAN
vmbr3 is my corosync
vmbr2 is the interesting one: a trunked VLAN. It's VLAN-aware, so some VMs can access a variety of VLANs using a VLAN tag in their config. The host doesn't need to, however.

ALMOST BONDED

/etc/network/interfaces
Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto ens8191f0
iface ens8191f0 inet manual

auto ens8191f1
iface ens8191f1 inet manual

auto ens8191f2
iface ens8191f2 inet manual

auto ens8191f3
iface ens8191f3 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves ens8191f2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
#LAN LAG

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.236/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
#pve1a host

auto vmbr1
iface vmbr1 inet manual
        bridge-ports ens8191f3
        bridge-stp off
        bridge-fd 0
#WAN

auto vmbr2
iface vmbr2 inet manual
        bridge-ports ens8191f1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#LAN LAG

auto vmbr3
iface vmbr3 inet static
        address 10.77.77.3/24
        bridge-ports ens8191f0
        bridge-stp off
        bridge-fd 0
#cnode 3 (PVE cluster)
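
For reference, the fully bonded variant (the one that fails) should differ only in these two stanzas, with ens8191f1 pulled out of vmbr2 and into the bond. This is a sketch from memory; I'll post the real file once the backup finishes:

Code:
auto bond0
iface bond0 inet manual
        bond-slaves ens8191f1 ens8191f2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
#LAN LAG

auto vmbr2
iface vmbr2 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#LAN LAG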

/proc/net/bonding/bond0
Code:
Ethernet Channel Bonding Driver: v6.5.13-1-pve

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

802.3ad info
LACP active: on
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: a0:36:9f:04:ae:96
Active Aggregator Info:
        Aggregator ID: 1
        Number of ports: 1
        Actor Key: 9
        Partner Key: 53
        Partner Mac Address: 10:da:43:f7:e6:65

Slave Interface: ens8191f2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: a0:36:9f:04:ae:96
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: a0:36:9f:04:ae:96
    port key: 9
    port priority: 255
    port number: 1
    port state: 61
details partner lacp pdu:
    system priority: 32768
    system mac address: 10:da:43:f7:e6:65
    oper key: 53
    port priority: 128
    port number: 11
    port state: 61
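
To compare both ends of the LAG without wading through the whole status file, I grep out just the negotiated fields. A sketch using an embedded sample of the output above so it runs anywhere; on a live host, point grep at /proc/net/bonding/bond0 instead:

```shell
# Extract only the LACP fields worth comparing between host and switch.
# The heredoc is a sample of the real output; on the host, use:
#   grep -E 'MII Status|Aggregator ID|Partner Key|Partner Mac' /proc/net/bonding/bond0
summary=$(grep -E 'MII Status|Aggregator ID|Partner Key|Partner Mac' <<'EOF'
MII Status: up
Active Aggregator Info:
        Aggregator ID: 1
        Actor Key: 9
        Partner Key: 53
        Partner Mac Address: 10:da:43:f7:e6:65
Slave Interface: ens8191f2
MII Status: up
EOF
)
echo "$summary"
```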
 
Hmmm...
  • Proxmox AND the switch think the LAG is working fine in every case
  • However, data is not being passed in both directions (I see ARP requests from the switch side of the trunk at the host, but that's about it)
  • Even w/o the bond I think I have things to learn:
    • vmbr2 is defined as VLAN-aware; the trunk is working great (among other things, 300kB/sec of video is on two VLANs...)
    • yet the host itself sees almost no traffic at all; a little more shows up when monitoring the hardware NIC than the (vmbr2) bridge
  • When I build the bond, nothing works at all
    • At the host I see ARP requests from the switch, and even inside the VM, but nothing in the other direction
I am wondering if MAC address weirdness could be causing my issues. I see quite a lot of Q&A and bug report traffic in that arena.

My current questions:
  • Is there a way to see how the (kernel?) has configured the connection between vmbrX and the virtio NIC on a specific VM?
  • If the network config changes at the host level, how much resetting/rebooting/reconfiguring is needed at the VM level?
  • The following documents MAC changes I observe as I go through reconfiguring the bond and vmbr2...

ALL of the following show only the last MAC octet. I don't know enough to discern which of these are OK and which are busted.
In EVERY case, all is well as long as vmbr2 doesn't point to the bond.


BOND running, having added second NIC

(NOTE the MAC mismatch for bond0 and f2!)

/proc/net/bonding/bond0
bond0...
system mac addr ...96

slave f2
perm hw addr ...96
system mac addr ...96

slave f1
perm hw addr ...95
system mac addr ...96

ip a
...f1 ...:95
...f2 ...:95
bond0 ...:95
vmbr2 ...:95

F1 REMOVED from bond0 (vmbr2 -> F1)

/proc/net/bonding/bond0
bond0...
system mac addr ...96

slave f2
perm hw addr ...96
system mac addr ...96

ip a
...f1 ...:95
...f2 ...:96
bond0 ...:96
vmbr2 ...:95

Swap so F2 REMOVED from bond0 (vmbr2 -> F2)
(MAC mismatch for bond0)

/proc/net/bonding/bond0
bond0...
system mac addr ...96

slave f1
perm hw addr ...95
system mac addr ...96

ip a
...f1 ...:95
...f2 ...:96
bond0 ...:95
vmbr2 ...:96

REBUILD BOND0 from SCRATCH with F1 (vmbr2 -> F2)

/proc/net/bonding/bond0
bond0...
system mac addr ...95

slave f1
perm hw addr ...95
system mac addr ...95

ip a
...f1 ...:95
...f2 ...:96
bond0 ...:95
vmbr2 ...:96

ADD F2 to BOND0 (vmbr2 -> bond0)

/proc/net/bonding/bond0
bond0...
system mac addr ...95

slave f1
perm hw addr ...95
system mac addr ...95

slave f2
perm hw addr ...96
system mac addr ...95

ip a
...f1 ...:95
...f2 ...:96
bond0 ...:95
vmbr2 ...:95
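
For anyone repeating this, here is how I'm snapshotting those MACs each time. A sketch; it assumes iproute2's ip and is guarded so it degrades gracefully on a machine without these interfaces:

```shell
# One brief line per interface: name, state, MAC (ip -br = brief output).
report=$(for dev in ens8191f1 ens8191f2 bond0 vmbr2; do
    ip -br link show "$dev" 2>/dev/null || echo "$dev: not present here"
done)
echo "$report"
```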
 
Without reading much into all the data, I think you are chasing red herrings with mac addresses:
https://wiki.linuxfoundation.org/ne...oes_a_bonding_device_get_its_mac_address_from

Could be! THANK YOU for that link. Even though much of what is on that page does not apply to ProxMox (we use ifupdown2, for one, and it says nothing about /etc/network/interfaces, and so on)... much of it DOES apply.

I'm going to simplify. Apparently, having an embedded VLAN trunk that is not used by the host is a somewhat unusual configuration. Now I'm thinking I will:
  • Change to hash policy layer2 for now (they describe some limiting factors on the higher level policies)
  • Craft a tagged VLAN port on the host just to ensure all is properly set up. Good for debugging too ;)
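
On the hash-policy point: per the kernel bonding documentation, the layer2 policy hashes only the MAC addresses, so a single source/destination MAC pair always rides the same physical link. A simplified sketch of the arithmetic, with illustrative octet values:

```shell
# layer2 xmit hash (simplified from the kernel bonding docs):
# (last byte of src MAC XOR last byte of dst MAC) modulo slave count.
src_mac_byte=0x95   # illustrative source MAC, last octet
dst_mac_byte=0x65   # illustrative destination MAC, last octet
slaves=2
slot=$(( (src_mac_byte ^ dst_mac_byte) % slaves ))
echo "this MAC pair always uses slave $slot"
```

So with layer2 hashing, one busy flow between two hosts never exceeds a single 1 GbE link; the bond only balances across multiple MAC pairs.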
 
Even though much of what is on that page does not apply to ProxMox (we use ifupdown2 for one, and it has nothing about /etc/network/interfaces, and...)
You should treat Proxmox as a Debian-based appliance. The networking is standard Linux stuff. While there are flavor and version variations in configuration files and in which daemon manages them, those differences usually stop before kernel-level behavior.

And yes, it's a good idea to simplify your configuration to ensure that the basics work. It's unlikely that your configuration is so unique that you uncovered a kernel bug in the bonding driver.


 
(UPDATE: actually, I had a configuration error. ProxMox can largely work around my error if I tell it about my PVID... but not completely. I have edited this reply to reflect reality.)

I FOUND IT!!!

Got remote SSH Wireshark running and stared for a while. I finally realized: when the bridge points at a bond instead of a simple ethernet, the client VM was receiving ARP requests on various VLANs over the trunk, but NONE of them had VLAN (802.1Q) tags. (That's because I forgot to turn on VLAN tagging for the LAG specifically in the switch. Sigh.)

I consider this a bug, probably in Linux but who knows: in the context of a VLAN trunk managed by a VM client, ProxMox treats a bridged ethernet and a bridged bond differently.
"Simple" repeatable test:

  • Set up bond0 with a single LACP ethernet link connected to an LACP-configured switch (I'm using slow, layer2 hash, but that doesn't matter.)
  • Set up vmbrX as a VLAN-aware bridge containing a single ethernet link (it can be the other side of the LACP pair at the switch end; that doesn't matter, since it isn't LACP on the ProxMox side.)
  • Configure a VM client with vmbrX as VLAN trunk to the switch. (I'm using pfSense.)
That's the initial setup.
1) In the client, capture on vmbrX (the whole trunk) with tcpdump, filtering for ARP packets. You need tcpdump -nveli vmbrX vlan and arp because tcpdump is weird about vlan :) -- this works fine.
2) Now, change vmbrX to point to bond0.

  • The above capture won't work
  • tcpdump -nvei vmbrX arp does work. There are no VLAN tags in any packets from any VLAN.
My test was more stringent: live Wireshark over SSH to the client, with a live change from the working to the nonworking bridge. Packets suddenly have no VLAN tags anymore.
BUG: when setting up a bridged-bond VLAN trunk fully managed by a VM client, ProxMox can add VLAN tags to incoming data given some info about the PVID (bridge-pvid NN)... but that's not enough to fix all situations. Specifically, when a client device requests an IP via DHCP, there are no clues about its VLAN if the frame is not tagged!

Code:
iface vmbr2 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        bridge-pvid 71
  • Must be VLAN-aware (available in GUI)
  • Must have a range of VIDs (the default is fine)
  • Must have a PVID defined -- IF you will be sending/receiving data on that VLAN from ProxMox or from a VM that doesn't know about VLANs.
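
To confirm the VLAN table actually took effect after reloading the config, iproute2's bridge tool shows the per-port VID range and PVID. A sketch, guarded so it exits cleanly on a machine without bond0:

```shell
# Show the VLAN table for the bond under the bridge; with the config
# above, the output should list vlan 71 flagged as PVID (the untagged
# ingress default) alongside the 2-4094 range.
vlans=$(bridge -d vlan show dev bond0 2>/dev/null || echo "bond0 not present here")
echo "$vlans"
```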
(I also learned: Netgear 'smart' managed switches only support layer 2 hashing for LACP.)

Bottom Line
  • Wireshark is your friend; so is remote live Wireshark over SSH.
  • Be careful to properly define VLANs and tags everywhere (switch, host)
  • Be careful with the hash algorithm: not all switches support all of them
  • If doing a bonded VLAN trunk, I don't think you can also attach a specific tagged VLAN directly to the bond, at least in ProxMox 8. I only have that working via a VLAN attached to the vmbrN.
  • Oh, and when messing with the hardware, remember your interface names can easily change on you. :(
 
