Odd "bridge link show" output with Bonding and VLANs

sstreet
Jul 31, 2024
First, the network does function, though eno1 carries significantly more traffic than eno2, even though the two are LACP-bonded with a layer2+3 hash policy, which the switch confirms as well. So I was trying to see whether I was just unlucky with the hash or whether a real problem exists.
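
For context, my rough understanding of the layer2+3 policy (going by the kernel bonding documentation, not the driver source) is that the outgoing slave is picked from an XOR of MAC and IP addresses, so with only a handful of distinct address pairs an uneven split would not be surprising. A toy calculation with arbitrary example values:

Code:
# Toy sketch of the documented layer2+3 transmit hash (example addresses,
# not meant to match my setup). With two slaves only the final parity
# matters, so a small number of flows can easily all land on the same NIC.
src_mac_last=0x04   # last octet of source MAC (example)
dst_mac_last=0x1a   # last octet of destination MAC (example)
ptype=0x0800        # IPv4 ethertype
src_ip=$(( (192<<24) | (168<<16) | (1<<8) | 253 ))
dst_ip=$(( (192<<24) | (168<<16) | (1<<8) | 1 ))

hash=$(( src_mac_last ^ dst_mac_last ^ ptype ))
hash=$(( hash ^ src_ip ^ dst_ip ))
hash=$(( hash ^ (hash >> 16) ^ (hash >> 8) ))
echo "slave index: $(( hash % 2 ))"   # 0 or 1, i.e. eno1 or eno2 (ordering assumed)

(This only covers what the host transmits; which NIC inbound traffic arrives on is decided by the switch's own hashing.)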

bridge link show
Code:
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 master vmbr0 state forwarding priority 32 cost 100
6: veth100i0@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master vmbr0 state forwarding priority 32 cost 2
7: veth100i1@eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master vmbr0 state forwarding priority 32 cost 2
8: veth100i2@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master vmbr0 state forwarding priority 32 cost 2
9: veth101i0@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master vmbr0 state forwarding priority 32 cost 2
10: veth211i0@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master vmbr0 state forwarding priority 32 cost 2
11: veth200i0@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master vmbr0 state forwarding priority 32 cost 2
12: veth107i0@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master vmbr0 state forwarding priority 32 cost 2
13: veth202i0@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master vmbr0 state forwarding priority 32 cost 2
14: veth203i0@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master vmbr0 state forwarding priority 32 cost 2

This is where I think eno1 gets loaded more than eno2. I would have expected all of these veth devices to be linked to bond0, not to the individual ethernet devices. Can anyone shed some light on this? It seems strange that veth100i2 is the only one linked to bond0 when I would expect all of them to be. FYI: LXC 100 is a virtualized router, hence its three interfaces [wan, lan, dmz].
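
In case it matters, is there a more reliable way to check where each veth is really enslaved than the @-suffix in the output above? These are the generic iproute2/sysfs checks I know of (nothing Proxmox-specific):

Code:
# List everything that is enslaved to the bridge
ip -o link show master vmbr0

# Per-interface detail; "master vmbr0" shows the enslavement, while the @...
# suffix is, as far as I can tell, just the veth peer's ifindex resolved
# against local interface names
ip -d link show veth100i0

# sysfs view of the same relationship
readlink /sys/class/net/veth100i0/master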


System Specs:
HPE ProLiant DL360 Gen8, 2x E5-2630 v2, 64 GB RAM, Intel X540 dual-port 10GbE
Proxmox 8.2.4
uname -a: Linux Kraken 6.8.8-3-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.8-3 (2024-07-16T16:16Z) x86_64 GNU/Linux
The system is up to date; I'm waiting for a reboot window to boot the 6.8.8-4-pve kernel.

/etc/network/interfaces
Code:
auto lo
iface lo inet loopback

auto eno2
iface eno2 inet manual
#10GbE Port 2

auto eno1
iface eno1 inet manual
#10GbE Port 1

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    address 192.168.1.253/24
    gateway 192.168.1.1
    bridge-vids 2-4094
#Dual 10GbE Bonded Connection

source /etc/network/interfaces.d/*
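
(The bond parameters as applied can also be read back at runtime with plain iproute2; I have left that output out since the /proc view below shows the same information.)

Code:
# Mode, xmit hash policy and LACP details as the kernel currently sees them
ip -d link show bond0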

/proc/net/bonding/bond0
Code:
Ethernet Channel Bonding Driver: v6.8.8-3-pve

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

802.3ad info
LACP active: on
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 48:df:37:22:8c:5c
Active Aggregator Info:
    Aggregator ID: 1
    Number of ports: 2
    Actor Key: 15
    Partner Key: 1000
    Partner Mac Address: 00:23:79:00:30:67

Slave Interface: eno1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 48:df:37:22:8c:5c
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: 48:df:37:22:8c:5c
    port key: 15
    port priority: 255
    port number: 1
    port state: 61
details partner lacp pdu:
    system priority: 32768
    system mac address: 00:23:79:00:30:67
    oper key: 1000
    port priority: 32768
    port number: 5
    port state: 61

Slave Interface: eno2
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 48:df:37:22:8c:5d
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: 48:df:37:22:8c:5c
    port key: 15
    port priority: 255
    port number: 2
    port state: 61
details partner lacp pdu:
    system priority: 32768
    system mac address: 00:23:79:00:30:67
    oper key: 1000
    port priority: 32768
    port number: 6
    port state: 61
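
For anyone wanting to reproduce the comparison, the per-slave traffic counters can be read with standard tools (I have not pasted my numbers here):

Code:
# Cumulative RX/TX packet and byte counters per slave
ip -s link show eno1
ip -s link show eno2

# Or all interfaces at once
cat /proc/net/dev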


LXC Network Configurations
grep -n ^net /etc/pve/lxc/*.conf
Code:
100.conf:30:net0: name=eth0,bridge=vmbr0,hwaddr=BC:24:11:D3:72:04,tag=10,type=veth
100.conf:31:net1: name=eth1,bridge=vmbr0,hwaddr=BC:24:11:78:69:04,tag=1,type=veth
100.conf:32:net2: name=eth2,bridge=vmbr0,hwaddr=BC:24:11:9D:34:C6,tag=20,type=veth
101.conf:7:net0: name=eth0,bridge=vmbr0,hwaddr=BC:24:11:8C:3B:18,ip=dhcp,ip6=auto,tag=1,type=veth
104.conf:6:net0: name=eth0,bridge=vmbr0,hwaddr=BC:24:11:B1:29:65,ip=dhcp,ip6=auto,tag=1,type=veth
105.conf:6:net0: name=eth0,bridge=vmbr0,hwaddr=BC:24:11:68:20:C6,ip=dhcp,ip6=auto,tag=1,type=veth
107.conf:20:net0: name=eth0,bridge=vmbr0,gw=192.168.1.1,hwaddr=BC:24:11:9A:8D:1A,ip=192.168.1.250/24,ip6=auto,tag=1,type=veth
108.conf:6:net0: name=eth0,bridge=vmbr0,hwaddr=BC:24:11:E9:70:63,ip=dhcp,ip6=auto,tag=1,type=veth
200.conf:12:net0: name=eth0,bridge=vmbr0,gw=192.168.2.1,hwaddr=BC:24:11:F7:EF:55,ip=192.168.2.11/24,ip6=auto,tag=20,type=veth
201.conf:27:net0: name=eth0,bridge=vmbr0,gw=192.168.2.1,hwaddr=BC:24:11:F9:CB:AB,ip=192.168.2.10/24,ip6=auto,tag=20,type=veth
202.conf:8:net0: name=eth0,bridge=vmbr0,gw=192.168.2.1,hwaddr=BC:24:11:86:D2:F7,ip=192.168.2.2/24,ip6=auto,tag=20,type=veth
203.conf:7:net0: name=eth0,bridge=vmbr0,gw=192.168.2.1,hwaddr=BC:24:11:31:63:57,ip=192.168.2.5/24,ip6=auto,tag=20,type=veth
210.conf:7:net0: name=eth0,bridge=vmbr0,gw=192.168.2.1,hwaddr=BC:24:11:5B:AF:E6,ip=192.168.2.20/24,ip6=auto,tag=20,type=veth
211.conf:7:net0: name=eth0,bridge=vmbr0,gw=192.168.2.1,hwaddr=BC:24:11:20:F5:97,ip=192.168.2.30/24,ip6=auto,tag=20,type=veth
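
Since vmbr0 is VLAN-aware and the containers use tags 1, 10 and 20, the per-port VLAN membership can be checked as well (generic command, output not included):

Code:
# VLAN membership for bond0 and each veth port on the bridge
bridge vlan show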
 