Network problem: bond + VLAN + bridge

erik.deneve

Member
Mar 16, 2020
We are using a bond of two interfaces for storage (2x10Gb) and a bonded trunk for the VMs (2x10Gb).
We want to split the VM trunk into VLAN bridges so we can assign them to our VMs.

Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
    address XXX.XXX.XXX.XXX/XX
    gateway XXX.XXX.XXX.XXX
#node1-mgt

iface eno2 inet manual
#Cluster network

iface eno3 inet manual

iface eno4 inet manual

iface ens3f0np0 inet manual

iface ens3f1np1 inet manual

iface ens2f0np0 inet manual

iface ens2f1np1 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves ens2f0np0 ens3f0np0
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3
    bond-min-links 1
    mtu 8970
#VM network  (trunk)

auto bond1
iface bond1 inet static
    address xxx.xxx.xxx.xxx/24
    bond-slaves ens2f1np1 ens3f1np1
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3
    bond-min-links 1
    mtu 8970
#Ceph network

auto vmbr2
iface vmbr2 inet manual
    bridge-ports bond0.2
    bridge-stp off
    bridge-fd 0
    mtu 1500
#vlan2

If we reload the network config, we get an error:
Code:
# ifreload -a  -v
info: requesting link dump
info: requesting address dump
info: requesting netconf dump
info: loading builtin modules from ['/usr/share/ifupdown2/addons']
info: module openvswitch not loaded (module init failed: no /usr/bin/ovs-vsctl found)
info: module openvswitch_port not loaded (module init failed: no /usr/bin/ovs-vsctl found)
info: module ppp not loaded (module init failed: no /usr/bin/pon found)
info: module batman_adv not loaded (module init failed: no /usr/sbin/batctl found)
info: executing /sbin/sysctl net.bridge.bridge-allow-multiple-vlans
info: module mstpctl not loaded (module init failed: no /sbin/mstpctl found)
info: executing /bin/ip rule show
info: executing /bin/ip -6 rule show
info: address: using default mtu 1500
info: address: max_mtu undefined
info: executing /bin/ip addr help
info: address metric support: OK
info: module ppp not loaded (module init failed: no /usr/bin/pon found)
info: module mstpctl not loaded (module init failed: no /sbin/mstpctl found)
info: module batman_adv not loaded (module init failed: no /usr/sbin/batctl found)
info: module openvswitch_port not loaded (module init failed: no /usr/bin/ovs-vsctl found)
info: module openvswitch not loaded (module init failed: no /usr/bin/ovs-vsctl found)
info: looking for user scripts under /etc/network
info: loading scripts under /etc/network/if-pre-up.d ...
info: loading scripts under /etc/network/if-up.d ...
info: loading scripts under /etc/network/if-post-up.d ...
info: loading scripts under /etc/network/if-pre-down.d ...
info: loading scripts under /etc/network/if-down.d ...
info: loading scripts under /etc/network/if-post-down.d ...
info: 'link_master_slave' is set. slave admin state changes will be delayed till the masters admin state change.
info: processing interfaces file /etc/network/interfaces
info: reload: scheduling down on interfaces: ['vmbr4']
info: vmbr4: running ops ...
info: vmbr4: netlink: ip link set dev vmbr4 down
info: executing /etc/network/if-down.d/postfix
info: vmbr4: netlink: ip link del vmbr4
info: reload: scheduling up on interfaces: ['lo', 'eno1', 'bond1', 'bond0', 'vmbr2']
info: ens3f1np1: running ops ...
info: vrf: syncing table map to /etc/iproute2/rt_tables.d/ifupdown2_vrf_map.conf
info: vrf: dumping iproute2_vrf_map
info: {}
info: executing /sbin/sysctl net.mpls.conf.ens3f1np1.input=0
info: executing /etc/network/if-up.d/postfix
info: ens2f1np1: running ops ...
info: executing /sbin/sysctl net.mpls.conf.ens2f1np1.input=0
info: executing /etc/network/if-up.d/postfix
info: bond1: running ops ...
warning: bond1: attribute bond-min-links is set to '0'
info: bond1: already exists, no change detected
info: executing /sbin/sysctl net.mpls.conf.bond1.input=0
info: executing /etc/network/if-up.d/postfix
info: lo: running ops ...
info: executing /sbin/sysctl net.mpls.conf.lo.input=0
info: executing /etc/network/if-up.d/postfix
info: ens3f0np0: running ops ...
info: executing /sbin/sysctl net.mpls.conf.ens3f0np0.input=0
info: executing /etc/network/if-up.d/postfix
info: ens2f0np0: running ops ...
info: executing /sbin/sysctl net.mpls.conf.ens2f0np0.input=0
info: executing /etc/network/if-up.d/postfix
info: bond0: running ops ...
warning: bond0: attribute bond-min-links is set to '0'
info: bond0: already exists, no change detected
info: executing /sbin/sysctl net.mpls.conf.bond0.input=0
info: executing /etc/network/if-up.d/postfix
info: bond0.2: running ops ...
info: bond0.2: not enslaved to bridge vmbr2: ignored for now
info: executing /sbin/sysctl net.mpls.conf.bond0/2.input=0
info: executing /etc/network/if-up.d/postfix
info: vmbr2: running ops ...
info: vmbr2: bridge already exists
info: vmbr2: applying bridge settings
info: vmbr2: reset bridge-hashel to default: 4
info: reading '/sys/class/net/vmbr2/bridge/stp_state'
info: vmbr2: netlink: ip link set dev vmbr2 type bridge (with attributes)
info: writing '1' to file /proc/sys/net/ipv6/conf/bond0.2/disable_ipv6
info: executing /bin/ip -force -batch - [link set dev bond0.2 master vmbr2]
warning: vmbr2: apply bridge ports settings: cmd '/bin/ip -force -batch - [link set dev bond0.2 master vmbr2]' failed: returned 1 (RTNETLINK answers: No data available
Command failed -:1
)
info: executing /sbin/sysctl net.mpls.conf.vmbr2.input=0
info: vmbr2: bridge inherits mtu from its ports. There is no need to assign mtu on a bridge
info: executing /etc/network/if-up.d/postfix
info: eno1: running ops ...
info: executing /sbin/sysctl net.mpls.conf.eno1.input=0
info: eno1: netlink: ip addr del XXXX/64 dev eno1
info: executing /bin/ip route add default via XXX.XXX.XXX.XXX proto kernel dev eno1 onlink
info: executing /etc/network/if-up.d/postfix

Some other info:
Our 10Gb NIC: Broadcom BCM57412 NetXtreme-E 10Gb

Code:
# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.41-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-2
pve-kernel-helper: 6.2-2
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 2.0.1-1+pve8
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-6
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1

Code:
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: bc:97:e1:76:5b:30
Active Aggregator Info:
    Aggregator ID: 1
    Number of ports: 2
    Actor Key: 15
    Partner Key: 24
    Partner Mac Address: 64:64:9b:54:f5:00

Slave Interface: ens3f0np0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: bc:97:e1:76:5b:30
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: bc:97:e1:76:5b:30
    port key: 15
    port priority: 255
    port number: 1
    port state: 61
details partner lacp pdu:
    system priority: 127
    system mac address: 64:64:9b:54:f5:00
    oper key: 24
    port priority: 127
    port number: 67
    port state: 63

Slave Interface: ens2f0np0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: bc:97:e1:76:c5:80
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: bc:97:e1:76:5b:30
    port key: 15
    port priority: 255
    port number: 2
    port state: 61
details partner lacp pdu:
    system priority: 127
    system mac address: 64:64:9b:54:f5:00
    oper key: 24
    port priority: 127
    port number: 72
    port state: 63

Code:
#ip a s
...
11: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 8970 qdisc noqueue state UP group default qlen 1000
    link/ether bc:97:e1:76:5b:30 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::be97:e1ff:fe76:5b30/64 scope link 
       valid_lft forever preferred_lft forever
12: bond0.2@bond0: <BROADCAST,MULTICAST> mtu 8970 qdisc noop state DOWN group default qlen 1000
    link/ether bc:97:e1:76:5b:30 brd ff:ff:ff:ff:ff:ff
13: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether fa:08:b6:20:e1:e2 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::f808:b6ff:fe20:e1e2/64 scope link 
       valid_lft forever preferred_lft forever

Code:
# brctl show
bridge name    bridge id        STP enabled    interfaces
vmbr2        8000.000000000000    no


Any idea what is going wrong here?
Thanks!
Erik
 
That's strange; the config looks fine, and it works here for me.

Can you send the result of:

"ip link set dev bond0.2 master vmbr2"

Does it work with

"brctl addif vmbr2 bond0.2"?
 

Code:
# ip link set dev bond0.2 master vmbr2
RTNETLINK answers: No data available

Code:
# brctl addif vmbr2 bond0.2
can't add bond0.2 to bridge vmbr2: No data available
 
Very strange, I have never seen this...

Have you tried rebooting?
Yes, multiple times; we have tried everything.
It seems to be a combination of the NetXtreme-E, bonding, and VLANs,
because without bonding it works out of the box (trunk directly on ens3f0np0, for example).
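For completeness, a minimal sketch of the non-bonded variant that works here, assuming VLAN 2 and the interface names from this thread:

Code:
```
auto vmbr2
iface vmbr2 inet manual
    bridge-ports ens3f0np0.2
    bridge-stp off
    bridge-fd 0
#vlan2 bridge directly on the NIC, no bond involved
```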
 
Hello,

Something else we have seen:

If we use bond slaves on the same network card, it works.
Before, we used LACP between two ports on two different network cards in the server.

Does anyone have an idea why?

Kind regards,
Erik
 

I have never seen this kind of problem before...
Different cards shouldn't be a problem.

Do you only have the problem when trying to add a VLAN-tagged bond ("bond0.2") to vmbr2?
Or does it also work if you simply put "bond0" in vmbr2?

(I would like to know whether it's a bonding bug or a VLAN bug.)

I wonder if it could be that VLAN offloading is supported on one NIC but not on the other NIC model; maybe that could cause problems with tagging on the bond.

can you send result of

"ethtool -k ens2f0np0" ,
"ethtool -k ens3f0np0", ....

?
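In the meantime, a quick way to compare the VLAN offload flags between the two cards (a sketch; interface names taken from this thread, and the `-K` line is only a suggestion to try if the flags differ):

```shell
# List VLAN-related offload flags on each bond slave
for nic in ens2f0np0 ens2f1np1 ens3f0np0 ens3f1np1; do
    echo "== $nic =="
    ethtool -k "$nic" | grep -i vlan
done

# If rx/tx VLAN offload differs between the cards, try disabling it on all slaves:
# ethtool -K ens2f0np0 rxvlan off txvlan off
```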
 
It could also be interesting to create "bond0.2" with

Code:
auto bond0.2
iface bond0.2 inet manual
    vlan-raw-device bond0

and try "ifup bond0.2" to see if bond0.2 is created correctly (but it already seems to be OK from your "ip addr" output).
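The same steps can also be done by hand with iproute2, which makes it easier to see exactly which call returns "No data available" (a sketch, using the names from the config above):

```shell
# Create the VLAN sub-interface on top of the bond and bring it up
ip link add link bond0 name bond0.2 type vlan id 2
ip link set bond0.2 up

# Enslave it to the bridge: this is the step that fails in the log above
ip link set dev bond0.2 master vmbr2
```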
 
Hello,

Thanks for the answers.

It's very strange:
What works: if I reboot the server with only one port in the bond, the bridges come up and everything works fine.
After that, I can add the second port (on the other network card) to the bond, and it keeps working.
However, I then have the same problem when I create new bridges, and when I reboot the node the bridges will not come up.

Code:
# ethtool ens2f0np0
Settings for ens2f0np0:
    Supported ports: [ FIBRE ]
    Supported link modes:   1000baseT/Full
                            10000baseT/Full
    Supported pause frame use: Symmetric Receive-only
    Supports auto-negotiation: Yes
    Supported FEC modes: Not reported
    Advertised link modes:  1000baseT/Full
                            10000baseT/Full
    Advertised pause frame use: No
    Advertised auto-negotiation: Yes
    Advertised FEC modes: Not reported
    Speed: 10000Mb/s
    Duplex: Full
    Port: Direct Attach Copper
    PHYAD: 1
    Transceiver: internal
    Auto-negotiation: on
    Supports Wake-on: d
    Wake-on: d
    Current message level: 0x00000000 (0)
                  
    Link detected: yes

# ethtool ens2f1np1
Settings for ens2f1np1:
    Supported ports: [ FIBRE ]
    Supported link modes:   1000baseT/Full
                            10000baseT/Full
    Supported pause frame use: Symmetric Receive-only
    Supports auto-negotiation: Yes
    Supported FEC modes: Not reported
    Advertised link modes:  1000baseT/Full
                            10000baseT/Full
    Advertised pause frame use: No
    Advertised auto-negotiation: Yes
    Advertised FEC modes: Not reported
    Speed: 10000Mb/s
    Duplex: Full
    Port: Direct Attach Copper
    PHYAD: 1
    Transceiver: internal
    Auto-negotiation: on
    Supports Wake-on: d
    Wake-on: d
    Current message level: 0x00000000 (0)
                  
    Link detected: yes

# ethtool ens3f0np0
Settings for ens3f0np0:
    Supported ports: [ FIBRE ]
    Supported link modes:   1000baseT/Full
                            10000baseT/Full
    Supported pause frame use: Symmetric Receive-only
    Supports auto-negotiation: Yes
    Supported FEC modes: Not reported
    Advertised link modes:  1000baseT/Full
                            10000baseT/Full
    Advertised pause frame use: No
    Advertised auto-negotiation: Yes
    Advertised FEC modes: Not reported
    Speed: 10000Mb/s
    Duplex: Full
    Port: Direct Attach Copper
    PHYAD: 1
    Transceiver: internal
    Auto-negotiation: on
    Supports Wake-on: d
    Wake-on: d
    Current message level: 0x00000000 (0)
                  
    Link detected: yes

# ethtool ens3f1np1
Settings for ens3f1np1:
    Supported ports: [ FIBRE ]
    Supported link modes:   1000baseT/Full
                            10000baseT/Full
    Supported pause frame use: Symmetric Receive-only
    Supports auto-negotiation: Yes
    Supported FEC modes: Not reported
    Advertised link modes:  1000baseT/Full
                            10000baseT/Full
    Advertised pause frame use: No
    Advertised auto-negotiation: Yes
    Advertised FEC modes: Not reported
    Speed: 10000Mb/s
    Duplex: Full
    Port: Direct Attach Copper
    PHYAD: 1
    Transceiver: internal
    Auto-negotiation: on
    Supports Wake-on: d
    Wake-on: d
    Current message level: 0x00000000 (0)
                  
    Link detected: yes

If I try to put bond0 as the bridge port, I have the same problem:
Code:
#cat /etc/network/interfaces
...
auto vmbr2
iface vmbr2 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

# ifreload -a
warning: vmbr2: apply bridge ports settings: cmd '/bin/ip -force -batch - [link set dev bond0 master vmbr2
addr flush dev bond0]' failed: returned 1 (RTNETLINK answers: No data available
Command failed -:1
)

If I add bond0.2 to the interfaces file (as you mentioned) and do ifup bond0.2, same problem:
Code:
#ifreload -a
warning: vmbr2: apply bridge ports settings: cmd '/bin/ip -force -batch - [link set dev bond0.2 master vmbr2]' failed: returned 1 (RTNETLINK answers: No data available
Command failed -:1
)
 
I think it's really a kernel bug or a NIC driver bug. I have never bonded across different NIC models before, but I think it should work.
Sorry, I really can't help...

They are even the same NIC model (Broadcom BCM57412 NetXtreme-E 10Gb), just two dual-port cards.

Code:
# dmesg|grep -E -i "bnx|ens"
[    4.118343] Broadcom NetXtreme-C/E driver bnxt_en v1.10.0
[    4.131885] bnxt_en 0000:5e:00.0 eth0: Broadcom BCM57412 NetXtreme-E 10Gb Ethernet found at mem b8a10000, node addr bc:97:e1:2b:11:60
[    4.131891] bnxt_en 0000:5e:00.0: 63.008 Gb/s available PCIe bandwidth (8 GT/s x8 link)
[    4.144098] bnxt_en 0000:5e:00.1 eth2: Broadcom BCM57412 NetXtreme-E 10Gb Ethernet found at mem b8a00000, node addr bc:97:e1:2b:11:61
[    4.144102] bnxt_en 0000:5e:00.1: 63.008 Gb/s available PCIe bandwidth (8 GT/s x8 link)
[    4.155625] bnxt_en 0000:5f:00.0 eth4: Broadcom BCM57412 NetXtreme-E 10Gb Ethernet found at mem b8d10000, node addr bc:97:e1:2b:48:70
[    4.155630] bnxt_en 0000:5f:00.0: 63.008 Gb/s available PCIe bandwidth (8 GT/s x8 link)
[    4.167669] bnxt_en 0000:5f:00.1 eth5: Broadcom BCM57412 NetXtreme-E 10Gb Ethernet found at mem b8d00000, node addr bc:97:e1:2b:48:71
[    4.167675] bnxt_en 0000:5f:00.1: 63.008 Gb/s available PCIe bandwidth (8 GT/s x8 link)
[    4.169639] bnxt_en 0000:5e:00.0 ens3f0np0: renamed from eth0
[    4.204901] bnxt_en 0000:5e:00.1 ens3f1np1: renamed from eth2
[    4.245169] bnxt_en 0000:5f:00.0 ens2f0np0: renamed from eth4
[    4.264865] bnxt_en 0000:5f:00.1 ens2f1np1: renamed from eth5
[   15.066130] bnxt_en 0000:5e:00.0 ens3f0np0: NIC Link is Up, 10000 Mbps full duplex, Flow control: ON - receive & transmit
[   15.066134] bnxt_en 0000:5e:00.0 ens3f0np0: FEC autoneg off encodings: None
[   15.067939] bond1: (slave ens3f0np0): Enslaving as a backup interface with an up link
[   15.354726] bnxt_en 0000:5f:00.0 ens2f0np0: NIC Link is Up, 10000 Mbps full duplex, Flow control: ON - receive & transmit
[   15.354729] bnxt_en 0000:5f:00.0 ens2f0np0: FEC autoneg off encodings: None
[   15.356373] bond1: (slave ens2f0np0): Enslaving as a backup interface with an up link
[   16.379092] bnxt_en 0000:5e:00.1 ens3f1np1: NIC Link is Up, 10000 Mbps full duplex, Flow control: ON - receive & transmit
[   16.379096] bnxt_en 0000:5e:00.1 ens3f1np1: FEC autoneg off encodings: None
[   16.380800] bond0: (slave ens3f1np1): Enslaving as a backup interface with an up link
[   16.655201] bnxt_en 0000:5f:00.1 ens2f1np1: NIC Link is Up, 10000 Mbps full duplex, Flow control: ON - receive & transmit
[   16.655205] bnxt_en 0000:5f:00.1 ens2f1np1: FEC autoneg off encodings: None
[   16.656949] bond0: (slave ens2f1np1): Enslaving as a backup interface with an up link
 
Hello,

Did you find a solution to this problem? I'm having exactly the same issue;
I'm running the BCM57416 model.

A bond with 2 ports on the same card works;
a bond with 2 ports split across 2 different cards doesn't (no data available).

BR
 

Can you share your /etc/network/interfaces
and "cat /proc/net/bonding/bondX"?

Is it a BCM57416 for both cards?
 
Code:
auto lo
iface lo inet loopback

allow-hotplug eth2
iface eth2 inet manual

allow-hotplug eth4
iface eth4 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eth4 eth2
        bond-miimon 100
        bond-mode active-backup
        bond-updelay 500

auto vmbr0
iface vmbr0 inet static
        address  10.X.X.22
        netmask  255.255.255.0
        gateway 10.X.X.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

Code:
[    6.568741] bnxt_en 0000:19:00.0 eth0: Broadcom BCM57416 NetXtreme-E 10GBase-T Ethernet found at mem 9dd10000, node addr b0:26:28:b1:b4:b2
[    6.568748] bnxt_en 0000:19:00.0: 63.008 Gb/s available PCIe bandwidth (8 GT/s x8 link)
[    6.731474] bnxt_en 0000:19:00.1 eth2: Broadcom BCM57416 NetXtreme-E 10GBase-T Ethernet found at mem 9dd00000, node addr b0:26:28:b1:b4:b3
[    6.731479] bnxt_en 0000:19:00.1: 63.008 Gb/s available PCIe bandwidth (8 GT/s x8 link)
[    6.743259] bnxt_en 0000:5e:00.0 eth4: Broadcom BCM57416 NetXtreme-E 10GBase-T Ethernet found at mem b8a10000, node addr b0:26:28:6b:f5:f0
[    6.743263] bnxt_en 0000:5e:00.0: 63.008 Gb/s available PCIe bandwidth (8 GT/s x8 link)
[    6.781598] bnxt_en 0000:5e:00.1 eth3: Broadcom BCM57416 NetXtreme-E 10GBase-T Ethernet found at mem b8a00000, node addr b0:26:28:6b:f5:f1
[    6.781603] bnxt_en 0000:5e:00.1: 63.008 Gb/s available PCIe bandwidth (8 GT/s x8 link)


Running ifup -a results in:

[screenshot attachment: 1596219311639.png]

By the way, if I start one slave first and then the other one, it works.
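That observation can be turned into a scripted workaround; a hedged sketch (assuming bond0 is configured with only the first slave, and eth2 is the second port as in this post):

```shell
# Bring the bond up with only the first slave configured
ifup bond0

# A port must be down before it can be enslaved to the bond
ip link set eth2 down
ip link set eth2 master bond0
ip link set eth2 up
```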
 
Changing allow-hotplug to auto didn't help; same error.
I have tried without updelay: no change.

The strange thing is that the issue only happens when using 2 different network cards; 2 ports on the same card work just fine.
 
Simple workaround for my issue:

Actually, it didn't help; starting a VM caused the issue to come back.
To fix it, I had to reconnect the network ports to the same card.
 
Just wondering: why are you trying to create bond0.2?
Why not instead create another vmbr, add VLAN tag 2 there, and bind bond0 to it?
Or even better (if you don't need an IP address on this bridge), connect vmbr2 to your VMs as usual and set the VLAN tag in the VM and CT settings.
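For the second suggestion, a minimal sketch of a VLAN-aware bridge in /etc/network/interfaces (ifupdown2 syntax), so the tag can be set per VM/CT instead of creating one bridge per VLAN:

Code:
```
auto vmbr2
iface vmbr2 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```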
 
Did some tests with an active-backup bond with 2 ports:

a) bond/bridge across 2 Broadcom network cards: error
b) bond/bridge on a single dual-port Broadcom network card: works
c) bond/bridge across 2 Intel network cards: works
d) bond/bridge across Broadcom and QLogic network cards: works

It would require some additional testing, as I have different switches on the other end, but it does look like an issue with Broadcom.
 
