Bond doesn't fail over to the other slave (OVH / Proxmox 7.1-12)

STFC1987

New Member
Jan 18, 2022
Hi all,

I've configured the following ethernet bond [bond0], and it appears to work while both [eth0] and [eth1] are online. However, when I take down [eth1], IPv6 stops pinging on the node, but not in the VMs. When I bring [eth1] back up and take down [eth0] instead, IPv6 is up on the node but not in the VMs, IPv4 stops pinging, and all the VMs lose network connectivity. I have ethernet bonding working with OVH on another server (same hardware specification, same data centre) running CloudLinux, so I know IEEE 802.3ad works there. For some reason, though, on this server the bond doesn't fail over to the remaining slave when I take one of the slaves down.
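For reference, a minimal sketch of how I'm testing the failover from the node (addresses masked the same way as in the config further down):

# drop one slave and watch the bond react
ip link set eth1 down
cat /proc/net/bonding/bond0        # eth1 should now show "MII Status: down"

# check connectivity from the node while the slave is down
ping -c 4 xxx.xx.x.x               # IPv4 gateway
ping -6 -c 4 fe80::1%vmbr0         # IPv6 next hop

# restore the slave
ip link set eth1 up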

cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v5.13.19-6-pve

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: xx:xx:xx:xx:xx:xx
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 2
Actor Key: 21
Partner Key: 242
Partner Mac Address: xx:xx:xx:xx:xx:xx

Slave Interface: eth0
MII Status: up
Speed: 25000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: xx:xx:xx:xx:xx:xx
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: xx:xx:xx:xx:xx:xx
port key: 21
port priority: 255
port number: 1
port state: 61
details partner lacp pdu:
system priority: 32768
system mac address: xx:xx:xx:xx:xx:xx
oper key: 242
port priority: 1000
port number: 80
port state: 61

Slave Interface: eth1
MII Status: up
Speed: 25000 Mbps
Duplex: full
Link Failure Count: 2
Permanent HW addr: xx:xx:xx:xx:xx:xx
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: xx:xx:xx:xx:xx:xx
port key: 21
port priority: 255
port number: 2
port state: 61
details partner lacp pdu:
system priority: 32768
system mac address: xx:xx:xx:xx:xx:xx
oper key: 242
port priority: 2000
port number: 32848
port state: 61
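For anyone reading the LACP details above: port state 61 is the standard actor/partner state bitmask (nothing OVH-specific), and with both links up it decodes to what a healthy port should look like:

port state 61 = 1 + 4 + 8 + 16 + 32
  bit 0 (1)   LACP Activity    - active LACP
  bit 2 (4)   Aggregation      - port is aggregatable
  bit 3 (8)   Synchronization  - in sync with the aggregator
  bit 4 (16)  Collecting
  bit 5 (32)  Distributing
(bit 1, LACP Timeout, is clear, which matches "LACP rate: slow" above)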

cat /etc/network/interfaces

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-miimon 100
    bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
    address xx.xxx.xxx.xx/24
    gateway xxx.xx.x.x
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    up ip route add xxx.xxx.xxx.x/25 dev vmbr0
    down ip route del xxx.xxx.xxx.x/25 dev vmbr0

iface vmbr0 inet6 static
    address xxxx:xxxx:xxx:xxxx::/56
    gateway fe80::1
    up ip -6 route add fe80::1 dev vmbr0
    up ip -6 route add default via fe80::1 dev vmbr0
    down ip -6 route del default via fe80::1 dev vmbr0
    down ip -6 route del fe80::1 dev vmbr0
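For comparison, one variant of the bond stanza I've been trying is below. The extra options (down/up delay, fast LACP rate, layer2+3 hashing) are just the knobs commonly suggested for LACP problems; whether OVH's switch side actually wants the short LACP timeout is an assumption on my part, not something they've confirmed:

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    bond-downdelay 200
    bond-updelay 200
    bond-lacp-rate fast
    bond-xmit-hash-policy layer2+3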

Any suggestions?

Thank you.

Kind regards,

Jeffrey
 
I'm still having no luck with this. Has anybody experienced something similar, or does anyone have any suggestions?
 
Thanks, @Pierre-Yves; however, it doesn't seem to have made any difference.
Do you have anything special configured under the ethernet interfaces?

We just have the following:

auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet manual
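To compare against the working CloudLinux box, these are the commands I've been using to dump the runtime bond state on both servers (nothing distro-specific):

ip -d link show bond0                   # mode, miimon, lacp_rate and hash policy as the kernel sees them
cat /proc/net/bonding/bond0             # per-slave MII status and LACP partner details
grep . /sys/class/net/bond0/bonding/*   # every bonding parameter, one file per value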
 
