Unreachable new Proxmox with bond configuration

bbe

New Member
Jan 26, 2021
Hi,

I have several Proxmox servers and recently installed 2 new ones on Debian 10.7 with PVE 6.3.3.

When the server boots, I can't reach it, even though the configuration is similar to my other servers.

/etc/network/interfaces:

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-lacp-rate 1
    bond-min-links 1

auto bond1
iface bond1 inet static
    address 10.90.11.177/24
    bond-slaves eth1 eth3
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-lacp-rate 1
    bond-min-links 1

auto vmbr0
iface vmbr0 inet static
    address 10.20.2.131/24
    gateway 10.20.2.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

bond0 and bond1 are UP, just like vmbr0 and the other interfaces. However, the servers cannot reach their gateway. (It's OK for bond1.)
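
One quick check, just to rule out an LACP problem on bond0 itself (a diagnostic suggestion on my part, assuming the standard Linux bonding driver that Proxmox uses):

cat /proc/net/bonding/bond0
# should report "IEEE 802.3ad Dynamic link aggregation", an aggregator ID,
# and a partner MAC address for each slave; an all-zeros partner MAC would
# mean LACP never negotiated with the switch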

If I delete vmbr0 and recreate it, connectivity works properly at that point. But after a reboot the server is unreachable again.
It only works when I delete vmbr0 and re-create it manually:

sudo ip link set down vmbr0
sudo brctl delbr vmbr0
sudo brctl addbr vmbr0
sudo brctl addif vmbr0 bond0
sudo ip addr add 10.20.2.131/24 dev vmbr0
sudo ip link set up vmbr0
sudo route add default gw 10.20.2.1 vmbr0
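
Worth noting (an assumption on my part, not something verified on this host): brctl addbr creates the bridge with VLAN filtering disabled, so the manually re-created vmbr0 is in effect a plain, non-vlan-aware bridge. The state of the booted bridge can be checked with:

ip -d link show vmbr0 | grep -o 'vlan_filtering [01]'
# vlan_filtering 1 = vlan-aware bridge, 0 = plain bridge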

Do you have any idea?
Best regards
Benjamin
 
After re-testing: when I remove the line
"bridge-vlan-aware yes", the server starts and I can connect to it.
On the switch side, the configuration is identical to my other Proxmox hosts, for which it works (one untagged VLAN for the Proxmox management, plus the tagged VLANs).
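
To compare the working and non-working state, the bridge VLAN table can be dumped on the server (purely a diagnostic sketch; the VLAN IDs are the ones from my config above):

bridge vlan show
# with bridge-vlan-aware yes, bond0 should list "1 PVID Egress Untagged"
# plus 2-4094 from bridge-vids, and vmbr0 itself should also carry PVID 1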
 
The servers have 2 HPE Eth 10Gb 2p 537SFP+ adapters.
Each card provides 2 10G SFP+ fiber interfaces:
Bond0 = interface 1 of card 1 + interface 2 of card 1 (Proxmox data)
Bond1 = interface 1 of card 2 + interface 2 of card 2 (SAN)

The bond1 (SAN) ports always come up correctly.
 
We use Cumulus Linux switches (in MLAG).
Below is the configuration of the bond interface:

interface prx-8-cdc
    bond-slaves swp31
    bridge-pvid 203
    bridge-vids 100 201 203-220 222-228 231 238 242 244-248 250 270 273-274 293 295-296 304-306 399 403-412 414
    clag-id 21
    mtu 9000
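
For completeness, the bond and MLAG state on the switch side can be checked with (assuming NCLU is available, as it is on Cumulus 3.7; clagctl is the lower-level tool):

net show interface prx-8-cdc
clagctl
cat /proc/net/bonding/prx-8-cdc   # Cumulus bonds are ordinary Linux bonds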
 
Yes, I'm sure about this.
This is the default configuration: mode 802.3ad (https://docs.cumulusnetworks.com/cumulus-linux-37/Layer-2/Bonding-Link-Aggregation/)

I use the same configuration for bond1:

Switch side:
interface prx-8-cdc-san
    bond-slaves swp32
    bridge-access 911
    clag-id 22
    mtu 9000

Server side:

auto bond1
iface bond1 inet static
    address 10.90.11.177/24
    bond-slaves eth1 eth3
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-lacp-rate 1
    bond-min-links 1
 
Before activating vlan-aware on the bridge (sudo ip link set vmbr0 type bridge vlan_filtering 0):

UP prx-8-cdc 10G 9000 802.3ad Master: bridge(UP)
prx-8-cdc Bond Members: swp31(UP)

Then, if I activate vlan-aware (sudo ip link set vmbr0 type bridge vlan_filtering 1):

DN prx-8-cdc N/A 9000 802.3ad Master: bridge(UP)
prx-8-cdc Bond Members: swp31(UP)
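
To see whether LACPDUs still flow once vlan_filtering is enabled and the switch marks the bond DN, one could capture the slow-protocols frames on a bond0 slave (diagnostic sketch only):

tcpdump -eni eth0 ether proto 0x8809
# 0x8809 is the Slow Protocols ethertype used by LACP; with bond-lacp-rate 1,
# frames should keep arriving from the switch about once per second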


I can try active-backup but that's not what I'm trying to set up.
 
The switch port (swp31) always stays UP, but the first column, the bond state, goes to DN.

On the server side, everything seems OK:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether f4:03:43:d5:8a:90 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
link/ether f4:03:43:d5:8a:98 brd ff:ff:ff:ff:ff:ff
4: eth4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d4:f5:ef:32:90:e0 brd ff:ff:ff:ff:ff:ff
5: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether f4:03:43:d5:8a:90 brd ff:ff:ff:ff:ff:ff
6: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
link/ether f4:03:43:d5:8a:98 brd ff:ff:ff:ff:ff:ff
7: eth5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d4:f5:ef:32:90:e1 brd ff:ff:ff:ff:ff:ff
8: eth6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d4:f5:ef:32:90:e2 brd ff:ff:ff:ff:ff:ff
9: eth7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d4:f5:ef:32:90:e3 brd ff:ff:ff:ff:ff:ff
10: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether f4:03:43:d5:8a:90 brd ff:ff:ff:ff:ff:ff
11: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether f4:03:43:d5:8a:98 brd ff:ff:ff:ff:ff:ff
inet 10.90.11.177/24 brd 10.90.11.255 scope global bond1
valid_lft forever preferred_lft forever
13: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether f4:03:43:d5:8a:90 brd ff:ff:ff:ff:ff:ff
inet 10.20.2.131/24 scope global vmbr0
valid_lft forever preferred_lft forever
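
If the bridge is left vlan-aware and the gateway still doesn't answer, a capture on bond0 would show whether the ARP requests for 10.20.2.1 leave untagged and what, if anything, comes back (assuming, as on the other nodes, that management is supposed to be the untagged VLAN, bridge-pvid 203 on the switch side):

tcpdump -eni bond0 arp or vlan
# -e prints the link-level header, so tagged frames show their VLAN ID;
# requests leaving tagged, or replies arriving on an unexpected VLAN,
# would point at a PVID mismatch between vmbr0 and the switch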
 
