[SOLVED] Upgrade to Proxmox 7 - Bond (LACP) Interface not working anymore

houbidoo

Hello,

Yesterday I upgraded 2 identical servers (Supermicro with Intel i350 NICs onboard) to Proxmox 7.0. The servers are connected via 2x bond interfaces to a Juniper switch with ae interfaces and LACP configured. A third server with different hardware was upgraded, too.

One server is working as normal; on the second one the bond interfaces do not work anymore, and on the Juniper switch I can see that the server does not send any LACP PDUs. The third server shows the same behaviour.

On the affected servers the physical interfaces are up, but the bond interfaces are down.
vmbr0 is down (IPv4 configured), vmbr1 is up (no IP on the Proxmox host, only used by VMs).

Any idea?
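(The bond state above can be checked on the Linux side with the standard kernel bonding driver and iproute2 tools; eno1/bond0 below are just my interface names, e.g.:)

Code:
# LACP/802.3ad negotiation state as seen by the kernel bonding driver
cat /proc/net/bonding/bond0
# link state of a physical slave and of the bond itself
ip link show eno1
ip link show bond0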

Config Juniper Switch:
-->> to working Proxmox Server 1

set interfaces ge-0/0/14 ether-options 802.3ad ae7
set interfaces ge-1/0/14 ether-options 802.3ad ae7
set interfaces ae7 description Host01_Kunden_Management
set interfaces ae7 aggregated-ether-options lacp active
set interfaces ae7 unit 0 description Host01_Kunden_Management
set interfaces ae7 unit 0 family ethernet-switching vlan members VLAN700

set interfaces ge-0/0/14 description Host01_Kunden_Management
set interfaces ge-0/0/14 ether-options 802.3ad ae7
set interfaces ge-1/0/14 description Host01_Kunden_Management
set interfaces ge-1/0/14 ether-options 802.3ad ae7
.....the second LACP bond is configured the same way


Config Juniper Switch:
-->> to NOT working Proxmox Server 2

set interfaces ge-0/0/12 ether-options 802.3ad ae4
set interfaces ge-1/0/12 ether-options 802.3ad ae4
set interfaces ae4 description Host02_Kunden_Management
set interfaces ae4 aggregated-ether-options lacp active
set interfaces ae4 unit 0 description Host02_Kunden_Management
set interfaces ae4 unit 0 family ethernet-switching vlan members VLAN700

set interfaces ge-0/0/12 description Host02_Kunden_Management
set interfaces ge-0/0/12 ether-options 802.3ad ae4
set interfaces ge-1/0/12 description Host02_Kunden_Management
set interfaces ge-1/0/12 ether-options 802.3ad ae4


Network Config of working Proxmox Server 1
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

iface eno4 inet manual

auto bond0
iface bond0 inet manual
bond-slaves eno1 eno2
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3
#MGMT LACP

auto bond1
iface bond1 inet manual
bond-slaves eno3 eno4
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3
#VM Netzwerk LACP

auto vmbr0
iface vmbr0 inet static
address 10.240.0.201/24
gateway 10.240.0.254
bridge-ports bond0
bridge-stp off
bridge-fd 0
#MGMT IP

auto vmbr1
iface vmbr1 inet manual
bridge-ports bond1
bridge-stp off
bridge-fd 0
#VM Netzwerk IP


Network Config of NOT working Proxmox Server 2
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto eno3
iface eno3 inet manual

auto eno4
iface eno4 inet manual

auto bond0
iface bond0 inet manual
bond-slaves eno1 eno2
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3
#LACP MGMT

auto bond1
iface bond1 inet manual
bond-slaves eno3 eno4
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3
#LACP VM Netzwerk

auto vmbr0
iface vmbr0 inet static
address 10.240.0.202/24
gateway 10.240.0.254
bridge-ports bond0
bridge-stp off
bridge-fd 0
#Management

auto vmbr1
iface vmbr1 inet manual
bridge-ports bond1
bridge-stp off
bridge-fd 0
#VM Netzwerk


LACP statistics on the Juniper switch towards the NOT working Proxmox interfaces (no LACP PDUs received from the Proxmox host):

show lacp statistics interfaces ae4
Aggregated interface: ae4
    LACP Statistics:  LACP Rx    LACP Tx  Unknown Rx  Illegal Rx
      ge-0/0/12             0       3035           0           0
      ge-1/0/12             0       3006           0           0

show lacp statistics interfaces ae5
Aggregated interface: ae5
    LACP Statistics:  LACP Rx    LACP Tx  Unknown Rx  Illegal Rx
      ge-0/0/13             0       3035           0           0
      ge-1/0/13             0       3013           0           0

show lacp statistics interfaces ae6
Aggregated interface: ae6
    LACP Statistics:  LACP Rx    LACP Tx  Unknown Rx  Illegal Rx
      ge-0/0/11             0       3351           0           0
      ge-1/0/11             0       3323           0           0
 
I did find the solution:

In /etc/network/interfaces the following statements appear on the servers that were no longer working after upgrading to 7.0:

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto eno3
iface eno3 inet manual

auto eno4
iface eno4 inet manual

After deleting them it worked again. I checked it on both non-working servers.
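For reference, a minimal sketch of how the relevant part of /etc/network/interfaces looks after the change (only the eno1/eno2 bond shown, mirroring the config above; the iface ... inet manual stanzas stay, only the auto lines for the slaves are removed):

Code:
iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
bond-slaves eno1 eno2
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3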
 
Confirmed - I had the same problem and resolved it with your recommendation.
The issue is that even a fresh install creates those lines, effectively leaving the machine with no network access.
 
It would be interesting to find the root cause of this behavior, because here it also works with the auto statements.
 
After I upgraded my cluster from 6 to 7 I did not have any network connectivity. What ended up fixing the issue was logging into each node and running the following command:

systemctl enable networking; reboot

After that, the node rebooted, came back online, and everything was working again. I'm not sure why the networking stack got disabled during the update. This is on a 5-node cluster with multiple NICs, with and without bonds, that was using ifupdown2 before the upgrade.
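(Before rebooting, whether a node is hit by the same issue can be checked with the standard systemd commands; nothing Proxmox-specific here:)

Code:
# is the ifupdown "networking" unit enabled at boot, and is it running?
systemctl is-enabled networking
systemctl status networking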
 

I removed the ifupdown package leftover configs; after a reboot there was no network connectivity and all network interfaces were down.
After enabling the "networking" service and rebooting, all network interfaces are up.
Maybe the "networking" service is not enabled by default and ifupdown2 doesn't work without it?
 
Same problem here after upgrading from 6 to 7 (3x bonds, LACP 802.3ad). No ifupdown2 installed, but net-tools (ifconfig).

But I had no luck with:
Code:
systemctl enable networking; reboot

Disabling the auto ensXX lines fixed the issue, so I have to investigate...
 
After installing ifupdown2 everything works fine.
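(A minimal sketch of the install/reload steps, assuming a standard Proxmox/Debian setup; ifreload -a is ifupdown2's command to re-apply /etc/network/interfaces without a reboot:)

Code:
apt update
apt install ifupdown2
# re-apply the network configuration (or simply reboot)
ifreload -a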
 
I've been reading some differing posts on this topic and I've got to admit, I'm a little confuzzled.

Fresh install of Proxmox Backup 2.0 (the install GUI finally works!) on an HP MicroServer with a StarTech dual-GBit PCIe card. The MicroServer comes with an onboard NIC, and the StarTech dual NIC supports 802.3ad LACP. I've configured an etherchannel group with 2 interfaces on a Cisco 3560, mode "active" (which is LACP), in a trunk. I know the Cisco config is good, as I have other Proxmox VE 6.4 servers running with the same StarTech dual card on other etherchannels on the same switch.

Here's my /etc/network/interfaces config:

Code:
auto lo
iface lo inet loopback

auto enp6s0
iface enp6s0 inet static
    address 10.100.1.14/24
    gateway 10.100.1.254
#ONBOARD NIC

iface enp4s0 inet manual
#Startech Card NIC1

iface enp5s0 inet manual
#Startech Card NIC2

auto bond0
iface bond0 inet manual
#NIC BOND0
    bond-mode 802.3ad
    bond_xmit_hash_policy layer2+3
    bond-slaves enp4s0 enp5s0

iface vmbr1 inet manual
#vBridge to BOND0
    bridge-vlan-aware yes
    bridge-ports bond0

auto vmbr110
iface vmbr110 inet static
#ProxBackup VLAN110
    address 10.110.1.14/27
    bridge-ports vmbr1.110

auto vmbr168
iface vmbr168 inet static
#Internal VLAN168
    address 192.168.120.123/24
    bridge-ports vmbr1.168

The onboard NIC works fine. The VLANs on bond0 aren't working at all. A "show etherchannel summary" on the 3560 shows the bonded NICs working fine.

Any suggestions would be awesome. TIA!
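(On the Proxmox side, the bond and vlan-aware bridge state can be checked with standard bonding/bridge diagnostics; the names below just follow the config above, e.g.:)

Code:
# LACP negotiation state reported by the bonding driver
cat /proc/net/bonding/bond0
# is vmbr1 up and is bond0 attached as a bridge port?
ip -d link show vmbr1
# which VLANs are allowed on the vlan-aware bridge ports
bridge vlan show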
 
Maybe try adding:

Code:
iface vmbr1 inet manual
#vBridge to BOND0
    bridge-ports bond0
    bridge-vlan-aware yes
    bridge-vids 2-4094
 
Hi vesalius and everyone,
I ended up reverting back to Debian Buster and Proxmox Backup v1.1. Once the Proxmox GUI has matured regarding network config and bonding, I'll consider an upgrade. PVE 6.4 does everything I need it to.
 
The correct configuration with a vlan-aware bridge:
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#_vlan_802_1q

Code:
iface vmbr1 inet manual
#vBridge to BOND0
    bridge-vlan-aware yes
    bridge-ports bond0

auto vmbr1.110
iface vmbr1.110 inet static
#ProxBackup VLAN110
    address 10.110.1.14/27


auto vmbr1.168
iface vmbr1.168 inet static
#Internal VLAN168
    address 192.168.120.123/24
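(Once applied, whether the VLAN sub-interfaces actually came up with their addresses can be verified with plain iproute2; the interface names just follow the config above:)

Code:
ip addr show vmbr1.110
ip addr show vmbr1.168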
 
Spirit,
Sorry for the delayed reply. Thank you for clarifying the resolution. I've recently installed a 3rd PVE 6.4 server and set up replication and HA in the cluster. I've got a couple of older desktops I'd like to use to spin up a PVE 7.x box and will report back with the interfaces config for bonded NICs and VLANs once I've done that.
 
First of all, thank you. I had to remove the auto eno1 line, then restarted networking and it worked.

Then I re-installed ifupdown2, went into the GUI, reset the settings, and it recreated the config as it was originally, but now it works with the auto eno1 lines. Is this because ifupdown2 was removed on the upgrade?
 
