Network Issues with Proxmox7 -> Proxmox8

adamb

Pulling my hair out on a front end that has been through Proxmox4 -> Proxmox5 -> Proxmox6 -> Proxmox7 -> Proxmox8 upgrades.

This front end was using the old-style NIC naming scheme (eth0, eth1, eth2, etc.).

Typically not a huge deal.

- Move the persistent-net rules file in /etc/udev/rules.d/ out of the way and reboot
- The system comes back up with the new predictable names on the NICs
- Use ls /sys/class/net to determine the new NIC names
- Adjust /etc/network/interfaces and reboot
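The last step is usually just a search-and-replace in the interfaces file. A scratch-file sketch (the eth0 -> enp5s0 mapping here is only an assumption for illustration; always check ls /sys/class/net on the actual host before editing the live file):

```shell
# Demo on a scratch copy of a minimal interfaces file; the eth0 -> enp5s0
# mapping is an assumption - verify against `ls /sys/class/net` first.
printf 'auto eth0\niface eth0 inet dhcp\n' > /tmp/interfaces.demo
sed -i 's/\beth0\b/enp5s0/g' /tmp/interfaces.demo
cat /tmp/interfaces.demo    # now reads: auto enp5s0 / iface enp5s0 inet dhcp
```

On the real host you would run the same substitution against /etc/network/interfaces (after backing it up) and then reboot.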

I have done this tens, maybe even hundreds, of times on front ends over the years with no issues.

However, Proxmox7 -> Proxmox8 on these older front ends is not behaving as expected. I am aware of the notes in the Proxmox 7 -> 8 upgrade wiki about interface renaming.

This doesn't seem to be that simple.

root@supprox3:~# ls /sys/class/net/
bond0 bond1 bonding_masters enp4s0f0 enp4s0f1 enp5s0 enp6s0 ens5f0 ens5f1 lo vmbr0

The only way I can get vmbr0, bond0, and bond1 to come up is with the following /etc/network/interfaces file.

root@supprox3:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto bond0
iface bond0 inet static
address 10.211.45.3/24
bond-slaves enp5s0 enp6s0
bond-miimon 100
bond-mode active-backup

auto bond1
iface bond1 inet manual
bond-slaves enp4s0f0 enp4s0f1
bond-miimon 100
bond-mode active-backup
bond-primary enp4s0f0
mtu 9000

auto vmbr0
iface vmbr0 inet static
address 10.80.16.156/16
gateway 10.80.1.5
bridge-ports ens5f0
bridge-stp off
bridge-fd 0


Both bond0 and bond1 look good and are up. Let's just look at bond1 for now.

root@supprox3:~# cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v6.5.11-8-pve

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: enp4s0f0 (primary_reselect always)
Currently Active Slave: enp4s0f0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: enp4s0f0
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:11:0a:68:5c:94
Slave queue ID: 0

Slave Interface: enp4s0f1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:11:0a:68:5c:95
Slave queue ID: 0

I want bond1 to be part of vmbr1 like below.

auto vmbr1
iface vmbr1 inet static
address 10.210.45.98/24
bridge-ports bond1
bridge-stp off
bridge-fd 0
mtu 9000

The interfaces file now looks like this.

root@supprox3:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto bond0
iface bond0 inet static
address 10.211.45.3/24
bond-slaves enp5s0 enp6s0
bond-miimon 100
bond-mode active-backup

auto bond1
iface bond1 inet manual
bond-slaves enp4s0f0 enp4s0f1
bond-miimon 100
bond-mode active-backup
bond-primary enp4s0f0
mtu 9000

auto vmbr0
iface vmbr0 inet static
address 10.80.16.156/16
gateway 10.80.1.5
bridge-ports ens5f0
bridge-stp off
bridge-fd 0

auto vmbr1
iface vmbr1 inet static
address 10.210.45.98/24
bridge-ports bond1
bridge-stp off
bridge-fd 0
mtu 9000

After rebooting the host with vmbr1 added, bond1 no longer comes up.

bond1 looks like this after adding vmbr1:

root@supprox3:~# cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v6.5.11-8-pve

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: enp4s0f0 (primary_reselect always)
Currently Active Slave: None
MII Status: down
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: enp4s0f0
MII Status: down
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:11:0a:68:5c:94
Slave queue ID: 0

Slave Interface: enp4s0f1
MII Status: down
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:11:0a:68:5c:95
Slave queue ID: 0


Any networking change I make in the GUI is never applied either. ifupdown2 is installed.

If I remove vmbr1, bond1 comes up properly. I'm at a loss.
 
Well, I figured it out.

The following file was causing the issue with vmbr1 and all the weirdness.

/etc/network/if-up.d/vzifup-post

What led me to the issue was this line in the ifupdown2 logs:

2024-02-21 07:08:58,290: MainThread: ifupdown: scheduler.py:331:run_iface_list(): error: vmbr1 : bond1 : (enp4s0f1 : (enp4s0f1: up cmd '/etc/network/if-up.d/vzifup-post' failed: returned 127))

I moved vzifup-post out of the way and everything started behaving exactly as I expected. I am guessing this is a remnant from the system being so old.
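For anyone who hits the same thing: exit code 127 means the shell could not find a command the hook script tries to run. A scratch reproduction of that failure mode (the hook body below is a stand-in, not the real vzifup-post contents):

```shell
# Recreate the failure mode in a scratch dir. The hook body is a stand-in;
# the point is only that it execs a binary which no longer exists.
mkdir -p /tmp/hooks.demo
cat > /tmp/hooks.demo/vzifup-post <<'EOF'
#!/bin/sh
exec /usr/sbin/some-gone-binary "$IFACE"   # hypothetical missing command
EOF
chmod +x /tmp/hooks.demo/vzifup-post
IFACE=bond1 /tmp/hooks.demo/vzifup-post || echo "hook failed with exit $?"   # 127 = command not found
```

Since the hook exits non-zero, ifupdown2 treats the whole bring-up as failed, which matches the error chain in the log above (vmbr1 : bond1 : enp4s0f1).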
 
For reference, here is the clean content of /etc/network/if-up.d/ on Proxmox VE 8.0:

Code:
/etc/network/if-up.d# ls -lah
total 49K
drwxr-xr-x 2 root root    9 Feb 13 11:35 .
drwxr-xr-x 8 root root   14 Jan 12 14:59 ..
-rwxr-xr-x 1 root root  966 May 10  2020 bridgevlan
-rwxr-xr-x 1 root root  409 May 10  2020 bridgevlanport
-rwxr-xr-x 1 root root  145 May 13  2021 chrony
-rwxr-xr-x 1 root root 1.7K Dec 20  2022 ethtool
-rwxr-xr-x 1 root root 1.8K Sep 27  2016 ifenslave
-rwxr-xr-x 1 root root  236 May 10  2020 mtu
-rwxr-xr-x 1 root root 1.2K May  3  2023 postfix
 
