No Networking After Upgrade to 8.2

jt-socal

I upgraded to 8.2 and now have no network connectivity. I still have the same IP, but cannot ping out from the machine to the network or to any VM, and I am also unable to ping the host from other machines. Obviously I cannot connect to the web interface or reach any VM. Any suggestions, please?

I tried disabling the pve-firewall and proxmox-firewall services and rebooting, which did not help. I was not using the firewall previously.

I rebooted into the prior kernel and networking works again. Any suggestions for debugging the kernel upgrade?
root@pve:~# lspci | grep -i eth
67:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
67:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
67:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
67:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
b7:00.0 Ethernet controller: Intel Corporation Ethernet Connection X722 for 10GBASE-T (rev 04)
b7:00.1 Ethernet controller: Intel Corporation Ethernet Connection X722 for 10GBASE-T (rev 04)
b7:00.2 Ethernet controller: Intel Corporation Ethernet Connection X722 for 10GbE SFP+ (rev 04)
b7:00.3 Ethernet controller: Intel Corporation Ethernet Connection X722 for 10GbE SFP+ (rev 04)
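For reference, you can keep the working kernel as the default while debugging. A minimal sketch using proxmox-boot-tool (the version string below is an example; use the one shown by the list command):

Code:
# list installed kernels
proxmox-boot-tool kernel list
# pin the known-good kernel until the issue is resolved
proxmox-boot-tool kernel pin 6.5.13-5-pve
# later, return to the default boot order
proxmox-boot-tool kernel unpin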
 
What does your current network configuration look like?

Code:
ip a
cat /etc/network/interfaces

Can you attach the output of the journal since the upgrade?

Code:
journalctl --since '2024-04-24' > journal.txt
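If the full journal is too large to share, a narrower slice around the network errors can also help; a hedged sketch (adjust the date and patterns as needed):

Code:
journalctl --since '2024-04-24' | grep -Ei 'ifup|vmbr|eno|enp|network' > journal-net.txt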
 
Thanks; redacted so it would not be displayed publicly. Happy to provide it directly.
 
It looks like the identifier of your network card changed with the new kernel, causing your network configuration to fail:

Code:
Apr 24 06:32:29 pve networking[1065]: error: vmbr0: bridge port eno8 does not exist
Apr 24 06:32:29 pve /usr/sbin/ifup[1065]: error: vmbr0: bridge port eno8 does not exist
Apr 24 06:32:29 pve networking[1065]: warning: vmbr0: apply bridge ports settings: bridge configuration failed (missing ports)
Apr 24 06:32:29 pve /usr/sbin/ifup[1065]: warning: vmbr0: apply bridge ports settings: bridge configuration failed (missing ports)
Apr 24 06:32:30 pve networking[1065]: error: >>> Full logs available in: /var/log/ifupdown2/network_config_ifupdown2_163_Apr-24-2024_06:32:29.579878 <<<
Apr 24 06:32:30 pve /usr/sbin/ifup[1065]: >>> Full logs available in: /var/log/ifupdown2/network_config_ifupdown2_163_Apr-24-2024_06:32:29.579878 <<<

You will need to adapt your network configuration to use the proper identifier for the new kernel. Sadly, this can happen with kernel upgrades. Try using the more stable 'Predictable Network Interface Names' format, e.g. enp183s0f3; the probability of this happening is lower with those names (albeit still not zero).
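On a running system, recent kernels expose the predictable name as an 'altname' on the interface, so you can look it up without rebooting. A quick check (interface names will differ on your hardware):

Code:
# show interfaces together with their alternative (predictable) names
ip link show | grep -E '^[0-9]+:|altname'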
 
Thanks, I deleted the prior post with info I would prefer not be public; I can provide it directly if needed. I'll test the new names and kernels this weekend. Thanks again.
 
Thanks, so, like this?
1. boot into new kernel
2. ip a
3. cd /etc/network
4. cp interfaces interfaces.backup.just.in.case
5. nano -w interfaces
a. replace the interface names with the output of ip a, save and exit
6. service networking restart
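That is essentially it. As a shell sketch of the same steps (hedged: ifupdown2's ifreload -a can be used instead of restarting the networking service):

Code:
# after booting the new kernel, note the new names (steps 1-2)
ip a
# back up the current config (steps 3-4)
cd /etc/network
cp interfaces interfaces.backup.just.in.case
# replace the old interface names with the new ones (step 5)
nano -w interfaces
# apply the configuration (step 6)
ifreload -a    # or: systemctl restart networking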
 
Also had this happen. Really pissed me off. Some warning in the upgrade notes would have been nice; I could have prepared for it.

Edit: there is a caveat in the upgrade notes. I'm dumb.
 
In my case the built-in NIC in my Asus PN50 kept working, but my secondary NIC (a USB adapter with the ASIX AX88179 chipset) stopped working. I assumed either that cheapo NIC had finally died or that we somehow lost the driver with the update. I swapped it for another USB NIC I had lying around (after removing the bridge port over it and deleting the non-working NIC) and that set up just fine. Reading this thread now makes me think I can reverse the process and go back to the one I had.

My main Proxmox machine, which has Intel NICs (enp and ens), did not have issues.
 
Is there any way to determine or predict the interface name that will be used after rebooting into the new kernel?
 
Hello,

The igb driver in the 6.x kernels seems pickier about the NIC's NVM and refuses to enable it.

Maybe you are affected by the same issue I have (see my thread 'PVE8 Intel I350-T4 NIC driver issues': https://forum.proxmox.com/threads/pve8-intel-i350-t4-nic-driver-issues.145422/).

Check whether the kernel created the interface for your I350 NIC:

# ip link

Also check what the kernel thought of the I350 NIC during boot:

# dmesg | grep igb
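If the interface does show up, ethtool can confirm the driver and firmware in use (an extra, hedged check; the interface name is an example):

# ethtool -i eno1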

See what I posted.

Hope you do not have my issue.

Max
 
Also had this happen. Really pissed me off. Some warning in the upgrade notes would have been nice; I could have prepared for it.

Edit: there is a caveat in the upgrade notes. I'm dumb.
Nah. You are not the only one who missed that.

Also, the linked docs from that warning say that one could "[...] pin the version v252, which is the latest naming scheme version for a fresh Proxmox VE 8.0 installation [...]" which made me wonder why I didn't encounter the interface naming change when I upgraded to PVE 8.0 but did encounter the issue today. Running `udevadm test-builtin net_id /sys/class/net/eno1` on my remaining PVE 8.1 node reports "Using default interface naming scheme 'v252'."

Running `udevadm test-builtin net_id /sys/class/net/eno1np0` (the new name) on one of my PVE 8.2 nodes reports the same "Using default interface naming scheme 'v252'."

I think I found the answer in the SYSTEMD-UDEVD.SERVICE(8) manual page:
Note that selecting a specific scheme is not sufficient to fully stabilize interface naming: the naming is generally derived from driver attributes exposed by the kernel. As the kernel is updated, previously missing attributes systemd-udevd.service is checking might appear, which affects older name derivation algorithms, too.

So I have decided to follow the documentation for overriding network device names and use MAC addresses to set custom interface names.
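For anyone looking for a concrete starting point, such a .link file might look like the sketch below (the MAC address and the name lan0 are placeholders; the file name must sort before the default 99-default.link):

Code:
# /etc/systemd/network/10-lan0.link
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=lan0

After creating it, update /etc/network/interfaces to use the new name; it may also be necessary to refresh the initramfs (update-initramfs -u -k all) so the rename applies early during boot.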

Predictable Network Interface Names is a bit of a misnomer once you start digging in. But to be fair, it is a tough nut to crack.

I hope that helps.
 
Also had this happen. Really pissed me off. Some warning in the upgrade notes would have been nice; I could have prepared for it.

Edit: there is a caveat in the upgrade notes. I'm dumb.
When I upgraded my PVE 7 to 8 cluster, I read and re-read the instructions over several days. I've never felt the need to do this for any sub-version (7.x, for example) upgrade. I'll now be reading upgrade notes. I'm grateful to have found this thread, but I'm definitely frustrated.
 
All, I’d like to share some recent changes I’ve made to the /etc/network/interfaces file after the upgrade.

These adjustments have proven useful in our Proxmox environment, and I hope they may benefit others as well. Before proceeding, please ensure you create a backup of the original file: cp /etc/network/interfaces /etc/network/interfaces.old

Network Hardware Configuration:
  • We are utilising four 10GbE Intel network interfaces, which have been configured into two LACP bonded pairs.
  • The first bond is dedicated to Proxmox networking (9000 MTU), while the second bond handles VM data traffic (1500 MTU).

Best Practice: Capture Output for Comparison:
  • When troubleshooting network issues or assessing server health, consider capturing the output of the following shell command:
    ip a | more

    This provides a comprehensive view of network interfaces, IP addresses, and related details.
  • I recommend comparing this output between the current incident server and older, non-patched servers. Identifying discrepancies can help pinpoint configuration issues or potential vulnerabilities.

High Level Process:
  • Console Login to the Server
  • Backup the Interfaces File
  • Edit the Interfaces File
  • Confirm Correctness of File
  • Reboot the Proxmox Server
  • Network Login to the Server
  • Confirm operations for all connected interfaces and bonds

< = original interfaces file
> = updated interfaces file

Code:
< auto eno1
< iface eno1 inet manual
---
> auto eno1np0
> iface eno1np0 inet manual

< auto eno2
< iface eno2 inet manual
---
> auto eno2np1
> iface eno2np1 inet manual

< auto ens1f0
< iface ens1f0 inet manual
---
> auto ens1f0np0
> iface ens1f0np0 inet manual

< auto ens1f1
< iface ens1f1 inet manual
---
> auto ens1f1np1
> iface ens1f1np1 inet manual

< bond-slaves eno1 ens1f0
---
> bond-slaves eno1np0 ens1f0np0

< bond-slaves eno2 ens1f1
---
> bond-slaves eno2np1 ens1f1np1
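The same renames can also be scripted; a hedged sketch assuming exactly the old/new pairs above (the word boundaries prevent re-matching names that were already renamed):

Code:
cp /etc/network/interfaces /etc/network/interfaces.old
sed -i -E 's/\beno1\b/eno1np0/g; s/\beno2\b/eno2np1/g; s/\bens1f0\b/ens1f0np0/g; s/\bens1f1\b/ens1f1np1/g' /etc/network/interfaces
# review the result before rebooting
diff /etc/network/interfaces.old /etc/network/interfaces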
 
Debian 12 + Proxmox. I first installed Proxmox 8.1, then upgraded to 8.2 and now have no network.

What I discovered:
- The interface name is the same: eno1 (as displayed in the ip a output)
- The bridge and its VLAN interface described in /etc/network/interfaces are configured correctly: both are displayed in the ip a output and have addresses
- No network traffic goes in or out
- Pings to local addresses (192.168.1.100 and 127.0.0.1) work fine
- If I disable the bridge and add a VLAN interface with the IP address directly on eno1, networking is restored

Supermicro X11DPX-T with Intel X550 LAN (ixgbe driver).

Things I tried:
- Changing the interface name to eno1np0
- Booting with net.naming-scheme=v252
- Booting with net.ifnames=1 biosdevname=0

Network config:

Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet manual
       bridge-ports eno1
       bridge-stp off
       bridge-fd 0
       bridge_hello 2
       bridge_maxage 12

auto vmbr0.91
iface vmbr0.91 inet static
       address 192.168.1.100/24
       gateway 192.168.1.1

Please help.
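Since the name did not change in this case, a few generic diagnostics might narrow it down (a hedged suggestion, not a known fix):

Code:
# is the port attached to the bridge and forwarding?
bridge link
ip -d link show vmbr0
# does tagged traffic for VLAN 91 arrive on the physical port at all?
tcpdump -ni eno1 vlan 91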
 
I have the same issue. I just upgraded from 8.1 to 8.2 and my additional USB NICs disappeared. Fortunately my primary network cards are fine, but I use those for management and replication between nodes; the missing USB cards were assigned for service access.

Is it due to a driver being removed from kernel support, and will it be restored in the future? I can live with it for the time being, but I'd like to know whether I need to search for new NICs for my Dell micro nodes.

EDIT: It seems my USB AX88179 NICs were renamed as well, but it took multiple reboots before they showed up in the GUI. I modified the vmbrX interface and assigned the correct name. Boy, I hope this is not going to happen again.
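For reference, the kernel logs such renames at boot, which makes them easier to spot; a quick, hedged check:

Code:
dmesg | grep -i renamed
# or, from the journal for the current boot
journalctl -b -k | grep -i renamed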
 
(quoting the /etc/network/interfaces rename diff posted above)
Thank you for this. Saved me a lot of time and guessing.
 