After upgrade to PVE 8.0 no network access

Bonus question - should the "main" cluster node be upgraded first??

Thx again!
 
Proxmox VE uses corosync and, for HA, a multi-master system, so there is no "main" node ;)
 
Funny - I have upgraded only one of four nodes so far - but the cluster webpage still shows 7.4 - does this change when all 4 are complete??

 
Did you reload the UI? (Clearing the browser cache is also recommended and sometimes even necessary.)
 
Yes - but oddly it only changed once I reached the 3rd node in the upgrade. Strange, but now all is well! All 4 nodes are on 8.0.3 :)
 
Same problem here: after the upgrade to version 8, boot was blocked waiting for the network service to start.
I resolved it by removing the ntpdate package - hope this helps someone...
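For reference, that removal is just something like the following (assuming the stock Debian package name - check what else might depend on it first):

Code:
# remove the package (and its config) that was blocking the network service at boot
apt purge ntpdate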
 
I fixed the issue below by running "apt install proxmox-ve"; dpkg then asked me about a few things, I just accepted everything, ran apt install proxmox-ve again, and rebooted.

edit: I did not use pve7to8 because I was on Proxmox 7.4-03, I think.
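In other words, roughly this sequence (just a sketch - the exact dpkg prompts about config files will vary):

Code:
apt update
apt install proxmox-ve    # accept the prompts from dpkg
apt install proxmox-ve    # run a second time so nothing is left half-configured
reboot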
I kinda have the same problem. I can't reach the web interface anymore after the update. (Everything worked fine before VE 8, but now it's like this: my computer is connected via the Ethernet port, but the port doesn't light up anymore, so I guess it's not connected.)

When I boot Proxmox normally, my screen shows the "Welcome to Proxmox" screen for a split second and then goes dark forever.
EDIT on normal boot: my screen went black because my Win11 VM took over my GPU, so I can see it now without recovery mode.
When I boot in recovery mode the screen doesn't go dark, but I still can't access the web interface.

ping 192.168.50.1 (gateway)
ping: connect: Network is unreachable

I'm writing this out by hand:

Code:
# /etc/apt/sources.list
deb http://ftp.de.debian.org/debian bookworm main contrib
deb http://ftp.de.debian.org/debian bookworm-updates main contrib
# security updates
deb http://security.debian.org bookworm-security main contrib
deb http://download.proxmox.com/debian/pve

# /etc/network/interfaces

auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    adress 192.168.50.157/24
    gateway 192.168.50.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

iface wlp3s0 inet manual

#zerotier
auto zt0
iface zt0 inet static
    adress 10.147.17.58/16
 
Same issue here. Right after upgrading I noticed that I kept seeing a pending task for apt-update.

During the upgrade I was prompted a few times about replacing config files. Maybe one of the replaced files caused the issue?
Everything seems to be working except that the PVE host has no network access - it can't ping or update.

ip a output compared with /etc/network/interfaces:

Code:
root@pve:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether e0:be:03:18:2a:68 brd ff:ff:ff:ff:ff:ff
    altname enp0s31f6
3: wlp2s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 44:af:28:7b:40:03 brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e0:be:03:18:2a:68 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.96/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::e2be:3ff:fe18:2a68/64 scope link
       valid_lft forever preferred_lft forever
5: vmbr0.2@vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e0:be:03:18:2a:68 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.96/24 scope global vmbr0.2
       valid_lft forever preferred_lft forever
    inet6 fe80::e2be:3ff:fe18:2a68/64 scope link
       valid_lft forever preferred_lft forever
6: vmbr0.5@vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e0:be:03:18:2a:68 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.96/24 scope global vmbr0.5
       valid_lft forever preferred_lft forever
    inet6 fe80::e2be:3ff:fe18:2a68/64 scope link
       valid_lft forever preferred_lft forever
7: vmbr0.15@vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether e0:be:03:18:2a:68 brd ff:ff:ff:ff:ff:ff
    inet 172.16.15.96/24 scope global vmbr0.15
       valid_lft forever preferred_lft forever
    inet6 fe80::e2be:3ff:fe18:2a68/64 scope link
       valid_lft forever preferred_lft forever
8: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr100i0 state UNKNOWN group default qlen 1000
    link/ether 5e:5d:e3:4f:31:14 brd ff:ff:ff:ff:ff:ff
9: vmbr0v2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 6a:8a:61:c8:dd:2c brd ff:ff:ff:ff:ff:ff
10: eno1.2@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v2 state UP group default qlen 1000
    link/ether e0:be:03:18:2a:68 brd ff:ff:ff:ff:ff:ff
11: fwbr100i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9e:13:69:c1:c2:66 brd ff:ff:ff:ff:ff:ff
12: fwpr100p0@fwln100i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v2 state UP group default qlen 1000
    link/ether 7a:71:c2:f7:36:bd brd ff:ff:ff:ff:ff:ff
13: fwln100i0@fwpr100p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr100i0 state UP group default qlen 1000
    link/ether ae:ca:44:9f:e7:db brd ff:ff:ff:ff:ff:ff
14: veth102i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr102i0 state UP group default qlen 1000
    link/ether fe:29:b9:d1:d3:62 brd ff:ff:ff:ff:ff:ff link-netnsid 0
15: fwbr102i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether f2:8b:9c:04:3b:d3 brd ff:ff:ff:ff:ff:ff
16: fwpr102p0@fwln102i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0v2 state UP group default qlen 1000
    link/ether fe:2c:b1:e1:d2:83 brd ff:ff:ff:ff:ff:ff
17: fwln102i0@fwpr102p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr102i0 state UP group default qlen 1000
    link/ether 02:fd:42:4a:93:ba brd ff:ff:ff:ff:ff:ff
Code:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 172.16.0.96/24
        gateway 172.16.0.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto vmbr0.2
iface vmbr0.2 inet static
        address 192.168.1.96/24
#NVR

auto vmbr0.5
iface vmbr0.5 inet static
        address 172.16.0.96/24
        gateway 172.16.0.230
#Corporate

auto vmbr0.15
iface vmbr0.15 inet static
        address 172.16.15.96/24
#Management

Update: After making an edit to the node's networking configuration in the web interface (specifically, removing the gateway from vmbr0.5) and applying the configuration changes, networking started to work as expected again. I then re-added my gateway in the config, refreshed the web interface, and the changes were reflected and still working.

It seems like, for some reason, the config just wasn't being loaded, even after several reboots of the node.
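For anyone stuck in the same state without GUI access: the web UI's "Apply Configuration" roughly corresponds to reloading the interfaces file with ifupdown2, so something like this sketch (assuming the default ifupdown2 stack on PVE 7/8) may re-apply the config from the shell:

Code:
# re-read and apply /etc/network/interfaces without a reboot
ifreload -a

# verify the addresses, bridges and default route actually came up
ip addr
ip route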
 
On my first PVE 8 upgrade, Bookworm decided to rename all interfaces, and therefore the VM bridges did not work... I had to go to the machine physically and fix the main vmbridge to get access again.
I'll post that story in a separate thread.
 
I had the same issue with the network after I updated from 7.4 to 8 on 1 of my 3 servers, the one with an SFP+ card: the interface name had changed. After updating the interface name in /etc/network/interfaces with the one shown by the "ip addr" command and restarting the server, everything worked.
Note: when you run "ip addr" and you see something like this with an "altname":

6: ens1f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000 link/ether 45:54:45:23:46:0c brd ff:ff:ff:ff:ff:ff altname enp101s0f0

do not use the name after "altname" in the /etc/network/interfaces config; instead use the one at the beginning of the line, which in the example above is "ens1f0". This is what fixed my network connection.
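As a concrete sketch of that fix (the interface name ens1f0 is just the one from the example above, and OLDNAME is a placeholder for whatever stale name your config still references):

Code:
# 1. list the current kernel names - use the name at the start of the line,
#    not the "altname"
ip -br link

# 2. swap the stale name for the current one in the network config
sed -i 's/OLDNAME/ens1f0/g' /etc/network/interfaces

# 3. apply with ifupdown2 - or simply reboot
ifreload -a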
 
Hi, I may have run into the same issue after trying to upgrade to Proxmox 8.1.

When I execute systemctl status networking.service I get the error:
error: vmbr0: bridge port enp1s0 does not exists

However, when I execute ip addr I don't see any port I could change it to:
Code:
lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
        valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
        valid_lft forever preferred_lft forever
wlp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 4c:eb:bd:af:14:1f brd ff:ff:ff:ff:ff:ff
vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether e2:59:77:c4:7a:2a brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.99/24 scope global vmbr0
        valid_lft forever preferred_lft forever
    inet6 fe80::e059:77ff:fec4:7a2a/64 scope link
        valid_lft forever preferred_lft forever

My /etc/network/interfaces is set up as follows:
Code:
auto lo
iface lo inet loopback

iface enp1s0 inet manual

auto vmbr0
iface vmbr0 inet static
    adress 192.168.1.99/24
    gateway 192.168.1.254
    bridge ports enp1s0
    bridge-stp off
    bridge-fd 0
    
iface wlp2s0 inet manual
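If it helps anyone debugging this kind of "bridge port ... does not exist" error, these are the usual checks for whether the kernel renamed the NIC or never brought it up at all (a generic sketch, not specific to this box):

Code:
# every interface the kernel knows about, including ones that are down
ip -br link

# did udev/systemd rename the NIC during boot?
dmesg | grep -i "renamed from"

# is the PCI NIC visible, and did a driver bind to it?
lspci -nnk | grep -iA3 ethernet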
 
try running this:
systemctl restart pveproxy.service
 
So I just realized that restarting the proxy service restored networking on PVE, but not on the individual servers - they still have no network access beyond themselves.
 
Thank you for your help.
I found what was causing the issue in this post: https://forum.proxmox.com/threads/proxmox-ve-8-1-released.136960/page-5#post-609522
Everything is working perfectly now.

I tried the fix that was described there, but it didn't help.

I actually have an Intel NIC, but the affected VMs are running as LXC containers and are showing a Realtek NIC.

I have internet on the PVE host, but no inter-connectivity, as if a firewall were blocking it, and no networking at all on the LXC containers.

Anyone have any ideas?
 
Hi Folks,

I thought I would add here, as I came across the same issue (it appears) that some have described here - specifically, after upgrading, networking failed to come back (there were no interface changes, etc.).

I found an issue with ifupdown2: /usr/share/ifupdown2/__main__.py had lost its execute permissions, so `networking` was silently failing. In short:
Code:
chmod +x /usr/share/ifupdown2/__main__.py

followed by a restart of the networking service resolved the issue for me.
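For completeness, the restart step mentioned above is just (assuming the standard `networking` unit used by ifupdown2 on PVE):

Code:
# confirm the execute bit is back, then bring the interfaces up again
ls -l /usr/share/ifupdown2/__main__.py
systemctl restart networking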

Kind Regards,
Ashley
 
