No network after upgrade to Proxmox 6

Fug1

Took the plunge and started upgrading my 3-node cluster to Proxmox 6. So far I've upgraded one node.

After the reboot, my network won't come up. It's possible the upgrade didn't complete successfully, since it finished with this message:

Code:
Running hook script 'zz-pve-efiboot'..

Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private mount namespace..

No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync.

Processing triggers for libgdk-pixbuf2.0-0:amd64 (2.38.1+dfsg-1) ...

Processing triggers for pve-ha-manager (3.0-2) ...

W: Operation was interrupted before it could finish

Here is my /etc/network/interfaces file. Any ideas what might be wrong here?

Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface enp2s0f0 inet manual

iface enp2s0f1 inet manual

auto vmbr0
iface vmbr0 inet static
        address  192.168.15.34
        netmask  255.255.255.0
        gateway  192.168.15.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

auto vmbr2
iface vmbr2 inet static
        address  192.168.16.34
        netmask  255.255.255.0
        bridge-ports enp2s0f0
        bridge-stp off
        bridge-fd 0
 
Please post the output of 'pveversion -v', 'ip a' and the journal since boot ('journalctl -b').
 
Output of 'pveversion -v':

Code:
proxmox-ve: 6.0-2 (running kernel: 5.0.21-1-pve)
pve-manager: 6.0-6 (running version: 6.0-6/c71f879f)
pve-kernel-5.0: 6.0-7
pve-kernel-helper: 6.0-7
pve-kernel-4.15: 5.4-8
pve-kernel-5.0.21-1-pve: 5.0.21-1
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.10.17-2-pve: 4.10.17-20
ceph: 12.2.12-pve1
ceph-fuse: 12.2.12-pve1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.11-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-4
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-7
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-64
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-7
pve-cluster: 6.0-5
pve-container: 3.0-5
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-7
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve2


Output of 'ip a' (note: I manually brought up eno2 to get the server back on the network; eno2 normally isn't used by Proxmox):

Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp2s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:8c:fa:0c:ac:f8 brd ff:ff:ff:ff:ff:ff
3: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:8c:fa:04:26:18 brd ff:ff:ff:ff:ff:ff
4: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:8c:fa:04:26:19 brd ff:ff:ff:ff:ff:ff
    inet 192.168.15.253/24 brd 192.168.15.255 scope global dynamic eno2
       valid_lft 4892sec preferred_lft 4892sec
    inet6 fe80::28c:faff:fe04:2619/64 scope link
       valid_lft forever preferred_lft forever
5: enp2s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:8c:fa:0c:ac:f9 brd ff:ff:ff:ff:ff:ff
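
For reference, getting a spare NIC like eno2 onto the network by hand can be as simple as the following sketch (assuming a DHCP client such as dhclient is available on the host):

Code:
# Bring the unused NIC up and let it grab a DHCP lease
ip link set eno2 up
dhclient eno2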
 

Attachments

  • journalctl.txt
    276.9 KB
Maybe a timing issue?

If I manually run 'ifup vmbr0', the network comes up successfully.
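
For anyone hitting the same symptom, a quick way to confirm that only the bridge failed (rather than the NIC driver) might look like this; the only name specific to my setup is the bridge vmbr0:

Code:
# Check which interfaces exist and whether the bridge got its address
ip -br link
ip -br addr show vmbr0

# See what the networking service logged during this boot
journalctl -b -u networking

# Stopgap: bring the bridge and its port up by hand
ifup vmbr0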

Seems to be the same problem @kukachik reported in that thread. I have the same hardware (Dell C6100).
 
This is somehow working now. I changed some IPMI settings in the BIOS, and it just started working. I reverted the changes to see what might have solved the problem, but it's still working. So I'm not sure why the network wasn't coming up initially, but all is well now.

I've upgraded the rest of the nodes in this server successfully. The other nodes did not experience this issue.
 
Glad it worked out!
Any chance you remember which settings you changed in the BIOS (others with a similar problem could benefit from that information)?

In any case, please mark the thread as 'SOLVED' - it helps others know what to expect.

Thanks!
 
No, I was playing around with the IPMI and network boot settings. I've changed every combination back to see if I could reproduce the problem, but unfortunately I can't.
 
I still seem to be having this problem after a reboot, but it's a bit sporadic. When the network fails to come up on a node, I have to manually start it with `ifup vmbr0`.
 
hm - check the logs from the boot where the problem showed up (`journalctl -b` for the current boot, `journalctl -b -1` for the one before that)

Do you have a DHCP server running on the network your node is connected to?
 
The journalctl output further up in the thread should still be valid, and I'm also attaching a new one here. I can't see anything obvious. It might be udev-related, but I don't know enough about that to troubleshoot further.

Yes, there is a DHCP server running on the network, but none of the IP addresses in question here are managed by DHCP. The IPMI address is statically configured in the BIOS, and the other addresses are configured in Proxmox.
 

Attachments

  • journalctl.txt
    160 KB
Same problem here.

Output of 'ip a' before the update:

Code:
1: lo:
2: enp2s0:
3: eno1:
4: vmbr0:

After the update:

Code:
1: lo:
2: eno0:
3: eno1:
4: vmbr0:
6: tap106i0:

So the interface name changed from enp2s0 to eno0, but /etc/network/interfaces was not updated. Simply editing /etc/network/interfaces to use the new name solved our problem.
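
In case it helps the next person, here is a rough sketch of spotting a rename and fixing the config; the interface names are the ones from this post, and yours will likely differ:

Code:
# Compare the names the kernel sees now against what the config expects
ip -br link
grep -E 'iface|bridge-ports' /etc/network/interfaces

# If e.g. enp2s0 became eno0, point the config at the new name
sed -i 's/enp2s0/eno0/g' /etc/network/interfaces

# Bring the bridge up again (or reboot)
ifup vmbr0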

There's no reason why this shouldn't be in the upgrade manual! It happened to me too. Thank you so much for writing it out clearly.
 
On my Dell C6100 cluster (starting with Proxmox 6.1) I had to blacklist the ipmi_si module to keep IPMI/udev from running amok. After blacklisting it, I got networking and clean reboots back.
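
Roughly, blacklisting a module on a Debian-based system like Proxmox goes as follows; the file name under /etc/modprobe.d/ is just an example, anything ending in .conf works:

Code:
# Tell modprobe not to load ipmi_si
echo "blacklist ipmi_si" > /etc/modprobe.d/blacklist-ipmi.conf

# Rebuild the initramfs so the blacklist also applies early in boot, then reboot
update-initramfs -u -k all
reboot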
 
Same here with the interface renaming, on HP DL servers - it would have been helpful to have this in the Known Issues.
 
