Partial networking loss on 5.4.128-1-pve

mrapajic

New Member
Dec 20, 2019
Hi,

we are using Proxmox 6.4 on ProLiant BL460c Gen9 servers. After upgrading our test server to the newest packages and kernel (from 5.4.126-1-pve to 5.4.128-1-pve), we lost 6 of 8 network interfaces.

Code:
[Thu Aug 12 10:46:55 2021] bnx2x: QLogic 5771x/578xx 10/20-Gigabit Ethernet Driver bnx2x 1.713.36-0 (2014/02/10)
[Thu Aug 12 10:46:55 2021] bnx2x 0000:06:00.0: msix capability found
[Thu Aug 12 10:46:55 2021] bnx2x 0000:06:00.0: part number 0-0-0-0
[Thu Aug 12 10:46:56 2021] bnx2x 0000:06:00.0: 63.008 Gb/s available PCIe bandwidth (8 GT/s x8 link)
[Thu Aug 12 10:46:56 2021] bnx2x 0000:06:00.1: msix capability found
[Thu Aug 12 10:46:56 2021] bnx2x 0000:06:00.1: part number 0-0-0-0
[Thu Aug 12 10:46:56 2021] bnx2x 0000:06:00.1: 63.008 Gb/s available PCIe bandwidth (8 GT/s x8 link)
[Thu Aug 12 10:46:56 2021] bnx2x 0000:06:00.2: msix capability found
[Thu Aug 12 10:46:56 2021] bnx2x 0000:06:00.2: part number 0-0-0-0
[Thu Aug 12 10:46:56 2021] bnx2x: probe of 0000:06:00.2 failed with error -22
[Thu Aug 12 10:46:56 2021] bnx2x 0000:06:00.3: msix capability found
[Thu Aug 12 10:46:56 2021] bnx2x 0000:06:00.3: part number 0-0-0-0
[Thu Aug 12 10:46:56 2021] bnx2x: probe of 0000:06:00.3 failed with error -22
[Thu Aug 12 10:46:56 2021] bnx2x 0000:06:00.4: msix capability found
[Thu Aug 12 10:46:56 2021] bnx2x 0000:06:00.4: part number 0-0-0-0
[Thu Aug 12 10:46:56 2021] bnx2x: probe of 0000:06:00.4 failed with error -22
[Thu Aug 12 10:46:56 2021] bnx2x 0000:06:00.5: msix capability found
[Thu Aug 12 10:46:56 2021] bnx2x 0000:06:00.5: part number 0-0-0-0
[Thu Aug 12 10:46:57 2021] bnx2x: probe of 0000:06:00.5 failed with error -22
[Thu Aug 12 10:46:57 2021] bnx2x 0000:06:00.6: msix capability found
[Thu Aug 12 10:46:57 2021] bnx2x 0000:06:00.6: part number 0-0-0-0
[Thu Aug 12 10:46:57 2021] bnx2x: probe of 0000:06:00.6 failed with error -22
[Thu Aug 12 10:46:57 2021] bnx2x 0000:06:00.7: msix capability found
[Thu Aug 12 10:46:57 2021] bnx2x 0000:06:00.7: part number 0-0-0-0
[Thu Aug 12 10:46:57 2021] bnx2x: probe of 0000:06:00.7 failed with error -22
[Thu Aug 12 10:46:57 2021] bnx2x 0000:06:00.1 eno50: renamed from eth1
[Thu Aug 12 10:46:57 2021] bnx2x 0000:06:00.0 eno49: renamed from eth0
[Thu Aug 12 10:47:02 2021] bnx2x 0000:06:00.0 eno49: using MSI-X  IRQs: sp 105  fp[0] 107 ... fp[7] 114
[Thu Aug 12 10:47:03 2021] bnx2x 0000:06:00.1 eno50: using MSI-X  IRQs: sp 115  fp[0] 117 ... fp[7] 124
[Thu Aug 12 10:47:03 2021] bnx2x 0000:06:00.0 eno49: NIC Link is Up, 20000 Mbps full duplex, Flow control: ON - receive & transmit
[Thu Aug 12 10:47:04 2021] bnx2x 0000:06:00.1 eno50: NIC Link is Up, 20000 Mbps full duplex, Flow control: ON - receive & transmit
[Thu Aug 12 10:47:04 2021] bnx2x 0000:06:00.0 eno49: using MSI-X  IRQs: sp 105  fp[0] 107 ... fp[7] 114
[Thu Aug 12 10:47:05 2021] bnx2x 0000:06:00.0 eno49: NIC Link is Up, 20000 Mbps full duplex, Flow control: ON - receive & transmit
[Thu Aug 12 10:47:05 2021] bnx2x 0000:06:00.1 eno50: using MSI-X  IRQs: sp 115  fp[0] 117 ... fp[7] 124
[Thu Aug 12 10:47:06 2021] bnx2x 0000:06:00.1 eno50: NIC Link is Up, 20000 Mbps full duplex, Flow control: ON - receive & transmit
[Thu Aug 12 10:47:07 2021] bnx2x 0000:06:00.0 eno49: using MSI-X  IRQs: sp 105  fp[0] 107 ... fp[7] 114
[Thu Aug 12 10:47:08 2021] bnx2x 0000:06:00.0 eno49: NIC Link is Up, 20000 Mbps full duplex, Flow control: ON - receive & transmit
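
Only eno49 and eno50 come up; the other six functions fail to probe with error -22. To double-check which functions actually got a netdev, I ran roughly the following (a quick sketch; 06:00 is our adapter's bus address, adjust it if yours differs):

Code:
# list the network devices that actually exist
ip -br link
# map each netdev back to its PCI function (our adapter sits on bus 06, device 00)
ls -l /sys/class/net/*/device | grep '06:00'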

All eight PCI devices are still visible:

Code:
# lspci | grep Ethernet
06:00.0 Ethernet controller: Broadcom Limited BCM57840 NetXtreme II 10/20-Gigabit Ethernet (rev 11)
06:00.1 Ethernet controller: Broadcom Limited BCM57840 NetXtreme II 10/20-Gigabit Ethernet (rev 11)
06:00.2 Ethernet controller: Broadcom Limited BCM57840 NetXtreme II 10/20-Gigabit Ethernet (rev 11)
06:00.3 Ethernet controller: Broadcom Limited BCM57840 NetXtreme II 10/20-Gigabit Ethernet (rev 11)
06:00.4 Ethernet controller: Broadcom Limited BCM57840 NetXtreme II 10/20-Gigabit Ethernet (rev 11)
06:00.5 Ethernet controller: Broadcom Limited BCM57840 NetXtreme II 10/20-Gigabit Ethernet (rev 11)
06:00.6 Ethernet controller: Broadcom Limited BCM57840 NetXtreme II 10/20-Gigabit Ethernet (rev 11)
06:00.7 Ethernet controller: Broadcom Limited BCM57840 NetXtreme II 10/20-Gigabit Ethernet (rev 11)
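
Driver binding per function can also be checked with lspci -k (again just a sketch, adjust the slot to your system); presumably the six failed functions show no "Kernel driver in use" line:

Code:
# show driver binding for every function of the adapter (bus 06, device 00)
lspci -nnk -s 06:00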

pveversion output after the upgrade:

Code:
proxmox-ve: 6.4-1 (running kernel: 5.4.128-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-5
pve-kernel-helper: 6.4-5
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-5.4.124-1-pve: 5.4.124-2
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.55-1-pve: 5.4.55-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-4.15: 5.4-8
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-1-pve: 5.3.18-1
pve-kernel-5.3.13-2-pve: 5.3.13-2
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-9-pve: 4.15.18-30
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.12-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.2-4
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.5-pve1~bpo10+1


When I booted back into 5.4.126-1-pve, all of the interfaces, bonds and VLANs worked normally again. Do you have any idea which kernel change from 126 to 128 could be causing this problem?
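
For what it's worth, a rough way to compare the two builds is to skim the packaged changelog (assuming the standard Debian doc path; it may be named differently on your system):

Code:
# skim the changelog shipped with the new kernel package
zcat /usr/share/doc/pve-kernel-5.4.128-1-pve/changelog.Debian.gz | head -n 40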

Thanks

Michael


EDIT 17.08.2021 - Tried an upgrade to PVE 7; same problem.

Code:
[Tue Aug 17 15:30:40 2021] bnx2x 0000:06:00.0: msix capability found
[Tue Aug 17 15:30:40 2021] bnx2x 0000:06:00.0: part number 0-0-0-0
[Tue Aug 17 15:30:40 2021] bnx2x 0000:06:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
[Tue Aug 17 15:30:40 2021] bnx2x 0000:06:00.1: msix capability found
[Tue Aug 17 15:30:40 2021] bnx2x 0000:06:00.1: part number 0-0-0-0
[Tue Aug 17 15:30:40 2021] bnx2x 0000:06:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
[Tue Aug 17 15:30:40 2021] bnx2x 0000:06:00.2: msix capability found
[Tue Aug 17 15:30:40 2021] bnx2x 0000:06:00.2: part number 0-0-0-0
[Tue Aug 17 15:30:41 2021] bnx2x: probe of 0000:06:00.2 failed with error -22
[Tue Aug 17 15:30:41 2021] bnx2x 0000:06:00.3: msix capability found
[Tue Aug 17 15:30:41 2021] bnx2x 0000:06:00.3: part number 0-0-0-0
[Tue Aug 17 15:30:41 2021] bnx2x: probe of 0000:06:00.3 failed with error -22
[Tue Aug 17 15:30:41 2021] bnx2x 0000:06:00.4: msix capability found
[Tue Aug 17 15:30:41 2021] bnx2x 0000:06:00.4: part number 0-0-0-0
[Tue Aug 17 15:30:41 2021] bnx2x: probe of 0000:06:00.4 failed with error -22
[Tue Aug 17 15:30:41 2021] bnx2x 0000:06:00.5: msix capability found
[Tue Aug 17 15:30:41 2021] bnx2x 0000:06:00.5: part number 0-0-0-0
[Tue Aug 17 15:30:41 2021] bnx2x: probe of 0000:06:00.5 failed with error -22
[Tue Aug 17 15:30:41 2021] bnx2x 0000:06:00.6: msix capability found
[Tue Aug 17 15:30:41 2021] bnx2x 0000:06:00.6: part number 0-0-0-0
[Tue Aug 17 15:30:41 2021] bnx2x: probe of 0000:06:00.6 failed with error -22
[Tue Aug 17 15:30:41 2021] bnx2x 0000:06:00.7: msix capability found
[Tue Aug 17 15:30:41 2021] bnx2x 0000:06:00.7: part number 0-0-0-0
[Tue Aug 17 15:30:41 2021] bnx2x: probe of 0000:06:00.7 failed with error -22
[Tue Aug 17 15:30:41 2021] bnx2x 0000:06:00.1 eno50: renamed from eth1
[Tue Aug 17 15:30:41 2021] bnx2x 0000:06:00.0 eno49: renamed from eth0
[Tue Aug 17 15:30:46 2021] bnx2fc: QLogic FCoE Driver bnx2fc v2.12.13 (October 15, 2015)
[Tue Aug 17 15:30:47 2021] bnx2x 0000:06:00.0 eno49: using MSI-X  IRQs: sp 106  fp[0] 108 ... fp[7] 115
[Tue Aug 17 15:30:48 2021] bnx2x 0000:06:00.1 eno50: using MSI-X  IRQs: sp 116  fp[0] 118 ... fp[7] 125
[Tue Aug 17 15:30:48 2021] bnx2x 0000:06:00.0 eno49: NIC Link is Up, 20000 Mbps full duplex, Flow control: ON - receive & transmit
[Tue Aug 17 15:30:49 2021] bnx2x 0000:06:00.1 eno50: NIC Link is Up, 20000 Mbps full duplex, Flow control: ON - receive & transmit
[Tue Aug 17 15:30:50 2021] bnx2x 0000:06:00.0 eno49: using MSI-X  IRQs: sp 106  fp[0] 108 ... fp[7] 115
[Tue Aug 17 15:30:51 2021] bnx2x 0000:06:00.0 eno49: NIC Link is Up, 20000 Mbps full duplex, Flow control: ON - receive & transmit
[Tue Aug 17 15:30:51 2021] bnx2x 0000:06:00.1 eno50: using MSI-X  IRQs: sp 116  fp[0] 118 ... fp[7] 125
[Tue Aug 17 15:30:52 2021] bnx2x 0000:06:00.1 eno50: NIC Link is Up, 20000 Mbps full duplex, Flow control: ON - receive & transmit




Code:
proxmox-ve: 7.0-2 (running kernel: 5.11.22-3-pve)
pve-manager: 7.0-11 (running version: 7.0-11/63d82f4e)
pve-kernel-5.11: 7.0-6
pve-kernel-helper: 7.0-6
pve-kernel-5.4: 6.4-5
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.11.22-3-pve: 5.11.22-6
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-5.4.124-1-pve: 5.4.124-2
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.55-1-pve: 5.4.55-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-4.15: 5.4-8
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-1-pve: 5.3.18-1
pve-kernel-5.3.13-2-pve: 5.3.13-2
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-9-pve: 4.15.18-30
ceph-fuse: 14.2.21-1
corosync: 3.1.2-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.21-pve1
libproxmox-acme-perl: 1.2.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-5
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-2
libpve-storage-perl: 7.0-10
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.8-1
proxmox-backup-file-restore: 2.0.8-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.0-9
pve-docs: 7.0-5
pve-edk2-firmware: 3.20200531-1
pve-firewall: 4.2-2
pve-firmware: 3.2-4
pve-ha-manager: 3.3-1
pve-i18n: 2.4-1
pve-qemu-kvm: 6.0.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-13
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve1
 
Hi MrApajic,

Did you get any further with this? I am also having the same issue on a few HPE blades.

Thanks
 
Hi,

I have filed a bug report: https://bugzilla.proxmox.com/show_bug.cgi?id=3585

According to this bug report, https://bugzilla.proxmox.com/show_bug.cgi?id=3558, the workaround is to disable SR-IOV globally in the BIOS and directly on the network cards. For the moment we are unable to apply it, because our current HPE firmware 4.50 has a bug that prevents disabling SR-IOV on the network cards (Virtual Connect adapters): https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-a00008131en_us

We have to upgrade to HPE Virtual Connect firmware version 4.60.
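
Until then, a quick way to check from the OS whether SR-IOV is still exposed on the adapter is something like this (just a sketch; adjust the PCI address to your system):

Code:
# does the physical function advertise the SR-IOV capability?
lspci -vvv -s 06:00.0 | grep -i -A2 'SR-IOV'
# sysfs reports the supported VF count when the capability is present
cat /sys/bus/pci/devices/0000:06:00.0/sriov_totalvfs 2>/dev/null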

I hope this helps.
 
