[SOLVED] No network after Proxmox kernel upgrade

snpz

Well-Known Member
Mar 18, 2013
Long story short:
Did apt upgrades on the first server of my 6-server Proxmox 6.3 cluster.
Exact versions:

pveversion --verbose
proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph: 15.2.8-pve2
ceph-fuse: 15.2.8-pve2
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.2-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-3
libpve-guest-common-perl: 3.1-4
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-6
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.8-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-5
pve-cluster: 6.2-1
pve-container: 3.3-3
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-1
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.1.0-8
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-5
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.5-pve1

To:
pveversion --verbose
proxmox-ve: 6.3-1 (running kernel: 5.4.103-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-7
pve-kernel-helper: 6.3-7
pve-kernel-5.4.103-1-pve: 5.4.103-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph: 15.2.9-pve1
ceph-fuse: 15.2.9-pve1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.10-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-6
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.2.0-3
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-8
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.3-pve2

After that, one of the network interfaces in my bond setup does not come up any more! After rebooting back to the previous kernel 5.4.78-2-pve, everything works as expected!
I have two Intel X710 SFP+ NICs in the server:
03:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
03:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)

With kernel 5.4.103-1-pve, one of the interfaces shows:
Slave Interface: enp66s0f0
MII Status: down
Speed: Unknown
Duplex: Unknown

With pve-kernel-5.4.78-2-pve:
Slave Interface: enp66s0f0
MII Status: up
Speed: 10000 Mbps
Duplex: full

Can anyone explain what has changed in pve-kernel-5.4.103-1-pve?
 
Here you go! Besides, I'm back on kernel version 5.4.78-2-pve.
What I tried:
Disabled all vmbrs and VLANs, so only the physical interfaces and bonds are enabled. Still the same with the new kernel, while with the old one all interfaces come up. The N3K debug log has no error messages or anything like that!

Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ec:f4:bb:cc:50:98 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ec:f4:bb:cc:50:99 brd ff:ff:ff:ff:ff:ff
4: enp3s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:1b:21:c0:a8:ac brd ff:ff:ff:ff:ff:ff
5: eno3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ec:f4:bb:cc:50:9a brd ff:ff:ff:ff:ff:ff
6: eno4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether ec:f4:bb:cc:50:9b brd ff:ff:ff:ff:ff:ff
    inet 10.222.9.12/24 scope global eno4
       valid_lft forever preferred_lft forever
    inet6 fe80::eef4:bbff:fecc:509b/64 scope link
       valid_lft forever preferred_lft forever
7: enp3s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether 00:1b:21:c0:a8:ae brd ff:ff:ff:ff:ff:ff
8: enp66s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:1b:21:c0:a8:ac brd ff:ff:ff:ff:ff:ff
9: enp66s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether 00:1b:21:c0:a8:ae brd ff:ff:ff:ff:ff:ff
10: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:1b:21:c0:a8:ac brd ff:ff:ff:ff:ff:ff
    inet6 fe80::21b:21ff:fec0:a8ac/64 scope link
       valid_lft forever preferred_lft forever
11: bond0.11@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:1b:21:c0:a8:ac brd ff:ff:ff:ff:ff:ff
    inet 10.222.11.12/24 scope global bond0.11
       valid_lft forever preferred_lft forever
    inet6 fe80::21b:21ff:fec0:a8ac/64 scope link
       valid_lft forever preferred_lft forever
12: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:1b:21:c0:a8:ae brd ff:ff:ff:ff:ff:ff
    inet6 fe80::21b:21ff:fec0:a8ae/64 scope link
       valid_lft forever preferred_lft forever
13: bond1.12@bond1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:1b:21:c0:a8:ae brd ff:ff:ff:ff:ff:ff
    inet 10.222.12.12/24 scope global bond1.12
       valid_lft forever preferred_lft forever
    inet6 fe80::21b:21ff:fec0:a8ae/64 scope link
       valid_lft forever preferred_lft forever
14: bond0.13@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 00:1b:21:c0:a8:ac brd ff:ff:ff:ff:ff:ff
15: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:1b:21:c0:a8:ac brd ff:ff:ff:ff:ff:ff
    inet 10.222.13.12/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::21b:21ff:fec0:a8ac/64 scope link
       valid_lft forever preferred_lft forever
16: bond0.10@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr1 state UP group default qlen 1000
    link/ether 00:1b:21:c0:a8:ac brd ff:ff:ff:ff:ff:ff
17: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:1b:21:c0:a8:ac brd ff:ff:ff:ff:ff:ff
    inet6 fe80::21b:21ff:fec0:a8ac/64 scope link
       valid_lft forever preferred_lft forever
18: bond0.14@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr2 state UP group default qlen 1000
    link/ether 00:1b:21:c0:a8:ac brd ff:ff:ff:ff:ff:ff
19: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:1b:21:c0:a8:ac brd ff:ff:ff:ff:ff:ff
    inet6 fe80::21b:21ff:fec0:a8ac/64 scope link
       valid_lft forever preferred_lft forever
20: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether f6:3d:15:a8:75:f6 brd ff:ff:ff:ff:ff:ff
21: tap100i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether ae:e2:3b:10:14:05 brd ff:ff:ff:ff:ff:ff
22: tap100i2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr2 state UNKNOWN group default qlen 1000
    link/ether f6:1c:37:b7:e4:8b brd ff:ff:ff:ff:ff:ff
23: veth101i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:35:57:c8:05:43 brd ff:ff:ff:ff:ff:ff link-netnsid 0
 
Did some testing and it gets pretty interesting:
Code:
dmesg | grep gb                                                                                                                                                             [22:44:01]
[    2.830801] igb: Intel(R) Gigabit Ethernet Network Driver - version 5.6.0-k
[    2.830803] igb: Copyright (c) 2007-2014 Intel Corporation.
[    2.835061] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver - version 5.1.0-k
[    2.835063] ixgbe: Copyright (c) 1999-2016 Intel Corporation.
[    2.905014] igb 0000:01:00.0: added PHC on eth0
[    2.905017] igb 0000:01:00.0: Intel(R) Gigabit Ethernet Network Connection
[    2.905020] igb 0000:01:00.0: eth0: (PCIe:5.0Gb/s:Width x4) ec:f4:bb:cc:50:98
[    2.905312] igb 0000:01:00.0: eth0: PBA No: G10565-000
[    2.905315] igb 0000:01:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
[    2.954848] igb 0000:01:00.1: added PHC on eth1
[    2.954851] igb 0000:01:00.1: Intel(R) Gigabit Ethernet Network Connection
[    2.954854] igb 0000:01:00.1: eth1: (PCIe:5.0Gb/s:Width x4) ec:f4:bb:cc:50:99
[    2.955146] igb 0000:01:00.1: eth1: PBA No: G10565-000
[    2.955149] igb 0000:01:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
[    3.013607] ixgbe 0000:03:00.0: Multiqueue Enabled: Rx Queue count = 32, Tx Queue count = 32 XDP Queue count = 0
[    3.013916] ixgbe 0000:03:00.0: 32.000 Gb/s available PCIe bandwidth (5 GT/s x8 link)
[    3.014044] ixgbe 0000:03:00.0: MAC: 2, PHY: 20, SFP+: 5, PBA No: Unknown
[    3.014049] ixgbe 0000:03:00.0: 00:1b:21:c0:a8:ac
[    3.016986] ixgbe 0000:03:00.0: Intel(R) 10 Gigabit Network Connection
[    3.017090] libphy: ixgbe-mdio: probed
[    3.047399] igb 0000:01:00.2: added PHC on eth3
[    3.047402] igb 0000:01:00.2: Intel(R) Gigabit Ethernet Network Connection
[    3.047406] igb 0000:01:00.2: eth3: (PCIe:5.0Gb/s:Width x4) ec:f4:bb:cc:50:9a
[    3.047697] igb 0000:01:00.2: eth3: PBA No: G10565-000
[    3.047700] igb 0000:01:00.2: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
[    3.109222] igb 0000:01:00.3: added PHC on eth4
[    3.109225] igb 0000:01:00.3: Intel(R) Gigabit Ethernet Network Connection
[    3.109228] igb 0000:01:00.3: eth4: (PCIe:5.0Gb/s:Width x4) ec:f4:bb:cc:50:9b
[    3.109519] igb 0000:01:00.3: eth4: PBA No: G10565-000
[    3.109524] igb 0000:01:00.3: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
[    3.136037] igb 0000:01:00.0 eno1: renamed from eth0
[    3.224351] igb 0000:01:00.1 eno2: renamed from eth1
[    3.225172] ixgbe 0000:03:00.1: Multiqueue Enabled: Rx Queue count = 32, Tx Queue count = 32 XDP Queue count = 0
[    3.225482] ixgbe 0000:03:00.1: 32.000 Gb/s available PCIe bandwidth (5 GT/s x8 link)
[    3.225609] ixgbe 0000:03:00.1: MAC: 2, PHY: 20, SFP+: 6, PBA No: Unknown
[    3.225615] ixgbe 0000:03:00.1: 00:1b:21:c0:a8:ae
[    3.398760] ixgbe 0000:03:00.1: Intel(R) 10 Gigabit Network Connection
[    3.398815] libphy: ixgbe-mdio: probed
[    3.398821] igb 0000:01:00.2 eno3: renamed from eth3
[    3.432349] igb 0000:01:00.3 eno4: renamed from eth4
[    3.565561] ixgbe 0000:42:00.0: Multiqueue Enabled: Rx Queue count = 32, Tx Queue count = 32 XDP Queue count = 0
[    3.565870] ixgbe 0000:42:00.0: 32.000 Gb/s available PCIe bandwidth (5 GT/s x8 link)
[    3.566001] ixgbe 0000:42:00.0: MAC: 2, PHY: 20, SFP+: 5, PBA No: Unknown
[    3.566007] ixgbe 0000:42:00.0: 00:1b:21:c1:3c:fc
[    3.568326] ixgbe 0000:42:00.0: Intel(R) 10 Gigabit Network Connection
[    3.568430] libphy: ixgbe-mdio: probed
[    3.733583] ixgbe 0000:42:00.1: Multiqueue Enabled: Rx Queue count = 32, Tx Queue count = 32 XDP Queue count = 0
[    3.733886] ixgbe 0000:42:00.1: 32.000 Gb/s available PCIe bandwidth (5 GT/s x8 link)
[    3.734012] ixgbe 0000:42:00.1: MAC: 2, PHY: 20, SFP+: 6, PBA No: Unknown
[    3.734014] ixgbe 0000:42:00.1: 00:1b:21:c1:3c:fe
[    3.736260] ixgbe 0000:42:00.1: Intel(R) 10 Gigabit Network Connection
[    3.736314] libphy: ixgbe-mdio: probed
[    3.738846] ixgbe 0000:03:00.0 enp3s0f0: renamed from eth2
[    3.776379] ixgbe 0000:03:00.1 enp3s0f1: renamed from eth0
[    3.848742] ixgbe 0000:42:00.1 enp66s0f1: renamed from eth3
[    3.900464] ixgbe 0000:42:00.0 enp66s0f0: renamed from eth1

And here it stops! The interfaces stay like this. If I restart the networking service manually later (in my case, after ~30 sec), all network interfaces come UP:
Code:
[   34.986453] ixgbe 0000:42:00.0: registered PHC device on enp66s0f0
[   35.156806] ixgbe 0000:42:00.0 enp66s0f0: detected SFP+: 5
[   35.360943] ixgbe 0000:42:00.1: registered PHC device on enp66s0f1
[   35.665428] ixgbe 0000:03:00.0: registered PHC device on enp3s0f0
[   35.851960] ixgbe 0000:42:00.0 enp66s0f0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[   35.920005] ixgbe 0000:42:00.1 enp66s0f1: detected SFP+: 6
[   35.971899] ixgbe 0000:03:00.1: registered PHC device on enp3s0f1
[   36.169573] ixgbe 0000:03:00.0: removed PHC on enp3s0f0
[   36.389514] ixgbe 0000:03:00.0: registered PHC device on enp3s0f0
[   36.505882] ixgbe 0000:42:00.0: removed PHC on enp66s0f0
[   36.615947] ixgbe 0000:42:00.1 enp66s0f1: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[   36.676721] ixgbe 0000:03:00.0 enp3s0f0: detected SFP+: 5
[   36.733485] ixgbe 0000:42:00.0: registered PHC device on enp66s0f0
[   37.316574] ixgbe 0000:03:00.1: removed PHC on enp3s0f1
[   37.371937] ixgbe 0000:03:00.0 enp3s0f0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[   37.439881] ixgbe 0000:42:00.0 enp66s0f0: detected SFP+: 5
[   37.541482] ixgbe 0000:03:00.1: registered PHC device on enp3s0f1
[   37.665390] ixgbe 0000:42:00.1: removed PHC on enp66s0f1
[   37.892391] ixgbe 0000:42:00.1: registered PHC device on enp66s0f1
[   38.124245] igb 0000:01:00.3 eno4: igb: eno4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[   38.135921] ixgbe 0000:42:00.0 enp66s0f0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[   38.203898] ixgbe 0000:03:00.1 enp3s0f1: detected SFP+: 6
[   38.899906] ixgbe 0000:03:00.1 enp3s0f1: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[   38.967910] ixgbe 0000:42:00.1 enp66s0f1: detected SFP+: 6
[   39.663899] ixgbe 0000:42:00.1 enp66s0f1: NIC Link is Up 10 Gbps, Flow Control: RX/TX

Right now the workaround is to run a script during the boot process that checks the status of the interfaces and, if one of them is DOWN, simply restarts the networking service (sketched below).
But I guess this is not how it is supposed to work.
Maybe this is because we have 8 network interfaces in the servers (4x 1 Gbit and 4x 10 Gbit SFP+) and some initialization process or sequence does not work as expected?
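A minimal sketch of such a check could look like this (assuming the bond0/bond1 setup shown above; the delay and how the script is triggered are up to you):
Bash:
#!/bin/bash
# Sketch of the boot-time workaround described above:
# if any bond slave is still down, restart the networking service.
# Run it delayed after boot (e.g. from a systemd unit or cron @reboot).

sleep 30  # give the NICs time to finish initialising

for bond in bond0 bond1; do
    # /proc/net/bonding/<bond> lists every slave with its MII Status
    if grep -q "MII Status: down" "/proc/net/bonding/$bond"; then
        logger "bond-check: a slave of $bond is down, restarting networking"
        systemctl restart networking
        break
    fi
done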
 
Problem solved!
1. We have X520-DA2 and X710-DA2 Ethernet controllers in the servers;
2. pve-kernel-5.4 uses ixgbe v5.1.0-k for the X520-DA2 and i40e v2.8.20-k for the X710-DA2;
3. The latest ixgbe driver version available on the Intel web site is 5.11.3;
4. The latest i40e driver version available on the Intel web site is 2.14.13;
5. Updated the drivers and the problem is gone.
This pve-5.4 kernel uses Intel modules from the previous decade, so I would suggest keeping more or less up-to-date driver versions in the PVE kernel.
i40e is 8 versions behind the current one, and ixgbe is so old that I can't even find that version on the Intel web site.
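If you want to check which driver versions a booted PVE kernel actually ships, a quick sketch (module names as above):
Bash:
# versions of the in-tree modules bundled with the currently booted kernel
# (the "-k" suffix marks the in-kernel driver; Intel's out-of-tree builds
#  report their own version number, e.g. 5.11.3 / 2.14.13)
modinfo -F version ixgbe
modinfo -F version i40e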
 
I guess you could open a bug report and force an upgrade of the drivers that way?
Maybe we need to wait for Ubuntu to update theirs, and the bump should be made there.
Anyway, open a bug report; it will probably do more than a forum post.
 

So why did it work before 6.3? I guess they are not downgrading the drivers?
 
Actually, nobody checked whether all IFs were up in the beginning. They are bonded, so at least one IF was up at all times. Only later did I find out on the switch that a random IF is down after a reboot.
One more thing I did - upgraded the FW of the NICs to the latest Intel-provided FW.
 
Are you using a Supermicro board? Just asking if you could share the motherboard type. It could be a problem with the H11DPI - we had a similar problem today and it was resolved by using a different PCIe slot for the X710.
 
No, Dell servers are used in our case.
Still - first check the X710 FW and driver version using ethtool -i $interface_name. I suggest having FW version 8.30 and a driver version of at least 2.14.13.
I had FW version 6.01 and driver version 2.8.40 at the beginning!
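For example, to check all four 10G ports at once (interface names as earlier in this thread - adjust to your own):
Bash:
# print driver, driver version and NIC firmware for each 10G interface
for nic in enp3s0f0 enp3s0f1 enp66s0f0 enp66s0f1; do
    echo "== $nic =="
    ethtool -i "$nic" | grep -E '^(driver|version|firmware-version):'
done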
 
I would like to ask how to upgrade the driver for the network card under Proxmox. Thank you very much.
 
Hi!
First of all, download the newest driver from Intel. For example, for the X710: https://downloadcenter.intel.com/do...bit-Ethernet-Network-Connections-under-Linux-
Unpack it, install the kernel headers, development tools, gcc, etc., and then:
cd i40e-<x.x.x>/src/
make install
rmmod i40e; modprobe i40e

But I would recommend checking the interface firmware version as well. If it is old enough, upgrade it too.
In the X710 case, the latest FW is: https://downloadcenter.intel.com/do...for-Intel-Ethernet-Adapters-700-Series-Linux-
Download it, unpack it and run nvmupdate64e. Follow the instructions and you are done!
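Put together, the i40e part could look roughly like this (only a sketch - <x.x.x> is a placeholder for whatever version you downloaded from Intel, and the headers have to match the running PVE kernel):
Bash:
# build prerequisites (headers for the running PVE kernel plus a compiler)
apt install pve-headers-$(uname -r) build-essential

# <x.x.x> = the i40e version downloaded from Intel
tar zxf i40e-<x.x.x>.tar.gz
cd i40e-<x.x.x>/src/
make install

# reload the driver (briefly takes the X710 ports down) - or just reboot
rmmod i40e; modprobe i40e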
 
Thanks, bro. I will try...
 
Brand new install of Proxmox 7.1 (UEFI with ZFS root) on my ASUS X570 with an Intel i211 NIC - got the same issue.
If I down and then up the interface 30+ seconds after boot (from the console), it all starts working.

Gonna hold off swapping out my i5, as I have no iLO and don't wanna rely on a cron job to get my network running after a reboot.

Code:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.1-8 (running version: 7.1-8/5b267f33)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.13.19-1-pve: 5.13.19-3
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-2
pve-container: 4.1-3
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-1
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3
 
How about NIC driver and FW version?
root@pve1:~# ethtool -i enp3s0
driver: igb
version: 5.13.19-2-pve
firmware-version: 0. 6-1
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

Just about to install the intel driver/firmware from their website.

You're right - fixed for me as well. Here is a more copy-and-paste-friendly guide that worked for me.

Bash:
apt install pve-headers-$(uname -r)
cd /root/
wget https://downloadmirror.intel.com/682701/igb-5.8.5.tar.gz
sha1sum igb-5.8.5.tar.gz
tar zxf igb-5.8.5.tar.gz
cd igb-5.8.5/src/
make install
reboot
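After the reboot, ethtool -i should report the out-of-tree driver's own version (5.8.5 here) instead of the kernel version string shown above - a quick way to confirm the new igb module is actually in use (interface name as in the earlier output; adjust to yours):
Bash:
# with the in-tree igb, "version:" shows the kernel version (e.g. 5.13.19-2-pve);
# with the Intel out-of-tree build it should show 5.8.5
ethtool -i enp3s0 | grep -E '^(driver|version|firmware-version):'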
 
