ixgbe initialization fails

Radapompa

New Member
Feb 27, 2018
I have four ports that are supposed to load the ixgbe driver, but when the ports are initialized I get an error message in dmesg. I'm running kernel 4.13.13-6-pve with the ixgbe driver shipped with the installation (5.3.3). I've also tested the latest version (5.3.6), with the same error message. Any idea what I can do to get this working?

dmesg
Code:
[ 2.053856] Intel(R) 10GbE PCI Express Linux Network Driver - version 5.3.3
[ 2.053858] Copyright(c) 1999 - 2017 Intel Corporation.
[ 2.053955] ACPI: PCI Interrupt Link [LN58] enabled at IRQ 58
[ 2.054341] igb 0000:23:00.0: Failed to initialize MSI-X interrupts. Falling back to MSI interrupts.
[ 2.054742] ACPI: PCI Interrupt Link [LN60] enabled at IRQ 60
[ 2.054862] ixgbe: probe of 0000:25:00.0 failed with error -5
[ 2.054956] ACPI: PCI Interrupt Link [LN61] enabled at IRQ 61
[ 2.055041] ixgbe: probe of 0000:25:00.1 failed with error -5
[ 2.055127] ACPI: PCI Interrupt Link [LN64] enabled at IRQ 64
[ 2.055208] ixgbe: probe of 0000:26:00.0 failed with error -5
[ 2.055288] ACPI: PCI Interrupt Link [LN65] enabled at IRQ 65
[ 2.055369] ixgbe: probe of 0000:26:00.1 failed with error -5

lspci
Code:
25:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
25:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
26:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
26:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
 
I've tried downgrading the kernel to 4.10.1-2-pve, which ships ixgbe driver version 5.0.4. Still the same error. I couldn't find any older kernel in apt, and I couldn't compile any older driver for that kernel either.
 
Any thoughts on this? I've confirmed this issue on multiple servers. I created a ticket on the Intel driver SourceForge page, but they said it's a Proxmox issue.
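
For reference, error -5 is -EIO. If more detail helps, standard commands like these can narrow down where the probe gives up (a sketch; nothing here is Proxmox-specific):

Code:
# Look for EEPROM/firmware complaints around the failed probe
dmesg | grep -iE 'ixgbe|eeprom|firmware'

# Dump the PCI config of one failing port to check for reported errors
lspci -vvnn -s 25:00.0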
 
We have a preview kernel, v4.15, with in-tree Intel NIC drivers; you may test whether it works with those.
https://forum.proxmox.com/threads/4-15-based-test-kernel-for-pve-5-x-available.42097/
Thanks for the tip Alwin. We're doing some migration right now to our new Proxmox environment, so I will try it out when we have space available to reinstall another host. Currently we have three hosts in HA, and I would like to avoid installing a test kernel on one of them.

Best Regards
Johan
 
The error remains in kernel 4.15.17-1-pve. Any other suggestions on how to solve this?

dmesg | grep ixgbe
Code:
[ 2.283232] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver - version 5.1.0-k
[ 2.283233] ixgbe: Copyright (c) 1999-2016 Intel Corporation.
[ 2.283802] ixgbe: probe of 0000:25:00.0 failed with error -5
[ 2.284089] ixgbe: probe of 0000:25:00.1 failed with error -5
[ 2.284325] ixgbe: probe of 0000:26:00.0 failed with error -5
[ 2.284556] ixgbe: probe of 0000:26:00.1 failed with error -5

pveversion -v
Code:
proxmox-ve: 5.2-2 (running kernel: 4.15.17-1-pve)
pve-manager: 5.2-1 (running version: 5.2-1/0fcd7879)
pve-kernel-4.15: 5.2-1
pve-kernel-4.13: 5.1-44
pve-kernel-4.15.17-1-pve: 4.15.17-9
pve-kernel-4.13.16-2-pve: 4.13.16-48
pve-kernel-4.13.13-6-pve: 4.13.13-42
pve-kernel-4.13.13-2-pve: 4.13.13-33
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-31
libpve-guest-common-perl: 2.0-16
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-23
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
openvswitch-switch: 2.7.0-2
proxmox-widget-toolkit: 1.0-18
pve-cluster: 5.0-27
pve-container: 2.0-23
pve-docs: 5.2-4
pve-firewall: 3.0-9
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-5
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
qemu-server: 5.0-26
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.8-pve1~bpo9
 
The error remains in kernel 4.15.17-1-pve. Any other suggestions on how to solve this?
The in-tree ixgbe module is at version 5.1. You can test with the Intel drivers from their website; also check the readme, as there are some notes about the driver.
 
The in-tree ixgbe module is at version 5.1. You can test with the Intel drivers from their website; also check the readme, as there are some notes about the driver.
Now I don't understand. So you can only use the driver in the 4.15 kernel if you have Proxmox 5.1? You stated earlier that the driver was in the 4.15 kernel, but it's not in Proxmox 5.2? Or do you mean something else by version 5.1?
 
Yes, the Intel driver is at version 5.1 in the v4.15 kernel.
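
If you want to double-check which ixgbe version a given kernel actually ships, you can query the module directly; for example:

Code:
# Print the file and version of the in-tree ixgbe module for the running kernel
modinfo -k "$(uname -r)" ixgbe | grep -E '^(filename|version)'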
 
Mine works perfectly:
Code:
Linux x9sri 4.15.17-1-pve #1 SMP PVE 4.15.17-9 (Wed, 9 May 2018 13:31:43 +0200) x86_64

root@x9sri:~#  dmesg |grep ixgbe
[    1.311374] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver - version 5.1.0-k
[    1.311375] ixgbe: Copyright (c) 1999-2016 Intel Corporation.
[    2.019317] ixgbe 0000:02:00.0: Multiqueue Enabled: Rx Queue count = 24, Tx Queue count = 24 XDP Queue count = 0
[    2.061829] ixgbe 0000:02:00.0: PCI Express bandwidth of 32GT/s available
[    2.061830] ixgbe 0000:02:00.0: (Speed:8.0GT/s, Width: x4, Encoding Loss:<2%)
[    2.090039] ixgbe 0000:02:00.0: MAC: 4, PHY: 0, PBA No: 000500-000
[    2.090041] ixgbe 0000:02:00.0: 24:5e:be:1c:bf:ac
[    2.249439] ixgbe 0000:02:00.0: Intel(R) 10 Gigabit Network Connection
[    2.939859] ixgbe 0000:02:00.1: Multiqueue Enabled: Rx Queue count = 24, Tx Queue count = 24 XDP Queue count = 0
[    2.981960] ixgbe 0000:02:00.1: PCI Express bandwidth of 32GT/s available
[    2.981962] ixgbe 0000:02:00.1: (Speed:8.0GT/s, Width: x4, Encoding Loss:<2%)
[    3.010025] ixgbe 0000:02:00.1: MAC: 4, PHY: 0, PBA No: 000500-000
[    3.010026] ixgbe 0000:02:00.1: 24:5e:be:1c:bf:ad
[    3.169311] ixgbe 0000:02:00.1: Intel(R) 10 Gigabit Network Connection
[    3.169882] ixgbe 0000:02:00.1 enp2s0f1: renamed from eth1
[    3.200320] ixgbe 0000:02:00.0 enp2s0f0: renamed from eth0
[   19.813418] ixgbe 0000:02:00.0: registered PHC device on enp2s0f0
[   19.976678] ixgbe 0000:02:00.1 enp2s0f1: changing MTU from 1500 to 9000
[   20.077109] ixgbe 0000:02:00.1: registered PHC device on enp2s0f1
[   25.593362] ixgbe 0000:02:00.0 enp2s0f0: NIC Link is Up 10 Gbps, Flow Control: None
[   25.856143] ixgbe 0000:02:00.1 enp2s0f1: NIC Link is Up 10 Gbps, Flow Control: None
[  382.211822] ixgbe 0000:02:00.0 enp2s0f0: changing MTU from 1500 to 9000
[  388.433628] ixgbe 0000:02:00.0 enp2s0f0: NIC Link is Up 10 Gbps, Flow Control: None
root@x9sri:~#  pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.15.17-1-pve)
pve-manager: 5.2-1 (running version: 5.2-1/0fcd7879)
pve-kernel-4.15: 5.2-1
pve-kernel-4.15.17-1-pve: 4.15.17-9
pve-kernel-4.13.13-2-pve: 4.13.13-33
ceph: 12.2.5-pve1
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 4.0.2-2
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-31
libpve-guest-common-perl: 2.0-16
libpve-http-server-perl: 2.0-8
libpve-storage-perl: 5.0-23
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 0.6-4
openvswitch-switch: 2.7.0-2
proxmox-widget-toolkit: 1.0-18
pve-cluster: 5.0-27
pve-container: 2.0-23
pve-docs: 5.2-4
pve-firewall: 3.0-8
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-5
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
qemu-server: 5.0-26
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.8-pve1~bpo9
 
Mine works perfectly:

Could you please send the output of lspci? At least for the 10Gbit network ports, just to compare whether we have similar hardware.

Mine is:
Code:
25:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
25:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
26:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
26:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
 
In 4.15.17-2-pve I am seeing continual Adapter Reset messages, which make our 10G storage/cluster network INCREDIBLY unstable. I had to revert to the older kernel 4.13.13-6-pve to get everything up and running. I would really prefer not to have to keep building the module from Intel to keep the system running. Suggestions?
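
In case it helps anyone doing the same revert: with GRUB you can pin the older kernel so the choice survives reboots. A minimal sketch, assuming GRUB_DEFAULT=saved is set in /etc/default/grub (run update-grub once after changing that); the menu titles vary per system, so list yours first:

Code:
# List the boot entries so you can copy the exact title of the old kernel
grep -E "menuentry '" /boot/grub/grub.cfg | cut -d"'" -f2

# Pin it using 'submenu>entry' syntax; the title below is only an example
grub-set-default 'Advanced options for Proxmox Virtual Environment GNU/Linux>Proxmox Virtual Environment GNU/Linux, with Linux 4.13.13-6-pve'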
 
Could you please send the output of lspci? At least for the 10Gbit network ports, just to compare whether we have similar hardware.

Mine is:
Code:
25:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
25:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
26:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
26:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

In my test lab I've switched out all the ixgbe cards (Intel X520-DA2 and X550) and replaced them with Mellanox NICs for better driver support. The Intel cards had too much packet loss and very high latency.
 
Code:
81:00.0 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
81:00.1 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)

We did end up having to download the Intel drivers and install them. Since the system was upgraded to 5.2 as part of this, we were seeing other issues running the older kernel with the rest of the upgraded packages. Fortunately this was not that difficult, and in case it helps I have included my directions below for reference; of course you need to understand your own environment, and I assume no liability. Note that the download below is the English-language driver package. Logged in as root and running the current kernel:

Code:
apt-get install build-essential pve-headers
cd /usr/src
wget https://downloadmirror.intel.com/14687/eng/ixgbe-5.3.7.tar.gz
tar xzpf ixgbe-5.3.7.tar.gz
cd ixgbe-5.3.7/src
make install
modinfo ixgbe

Although it showed the new kernel driver immediately, I still made sure all VMs were shut down and rebooted the system. It has been running for more than a day now and seems stable. It is just going to be annoying to have to recompile the driver every time the team releases a new kernel; hopefully they will ship updated drivers!
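
One way to avoid recompiling by hand after every kernel release is to register the source tree with DKMS so the module is rebuilt automatically on kernel upgrades (the matching pve-headers must still be installed for each kernel). A sketch, assuming the 5.3.7 source from above is still unpacked in /usr/src; the dkms.conf is my own, as the Intel tarball does not ship one:

Code:
apt-get install dkms
# Describe the module to DKMS (the Intel source lives in the src/ subdirectory)
cat > /usr/src/ixgbe-5.3.7/dkms.conf <<'EOF'
PACKAGE_NAME="ixgbe"
PACKAGE_VERSION="5.3.7"
BUILT_MODULE_NAME[0]="ixgbe"
BUILT_MODULE_LOCATION[0]="src"
DEST_MODULE_LOCATION[0]="/updates/dkms"
MAKE[0]="make -C src"
CLEAN="make -C src clean"
AUTOINSTALL="yes"
EOF
# Register, build, and install for the running kernel
dkms add -m ixgbe -v 5.3.7
dkms build -m ixgbe -v 5.3.7
dkms install -m ixgbe -v 5.3.7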
 
Excellent, are those the 5.3.7 drivers, and do you plan to maintain this going forward with new releases?
Yes, it contains the 5.3.7 drivers. I will test it out and see if they work better. When I manually installed the 5.3.6 drivers earlier, I still couldn't get the interfaces started, so I'm hoping 5.3.7 will do the trick.
 
Still failing with the 5.3.7 drivers. I'm not sure whether the drivers are simply no longer compatible with the network card. Any suggestions, or is there anywhere else I can turn to get an answer on what the problem might be?
 
Still failing here too!

# lspci |grep Eth
......
06:00.0 Ethernet controller: Intel Corporation 82598EB 10-Gigabit AF Dual Port Network Connection (rev 01)
06:00.1 Ethernet controller: Intel Corporation 82598EB 10-Gigabit AF Dual Port Network Connection (rev 01)

Code:
[    1.149071] Intel(R) 10GbE PCI Express Linux Network Driver - version 5.3.7
[    1.149071] Copyright(c) 1999 - 2018 Intel Corporation.
[    1.149377] ixgbe 0000:06:00.0: can't disable ASPM; OS doesn't have ASPM control
[    1.233292] ixgbe 0000:06:00.0: failed to load because an unsupported SFP+ or QSFP module type was detected.
[    1.233356] ixgbe 0000:06:00.0: Reload the driver after installing a supported module.
[    1.233665] ixgbe 0000:06:00.1: can't disable ASPM; OS doesn't have ASPM control
[    1.317368] ixgbe 0000:06:00.1: failed to load because an unsupported SFP+ or QSFP module type was detected.
[    1.317430] ixgbe 0000:06:00.1: Reload the driver after installing a supported module.

# pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.15.17-3-pve)
pve-manager: 5.2-3 (running version: 5.2-3/785ba980)
pve-kernel-4.15: 5.2-3
pve-kernel-4.15.17-3-pve: 4.15.17-13
pve-kernel-4.15.17-1-pve: 4.15.17-9
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-34
libpve-guest-common-perl: 2.0-17
libpve-http-server-perl: 2.0-9
libpve-storage-perl: 5.0-23
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-1
proxmox-widget-toolkit: 1.0-19
pve-cluster: 5.0-27
pve-container: 2.0-23
pve-docs: 5.2-4
pve-firewall: 3.0-12
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
qemu-server: 5.0-29
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.9-pve1~bpo9
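
One thing worth testing for the "unsupported SFP+ or QSFP module" failure above: the out-of-tree ixgbe driver has an allow_unsupported_sfp module parameter that may let it bind anyway (I am not certain it covers the 82598, and unsupported optics can still misbehave, so treat this purely as a workaround to try):

Code:
# One-off test: reload the driver allowing unsupported SFP+ modules
modprobe -r ixgbe
modprobe ixgbe allow_unsupported_sfp=1

# If that works, make it persistent and refresh the initramfs
echo 'options ixgbe allow_unsupported_sfp=1' > /etc/modprobe.d/ixgbe.conf
update-initramfs -u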
 
