Linux Kernel 5.4 for Proxmox VE

Has anyone used the 5.4 kernel to get in-tree support for the *new* Intel X710 NICs (released in 2019) via the i40e module? I had to build the module from the Intel source package for pve-kernel-5.3, as it does not support these new cards (for example, https://ark.intel.com/content/www/u.../intel-ethernet-network-adapter-x710-t2l.html).

There are indications that the drivers are included and working, but some confirmed uses would be nice.

I'm on the latest PVE version with a 5.4 kernel and I can confirm these cards (X710-T2L) do not work OOTB!
 
Hmmm - today (well, ~17 hours ago), it seems my Proxmox server rebooted itself.

It's running:
Linux proxmox 5.4.34-1-pve #1 SMP PVE 5.4.34-2 (Thu, 07 May 2020 10:02:02 +0200) x86_64

There doesn't seem to be anything in /var/log indicating why the system reset, only that it came back up and all the VMs started OK...
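
In case it helps anyone chasing a similar unexplained reset, these are the generic commands I'd start with (nothing Proxmox-specific; the previous-boot journal is only there if persistent journald logging is enabled):

Code:
# recent reboots/shutdowns recorded in wtmp
last -x reboot shutdown | head

# log of the previous boot (requires /var/log/journal to exist)
journalctl -b -1 -e

# any panic/watchdog traces that made it to disk
grep -i -e panic -e watchdog /var/log/kern.log*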
 
I'm on the latest PVE version with a 5.4 kernel and I can confirm these cards do not work OOTB!

It's enabled as a module - is this the right driver?

Code:
# grep -i i40e /boot/config-5.4.34-1-pve
CONFIG_I40E=m
CONFIG_I40E_DCB=y
CONFIG_I40EVF=m
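
One can also query the module shipped with the running kernel directly, rather than just the build config (generic commands, output omitted here):

Code:
# version string of the i40e module shipped with this kernel
modinfo i40e | grep -E '^(filename|version)'

# check whether the module is currently loaded
lsmod | grep i40e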
 
It's enabled as a module - is this the right driver?

Code:
# grep -i i40e /boot/config-5.4.34-1-pve
CONFIG_I40E=m
CONFIG_I40E_DCB=y
CONFIG_I40EVF=m
Correct. It seems the i40e module is compiled from version 2.8.20. This works with the Intel X710-DA2 (which I also have). Support for the X710-T2L was first added in 2.8.43, released on 6/5/2019. It seems Bengt Nolin was correct that this module needs recompiling before these cards can be used.
 
Correct. It seems the i40e module is compiled from version 2.8.20.

Correct:
Code:
[63241.054338] i40e: Intel(R) Ethernet Connection XL710 Network Driver - version 2.8.20-k
[63241.054340] i40e: Copyright (c) 2013 - 2019 Intel Corporation.

I'd guess you'll have to rebuild the newer module, or wait until it gets shipped into the upstream kernel as a newer version.
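
Roughly, rebuilding the out-of-tree Intel driver looks like this; treat it as a sketch only, the tarball version (2.8.43 here) and download location are whatever Intel currently publishes, and the headers for the running PVE kernel must be installed:

Code:
# build prerequisites plus headers matching the running kernel
apt-get install build-essential pve-headers-$(uname -r)

# unpack the Intel i40e source package (version is an example)
tar xzf i40e-2.8.43.tar.gz
cd i40e-2.8.43/src

# build and install the module for the running kernel
make
make install

# reload the driver (briefly drops the links on that card)
rmmod i40e
modprobe i40e

Keep in mind that a module built this way has to be rebuilt after every kernel update (or wrapped in DKMS), which is why waiting for the in-tree backport is usually the easier route.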
 
Hi,

I'm having issues with the new kernel.

When I run apt-get upgrade, I get the following error:

Code:
dpkg: error processing package proxmox-ve (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:
 pve-kernel-5.4.34-1-pve
 pve-kernel-5.4
 proxmox-ve
E: Sub-process /usr/bin/dpkg returned an error code (1)

If I try to repair the installation...

Code:
root@pve2020:~# apt-get install --reinstall pve-kernel-5.4.34-1-pve
Reading package lists... Done
Building dependency tree
Reading state information... Done
0 upgraded, 0 newly installed, 1 reinstalled, 0 to remove and 0 not upgraded.
3 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
E: Internal Error, No file name for pve-kernel-5.4.34-1-pve:amd64

pveversion output
Code:
pve-manager/6.2-4/9824574a (running kernel: 5.3.18-3-pve)

The server has been rebooted a couple of times, but the error persists.

It's a brand new server (one month running) with the following HW:
  • Ryzen 5 1600 AF
  • AsRock X570 Pro4
  • 32 GB RAM
Any ideas on how to solve this?

Thanks!
 
Correct. It seems the i40e module is compiled from version 2.8.20. This works with the Intel X710-DA2 (which I also have). Support for the X710-T2L was first added in 2.8.43, released on 6/5/2019. It seems Bengt Nolin was correct that this module needs recompiling before these cards can be used.

That's a bummer! Thanks for sharing your experience with these cards, though.
 
Did you run an apt-get dist-upgrade? Never run apt-get upgrade in PVE!

Yes, that's what I always run. To reproduce the error, I ran apt-get upgrade.

The results are the following:

Code:
root@pve2020:~# apt-get dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
3 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n]
Setting up pve-kernel-5.4.34-1-pve (5.4.34-2) ...
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 5.4.34-1-pve /boot/vmlinuz-5.4.34-1-pve
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 5.4.34-1-pve /boot/vmlinuz-5.4.34-1-pve
update-initramfs: Generating /boot/initrd.img-5.4.34-1-pve
run-parts: executing /etc/kernel/postinst.d/pve-auto-removal 5.4.34-1-pve /boot/vmlinuz-5.4.34-1-pve
run-parts: executing /etc/kernel/postinst.d/zz-pve-efiboot 5.4.34-1-pve /boot/vmlinuz-5.4.34-1-pve
Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private mount namespace..
No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync.
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 5.4.34-1-pve /boot/vmlinuz-5.4.34-1-pve
/usr/sbin/grub-mkconfig: 38: /etc/default/grub: net.ifnames=0: not found
run-parts: /etc/kernel/postinst.d/zz-update-grub exited with return code 127
Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/pve-kernel-5.4.34-1-pve.postinst line 19.
dpkg: error processing package pve-kernel-5.4.34-1-pve (--configure):
 installed pve-kernel-5.4.34-1-pve package post-installation script subprocess returned error exit status 2
dpkg: dependency problems prevent configuration of pve-kernel-5.4:
 pve-kernel-5.4 depends on pve-kernel-5.4.34-1-pve; however:
  Package pve-kernel-5.4.34-1-pve is not configured yet.

dpkg: error processing package pve-kernel-5.4 (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of proxmox-ve:
 proxmox-ve depends on pve-kernel-5.4; however:
  Package pve-kernel-5.4 is not configured yet.

dpkg: error processing package proxmox-ve (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 pve-kernel-5.4.34-1-pve
 pve-kernel-5.4
 proxmox-ve
E: Sub-process /usr/bin/dpkg returned an error code (1)
 
/usr/sbin/grub-mkconfig: 38: /etc/default/grub: net.ifnames=0: not found

sounds like your /etc/default/grub contains some invalid line(s)?
 
sounds like your /etc/default/grub contains some invalid line(s)?
Yes, that was it.

On a previous reboot, Proxmox had changed all the network adapter IDs, so no VM was working. I read it was a Debian issue and that adding net.ifnames=0 to /etc/default/grub would solve it...

I commented out the line and the upgrade worked without issues.
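
For anyone hitting the same thing: the kernel parameter itself is fine, it just has to live inside the GRUB_CMDLINE_LINUX variable rather than on a line of its own (a rough sketch; adapt it to whatever is already in that variable on your system):

Code:
# in /etc/default/grub, put the parameter inside the variable, e.g.:
#   GRUB_CMDLINE_LINUX="net.ifnames=0"
# then regenerate the GRUB config and finish the interrupted package setup:
update-grub
dpkg --configure -a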

Thanks for the help!
 
Seems to work fine..

So please open a new thread and specify what you changed in your setup that makes it act like this, as it definitely works on a default setup from our installer here (re-checked more than once), so if you really did not forget a step then it has to be something out of the ordinary for it not to work. You can use cat /sys/module/zfs/parameters/zfs_arc_max to check if the module parameter is actually in effect.

Maybe the ZFS version changed between these kernels? You can check it e.g. using modinfo zfs | head
And report problems to https://github.com/openzfs/zfs/issues

I'm not sure at this moment, but I guess it may have been an issue with an incorrect repository configuration:

Code:
nano /etc/apt/sources.list.d/pve-enterprise.list

Code:
deb http://download.proxmox.com/debian/pve buster pve-no-subscription

We can see here that the OS version is "buster". I guess that in my previous PVE installation the wrong repository name may have been copied and pasted, for example "jessie". I suppose that due to this typo some packages could have been updated incorrectly.

With the latest Proxmox VE 6.2-4, ZFS RAM usage works fine.
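
For anyone else checking their ZFS ARC limit after a kernel change: this is the usual way to set and verify it (a sketch only; 4 GiB is just an example value, and the echo overwrites any existing /etc/modprobe.d/zfs.conf):

Code:
# cap the ARC at 4 GiB (value is in bytes)
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u

# after a reboot, verify the parameter is in effect and note the module version
cat /sys/module/zfs/parameters/zfs_arc_max
modinfo zfs | head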
 
Correct:
Code:
[63241.054338] i40e: Intel(R) Ethernet Connection XL710 Network Driver - version 2.8.20-k
[63241.054340] i40e: Copyright (c) 2013 - 2019 Intel Corporation.

I'd guess you'll have to rebuild the newer module, or wait until it gets shipped into the upstream kernel as a newer version.

So, I did a 'dist-upgrade' today on a brand new system, featuring the same cards I've been using for a while. Lo and behold, both the X710-DA2 AND the X710-T2L work OOTB with the latest kernel (5.4.65-1)! However, the i40e module still seems to be compiled from Intel driver 2.8.20:

root@hostname:~# ethtool -i ens6f0 (this is an X710-DA2, which has always worked)
driver: i40e
version: 2.8.20-k
firmware-version: 8.00 0x80008b6a 1.2766.0
expansion-rom-version:
bus-info: 0000:82:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

root@hostname:~# ethtool -i ens2f0 (this is the X710-T2L, for which I previously had to recompile the i40e module to get it to work)
driver: i40e
version: 2.8.20-k
firmware-version: 8.00 0x80008c83 1.2766.0
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

Can someone explain to me how the 'T2L' card is now working using the same driver module that did NOT previously work with the older kernel!?
 
Can someone explain to me how the 'T2L' card is now working using the same driver module that did NOT previously work with the older kernel!?

Possibly two things:
* It's not only the driver that has an impact on how/if a NIC works; lots of other things can too (PCIe bus subsystem, network subsystem, ...)
* They do not bump the driver version for every patch applied, especially if stuff is backported, so the same version on another kernel does not necessarily mean that it is the exact same thing.

Also, you say brand-new system, so the server's (boot) firmware and IO chipsets may also have a say in this.
 
Possibly two things:
* It's not only the driver that has an impact on how/if a NIC works; lots of other things can too (PCIe bus subsystem, network subsystem, ...)
* They do not bump the driver version for every patch applied, especially if stuff is backported, so the same version on another kernel does not necessarily mean that it is the exact same thing.

Also, you say brand-new system, so the server's (boot) firmware and IO chipsets may also have a say in this.

Correction: the server is the same one I've always had; I meant a brand new installation. My bad. The i40e kernel module has been discussed before:
https://forum.proxmox.com/threads/linux-kernel-5-4-for-proxmox-ve.66854/post-313923

Hence the same system, with the same firmware revisions across the board, but with a brand new installation, now has a working X710-T2L NIC.
What's strange is that support for the X710-T2L NICs was introduced in version 2.8.43 (according to Intel), but the module still reports 2.8.20.
 
What's strange is that support for the X710-T2L NICs was introduced in version 2.8.43 (according to Intel), but the module still reports 2.8.20.

As said, backports to stable or LTS kernels may add fixes and even new support but not bump the module's driver version (sometimes).

I just checked for your case, and it seems you got lucky: support for this was committed to the Ubuntu LTS kernel we base ours on just 12 days ago and made it in with the last update:
i40e: enable X710 support
-- https://git.proxmox.com/?p=mirror_u...it;h=ccce2f994c4de3325e0b9fcc8d97f0f8afb986a6
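
If you want to double-check this on your own box, you can compare the card's PCI vendor:device ID against the IDs the installed i40e module advertises (generic commands; the exact device ID depends on the card variant):

Code:
# the NIC's vendor:device ID is the part in brackets, e.g. [8086:xxxx]
lspci -nn | grep -i ethernet

# device IDs the installed i40e module claims
modinfo i40e | grep -i alias

If the card's ID shows up in that alias list, the module will bind to it regardless of what the human-readable version string says.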
 