pve-kernel-4.15.18-1-pve + IXGBE driver

Hi,
Last week I installed two new nodes in a cluster with kernel 4.15.17-3-pve.

The ixgbe driver is in use.
This evening I saw the update to 4.15.18-1 with the note "drop out-of-tree IXGBE driver".

Do I need to worry about the nodes that are running the 4.15.17-3 kernel? The hosts are in production and cannot easily be rebooted.
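
For reference, this is roughly how I check which ixgbe build a node is actually using without rebooting (just a sketch; eno49 is only an example interface name):

Code:
# driver name and version the interface is currently bound to
ethtool -i eno49
# module file and version string of the ixgbe module for the running kernel
modinfo -n ixgbe
modinfo -F version ixgbe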

Udo
 
Updated yesterday to 4.15.18-1 (while upgrading from a 4x1G to a 2x10G LACP LAG). Running so far; you can have a look at the logs below...

Code:
[    2.022749] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver - version 5.1.0-k
[    2.022750] ixgbe: Copyright (c) 1999-2016 Intel Corporation.
[    2.188628] ixgbe 0000:04:00.0: Multiqueue Enabled: Rx Queue count = 12, Tx Queue count = 12 XDP Queue count = 0
[    2.188757] ixgbe 0000:04:00.0: PCI Express bandwidth of 32GT/s available
[    2.188759] ixgbe 0000:04:00.0: (Speed:5.0GT/s, Width: x8, Encoding Loss:20%)
[    2.189084] ixgbe 0000:04:00.0: MAC: 2, PHY: 20, SFP+: 6, PBA No: G42955-016
[    2.189088] ixgbe 0000:04:00.0: 48:df:37:xx:xx:xx
[    2.327448] ixgbe 0000:04:00.0: Intel(R) 10 Gigabit Network Connection
[    2.484734] ixgbe 0000:04:00.1: Multiqueue Enabled: Rx Queue count = 12, Tx Queue count = 12 XDP Queue count = 0
[    2.484861] ixgbe 0000:04:00.1: PCI Express bandwidth of 32GT/s available
[    2.484863] ixgbe 0000:04:00.1: (Speed:5.0GT/s, Width: x8, Encoding Loss:20%)
[    2.485188] ixgbe 0000:04:00.1: MAC: 2, PHY: 20, SFP+: 5, PBA No: G42955-016
[    2.485191] ixgbe 0000:04:00.1: 48:df:37:xx:xx:xx
[    2.488698] ixgbe 0000:04:00.1: Intel(R) 10 Gigabit Network Connection
[    2.492251] ixgbe 0000:04:00.1 eno50: renamed from eth1
[    2.512244] ixgbe 0000:04:00.0 eno49: renamed from eth0
[  174.798152] ixgbe 0000:04:00.0: registered PHC device on eno49
[  174.976071] ixgbe 0000:04:00.0 eno49: detected SFP+: 6
[  175.060409] ixgbe 0000:04:00.1: registered PHC device on eno50
[  175.244087] ixgbe 0000:04:00.0 eno49: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[  175.244974] Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
[  175.264976] ixgbe 0000:04:00.0: removed PHC on eno49
[  175.312061] ixgbe 0000:04:00.1 eno50: detected SFP+: 5
[  175.350471] 8021q: adding VLAN 0 to HW filter on device bond0
[  175.402692] ixgbe 0000:04:00.0: registered PHC device on eno49
[  175.509753] bond0: Enslaving eno49 as a backup interface with a down link
[  175.512948] ixgbe 0000:04:00.1: removed PHC on eno50
[  175.643109] vmbr0: port 1(bond0.1071) entered blocking state
[  175.643112] vmbr0: port 1(bond0.1071) entered disabled state
[  175.643211] device bond0.1071 entered promiscuous mode
[  175.644035] ixgbe 0000:04:00.0 eno49: detected SFP+: 6
[  175.696715] ixgbe 0000:04:00.1: registered PHC device on eno50
[  175.805626] bond0: Enslaving eno50 as a backup interface with a down link
[  175.807512] device bond0 entered promiscuous mode
[  175.908081] ixgbe 0000:04:00.0 eno49: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[  175.976059] ixgbe 0000:04:00.1 eno50: detected SFP+: 5
[  176.000076] bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
[  176.000126] bond0: link status definitely up for interface eno49, 10000 Mbps full duplex
[  176.000129] bond0: first active interface up!
[  176.000200] vmbr0: port 1(bond0.1071) entered blocking state
[  176.000203] vmbr0: port 1(bond0.1071) entered forwarding state
[  176.026909] vmbr100: port 1(bond0.100) entered blocking state
[  176.026912] vmbr100: port 1(bond0.100) entered disabled state
[  176.026988] device bond0.100 entered promiscuous mode
[  176.036694] vmbr100: port 1(bond0.100) entered blocking state
[  176.036697] vmbr100: port 1(bond0.100) entered forwarding state
[  176.240169] ixgbe 0000:04:00.1 eno50: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[  176.320108] bond0: link status definitely up for interface eno50, 10000 Mbps full duplex
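
For anyone doing the same migration, here is a minimal /etc/network/interfaces sketch for this kind of setup (bond0 over eno49/eno50 in 802.3ad mode, vmbr0 on VLAN 1071 and vmbr100 on VLAN 100, as seen in the log). The address, gateway and hash policy are placeholders only, adjust them to your environment:

Code:
auto bond0
iface bond0 inet manual
        bond-slaves eno49 eno50
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.1
        bridge-ports bond0.1071
        bridge-stp off
        bridge-fd 0

auto vmbr100
iface vmbr100 inet manual
        bridge-ports bond0.100
        bridge-stp off
        bridge-fd 0

The "No 802.3ad response from the link partner" warning right after boot is usually harmless while the switch side of the LAG is still negotiating; if it keeps repeating, the switch-side LACP configuration is worth checking.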
 
Hi,
thanks for the input, but I'm more worried about the 4.15.17-3 kernel.

Such a short lifetime in the enterprise repo does not look like the best sign...

Udo
 
Indeed, you may worry. I updated and had to boot back into 4.15.17-3-pve.

I noticed the following, but ixgbe is still not working:

Code:
root@pmxn3:~# update-initramfs -u -v | grep ixgbe
Adding module /lib/modules/4.15.18-1-pve/kernel/drivers/net/ethernet/intel/ixgbevf/ixgbevf.ko
Adding module /lib/modules/4.15.18-1-pve/kernel/drivers/net/ethernet/intel/ixgbe/ixgbe.ko
Adding config /etc/modprobe.d/ixgbe.conf
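
One thing I still want to check is whether /etc/modprobe.d/ixgbe.conf carries options that were only meant for the dropped out-of-tree driver; an unknown parameter can keep the in-tree module from loading. Roughly:

Code:
# options currently passed to the module
cat /etc/modprobe.d/ixgbe.conf
# parameters the in-tree ixgbe module actually accepts
modinfo -p ixgbe
# any load errors reported by the driver
dmesg | grep -i ixgbe

If there are stale options in there, removing them and re-running update-initramfs -u as above would be the next thing to try.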

Best Regards,

Talion

 
