No network connection on 2nd onboard LAN (X11DPI-N)

nethzt
Member, Dec 15, 2020
I wanted to activate the 2nd onboard Ethernet adapter on my server, but the link doesn't come up.
I've already tried making some changes in the BIOS.

dmesg | grep i40e
[ 2.649118] i40e: Intel(R) Ethernet Connection XL710 Network Driver - version 2.8.20-k
[ 2.649119] i40e: Copyright (c) 2013 - 2019 Intel Corporation.
[ 2.665738] i40e 0000:60:00.0: fw 3.1.57069 api 1.5 nvm 3.33 0x80000e48 1.1876.0 [8086:37d1] [15d9:37d1]
[ 2.669678] i40e 0000:60:00.0: MAC address: 3c:ec:ef:00:57:f0
[ 2.669827] i40e 0000:60:00.0: FW LLDP is enabled
[ 2.676830] i40e 0000:60:00.0 eth0: NIC Link is Up, 1000 Mbps Full Duplex, Flow Control: None
[ 2.678089] i40e 0000:60:00.0: Added LAN device PF0 bus=0x60 dev=0x00 func=0x00
[ 2.678608] i40e 0000:60:00.0: Features: PF-id[0] VFs: 32 VSIs: 66 QP: 20 RSS FD_ATR FD_SB NTUPLE DCB VxLAN Geneve PTP VEPA
[ 2.694270] i40e 0000:60:00.1: fw 3.1.57069 api 1.5 nvm 3.33 0x80000e48 1.1876.0 [8086:37d1] [15d9:37d1]
[ 2.697896] i40e 0000:60:00.1: MAC address: 3c:ec:ef:00:57:f1
[ 2.698058] i40e 0000:60:00.1: FW LLDP is enabled
[ 2.705793] i40e 0000:60:00.1: Added LAN device PF1 bus=0x60 dev=0x00 func=0x01
[ 2.706275] i40e 0000:60:00.1: Features: PF-id[1] VFs: 32 VSIs: 66 QP: 20 RSS FD_ATR FD_SB NTUPLE DCB VxLAN Geneve PTP VEPA
[ 2.723363] i40e 0000:60:00.0 eno0: renamed from eth0
[ 2.746167] i40e 0000:60:00.1 enp96s0f1: renamed from eth1
[ 8.603069] i40e 0000:60:00.0: eno0 is entering allmulti mode.

ethtool enp96s0f1
Settings for enp96s0f1:
Supported ports: [ ]
Supported link modes: 1000baseT/Full
Supported pause frame use: Symmetric
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: Unknown!
Duplex: Unknown! (255)
Port: Other
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Supports Wake-on: g
Wake-on: g
Current message level: 0x00000007 (7)
drv probe link
Link detected: no
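
The port advertises autonegotiation but currently reports it as off, so one thing worth checking is whether re-enabling it changes anything. A rough sketch with standard ethtool/iproute2 calls (no guarantee this particular PHY accepts it):

# re-enable autonegotiation on the second port and bring it up
ethtool -s enp96s0f1 autoneg on
ip link set enp96s0f1 up

# re-check negotiated speed and link state
ethtool enp96s0f1 | grep -E 'Auto-negotiation|Speed|Link detected'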

I also tried setting
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
and loading the VFIO modules:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
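
For reference, on a Debian/Proxmox host those two changes are normally applied roughly like this (the file paths are the standard Debian ones, so treat them as an assumption about this setup):

# kernel command line: edit /etc/default/grub, then regenerate the config
update-grub

# VFIO modules: list them in /etc/modules so they are loaded at boot
printf '%s\n' vfio vfio_iommu_type1 vfio_pci vfio_virqfd >> /etc/modules
update-initramfs -u -k all

# after a reboot, confirm the modules and the IOMMU are active
lsmod | grep vfio
dmesg | grep -i -e DMAR -e IOMMU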

# ifconfig enp96s0f1 down
# ifconfig enp96s0f1 up
Dec 15 00:30:13 kernel: [ 1006.387496] irq 129: Affinity broken due to vector space exhaustion.
Dec 15 00:30:13 kernel: [ 1006.387508] irq 130: Affinity broken due to vector space exhaustion.
Dec 15 00:30:13 kernel: [ 1006.387521] irq 131: Affinity broken due to vector space exhaustion.
Dec 15 00:30:13 kernel: [ 1006.387532] irq 132: Affinity broken due to vector space exhaustion.
Dec 15 00:30:13 kernel: [ 1006.387544] irq 133: Affinity broken due to vector space exhaustion.
Dec 15 00:30:13 kernel: [ 1006.387556] irq 134: Affinity broken due to vector space exhaustion.
Dec 15 00:30:13 kernel: [ 1006.387567] irq 135: Affinity broken due to vector space exhaustion.
Dec 15 00:30:13 kernel: [ 1006.387580] irq 136: Affinity broken due to vector space exhaustion.
Dec 15 00:30:13 kernel: [ 1006.387592] irq 137: Affinity broken due to vector space exhaustion.
Dec 15 00:30:13 kernel: [ 1006.387603] irq 138: Affinity broken due to vector space exhaustion.
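
Those irq messages look like the driver is requesting more MSI-X vectors than it can spread across the CPUs. As an experiment (the queue count of 4 is an arbitrary choice, not a recommendation), the vector usage can be checked and the port's queue count reduced with standard ethtool calls:

# how many MSI-X vectors the i40e ports currently own (both ports combined)
grep -c i40e /proc/interrupts

# current and maximum queue counts for the second port
ethtool -l enp96s0f1

# limit the port to 4 combined queues, then retry the link
ethtool -L enp96s0f1 combined 4
ip link set enp96s0f1 down && ip link set enp96s0f1 up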


No luck. Any ideas?
 
For a test: swap the cables between the two NICs and check whether the problem stays with the NIC. If
- yes: the NIC is defective
- no: the cable is defective
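
To watch the link state live while swapping, something like this works (interface name taken from the first post):

# print a line whenever any interface changes link state
ip monitor link

# or poll the one port once a second
watch -n1 "ethtool enp96s0f1 | grep 'Link detected'"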
 
After an update from PVE 6.2 to pve-manager/6.3-3/eee5f901 (running kernel: 5.4.78-2-pve) I had the same problem: the 2nd link was down on the PVE host. The links are managed by Open vSwitch in LACP 802.3ad mode. The switch side showed both links as up and in the LACP bundle.

Code:
Feb 15 10:38:00 pve-backup-1 kernel: [ 1391.908617] irq 172: Affinity broken due to vector space exhaustion.
Feb 15 10:38:00 pve-backup-1 kernel: [ 1391.908641] irq 173: Affinity broken due to vector space exhaustion.
Feb 15 10:38:00 pve-backup-1 kernel: [ 1391.908659] irq 174: Affinity broken due to vector space exhaustion.
Feb 15 10:38:00 pve-backup-1 kernel: [ 1391.908678] irq 175: Affinity broken due to vector space exhaustion.
Feb 15 10:38:00 pve-backup-1 kernel: [ 1391.908697] irq 176: Affinity broken due to vector space exhaustion.
Feb 15 10:38:00 pve-backup-1 kernel: [ 1391.908717] irq 177: Affinity broken due to vector space exhaustion.
Feb 15 10:38:00 pve-backup-1 kernel: [ 1391.908818] irq 184: Affinity broken due to vector space exhaustion.
Feb 15 10:38:00 pve-backup-1 kernel: [ 1391.908831] irq 185: Affinity broken due to vector space exhaustion.
Feb 15 10:38:00 pve-backup-1 kernel: [ 1391.908843] irq 186: Affinity broken due to vector space exhaustion.
Feb 15 10:38:00 pve-backup-1 kernel: [ 1391.908856] irq 187: Affinity broken due to vector space exhaustion.
Feb 15 10:38:00 pve-backup-1 kernel: [ 1391.908878] irq 188: Affinity broken due to vector space exhaustion.
Feb 15 10:38:00 pve-backup-1 kernel: [ 1391.908891] irq 189: Affinity broken due to vector space exhaustion.

Feb 15 10:40:10 pve-backup-1 kernel: [ 1522.460155] i40e 0000:08:00.1 ens1f1: NIC Link is Up, 10 Gbps Full Duplex, Flow Control: None
Feb 15 10:40:10 pve-backup-1 kernel: [ 1522.511819] irq 172: Affinity broken due to vector space exhaustion.
Feb 15 10:40:10 pve-backup-1 kernel: [ 1522.511829] irq 173: Affinity broken due to vector space exhaustion.
Feb 15 10:40:10 pve-backup-1 kernel: [ 1522.511838] irq 174: Affinity broken due to vector space exhaustion.
Feb 15 10:40:10 pve-backup-1 kernel: [ 1522.511847] irq 175: Affinity broken due to vector space exhaustion.
Feb 15 10:40:10 pve-backup-1 kernel: [ 1522.511855] irq 176: Affinity broken due to vector space exhaustion.
Feb 15 10:40:10 pve-backup-1 kernel: [ 1522.511864] irq 177: Affinity broken due to vector space exhaustion.
Feb 15 10:40:10 pve-backup-1 kernel: [ 1522.511916] irq 184: Affinity broken due to vector space exhaustion.
Feb 15 10:40:10 pve-backup-1 kernel: [ 1522.511924] irq 185: Affinity broken due to vector space exhaustion.
Feb 15 10:40:10 pve-backup-1 kernel: [ 1522.511933] irq 186: Affinity broken due to vector space exhaustion.
Feb 15 10:40:10 pve-backup-1 kernel: [ 1522.511941] irq 187: Affinity broken due to vector space exhaustion.
Feb 15 10:40:10 pve-backup-1 kernel: [ 1522.511950] irq 188: Affinity broken due to vector space exhaustion.
Feb 15 10:40:10 pve-backup-1 kernel: [ 1522.511959] irq 189: Affinity broken due to vector space exhaustion.
Feb 15 10:40:10 pve-backup-1 kernel: [ 1522.512173] IPv6: ADDRCONF(NETDEV_CHANGE): ens1f1: link becomes ready
Feb 15 10:40:55 pve-backup-1 kernel: [ 1566.752487] mpt3sas 0000:07:00.0: invalid short VPD tag 00 at offset 1

After several attempts at setting the link down/up, the 2nd link finally came up, but I can't say what triggered it.

Server is a DL380E G8 with 2x E5-2430, an X710-DA2, and 4 VMs running.
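
For anyone else debugging this, the bond and LACP state on the OVS side can be inspected roughly like this (the bond name bond0 is a placeholder for whatever the OVS configuration actually uses):

Code:
# member states and which slaves are active in the bond
ovs-appctl bond/show bond0

# LACP negotiation details per member port
ovs-appctl lacp/show bond0

# kernel-side view of the physical port
ip -d link show ens1f1 | grep -E 'state|master'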
 
