Minisforum MS-01 - Proxmox 8.2.9 - 10GbE SFP+ link down

lla4u

New Member
Nov 20, 2024
I am running a PVE 8.2.9 cluster on two MS-01 boxes.

I am new to Proxmox.

The issue is that the network refuses to come up with a 10Gb copper SFP+ module.
Any help or guidance will be very much appreciated :)


Here is my network interfaces config:
...
auto enp2s0f1np1
iface enp2s0f1np1 inet manual

auto vmbr1
iface vmbr1 inet static
address 10.10.10.3/24
bridge-ports enp2s0f1np1
bridge-stp off
bridge-fd 0
...
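
For reference, the bridge config can be re-applied and checked without a reboot; a minimal sketch, assuming the default ifupdown2 stack on PVE and the interface names above:

Code:
# Re-apply /etc/network/interfaces without rebooting (ifupdown2 is the PVE default)
ifreload -a

# Check that the port is enslaved to vmbr1 and show its brief link state
bridge link show dev enp2s0f1np1
ip -br link show enp2s0f1np1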

Looking at ip link:
...
5: enp2s0f1np1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq master vmbr1 state DOWN mode DEFAULT group default qlen 1000
link/ether 58:47:ca:76:99:9c brd ff:ff:ff:ff:ff:ff
...
9: vmbr1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
link/ether 58:47:ca:76:99:9c brd ff:ff:ff:ff:ff:ff
...

Both are DOWN.
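
NO-CARRIER with the UP flag means the port is administratively up but sees no physical link. A quick sketch for watching the carrier state while re-seating the module, assuming the same interface name (plain sysfs, nothing Proxmox-specific):

Code:
# 1 = carrier present, 0 = no carrier; operstate should flip to "up" once link comes up
cat /sys/class/net/enp2s0f1np1/carrier
cat /sys/class/net/enp2s0f1np1/operstate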

Looking at the PCI devices:

root@pve01:~# lspci -k | sed -n '/Ethernet/,/driver in use/p'
02:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
Subsystem: Intel Corporation Ethernet Converged Network Adapter X710
Kernel driver in use: i40e
02:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 02)
Subsystem: Intel Corporation Ethernet Converged Network Adapter X710
Kernel driver in use: i40e
...

Looking at dmesg for the i40e driver:
root@pve01:~# dmesg | grep i40e
[ 0.947970] i40e: Intel(R) Ethernet Connection XL710 Network Driver
[ 0.947972] i40e: Copyright (c) 2013 - 2019 Intel Corporation.
[ 0.947996] i40e 0000:02:00.0: enabling device (0000 -> 0002)
[ 0.961180] i40e 0000:02:00.0: fw 9.120.73026 api 1.15 nvm 9.20 0x8000d8c5 0.0.0 [8086:1572] [8086:0000]
[ 1.217513] i40e 0000:02:00.0: MAC address: 58:47:ca:76:99:9b
[ 1.217751] i40e 0000:02:00.0: FW LLDP is enabled
[ 1.222043] i40e 0000:02:00.0: PCI-Express: Speed 8.0GT/s Width x4
[ 1.222045] i40e 0000:02:00.0: PCI-Express bandwidth available for this device may be insufficient for optimal performance.
[ 1.222045] i40e 0000:02:00.0: Please move the device to a different PCI-e link with more lanes and/or higher transfer rate.
[ 1.222493] i40e 0000:02:00.0: Features: PF-id[0] VFs: 64 VSIs: 66 QP: 20 RSS FD_ATR FD_SB NTUPLE DCB VxLAN Geneve PTP VEPA
[ 1.222508] i40e 0000:02:00.1: enabling device (0000 -> 0002)
[ 1.235247] i40e 0000:02:00.1: fw 9.120.73026 api 1.15 nvm 9.20 0x8000d8c5 0.0.0 [8086:1572] [8086:0000]
[ 1.491902] i40e 0000:02:00.1: MAC address: 58:47:ca:76:99:9c
[ 1.492137] i40e 0000:02:00.1: FW LLDP is enabled
[ 1.496231] i40e 0000:02:00.1: PCI-Express: Speed 8.0GT/s Width x4
[ 1.496232] i40e 0000:02:00.1: PCI-Express bandwidth available for this device may be insufficient for optimal performance.
[ 1.496232] i40e 0000:02:00.1: Please move the device to a different PCI-e link with more lanes and/or higher transfer rate.
[ 1.496681] i40e 0000:02:00.1: Features: PF-id[1] VFs: 64 VSIs: 66 QP: 20 RSS FD_ATR FD_SB NTUPLE DCB VxLAN Geneve PTP VEPA
[ 1.511451] i40e 0000:02:00.1 enp2s0f1np1: renamed from eth1
[ 1.524431] i40e 0000:02:00.0 enp2s0f0np0: renamed from eth0
[73196.398723] i40e 0000:02:00.0 enp2s0f0np0: NIC Link is Up, 10 Gbps Full Duplex, Flow Control: None
[75927.833207] i40e 0000:02:00.0 enp2s0f0np0: NIC Link is Down
[77987.210760] i40e 0000:02:00.0 enp2s0f0np0: NIC Link is Up, 10 Gbps Full Duplex, Flow Control: None
[78600.936772] i40e 0000:02:00.1 enp2s0f1np1: entered allmulticast mode
[78600.936821] i40e 0000:02:00.1 enp2s0f1np1: entered promiscuous mode
[78600.939775] i40e 0000:02:00.1: entering allmulti mode.
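
For reference, the firmware/NVM versions shown in the dmesg output above can also be read back at any time via ethtool's driver info; a small sketch, assuming the same port name:

Code:
# Driver, firmware and NVM versions as reported by the i40e driver
ethtool -i enp2s0f1np1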

Looking at ethtool for the 10GbE SFP+ port:
root@pve01:~# ethtool enp2s0f1np1
Settings for enp2s0f1np1:
Supported ports: [ ]
Supported link modes: 10000baseT/Full
1000baseX/Full
10000baseSR/Full
10000baseLR/Full
Supported pause frame use: Symmetric Receive-only
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 10000baseT/Full
1000baseX/Full
10000baseSR/Full
10000baseLR/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: Unknown!
Duplex: Unknown! (255)
Auto-negotiation: off
Port: Other
PHYAD: 0
Transceiver: internal
Supports Wake-on: g
Wake-on: g
Current message level: 0x00000007 (7)
drv probe link
Link detected: no

No errors at all, but it is not working. I have exhausted all my Linux knowledge on this subject.


I also tried other SFP+ modules from FS, but same issue: no link.
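
One generic check that might narrow it down is following the kernel log while re-seating a module, to see whether the i40e driver logs anything about it at all; just a sketch, nothing specific to this NIC:

Code:
# Follow the kernel log live, then unplug/replug the SFP+ module
dmesg -wH
# or equivalently
journalctl -kf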


Thanks in advance.
 
Try disabling the FW LLDP engine:

Code:
# Disable the firmware LLDP agent on every X710 port (needs the lshw package)
for i in $(lshw -c network -businfo | grep X710 | awk '{print $2}')
do
    ethtool --set-priv-flags "$i" disable-fw-lldp on
done
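
To confirm the flag actually stuck on each port, something like this should do (assuming the same port names as above):

Code:
# Should now report "disable-fw-lldp : on" for both X710 ports
ethtool --show-priv-flags enp2s0f0np0 | grep disable-fw-lldp
ethtool --show-priv-flags enp2s0f1np1 | grep disable-fw-lldp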

What's the output from: ethtool -m enp2s0f1np1
 
Hi,

Thanks,

I installed lshw and ran the loop.

root@pve01:~# ethtool -m enp2s0f1np1
netlink error: Invalid argument
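
In case only the decoded view is failing, the same dump can be requested in raw hex (same ethtool option, just undecoded); if this errors out too, the driver most likely cannot read the module EEPROM at all:

Code:
# Undecoded dump of the module EEPROM (may fail with the same netlink error)
ethtool -m enp2s0f1np1 hex on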

ethtool on the port with the SFP unplugged:
root@pve01:~# ethtool enp2s0f0np0
Settings for enp2s0f0np0:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseX/Full
10000baseSR/Full
Supported pause frame use: Symmetric Receive-only
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 1000baseX/Full
10000baseSR/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: 10000Mb/s
Duplex: Full
Auto-negotiation: off
Port: FIBRE
PHYAD: 0
Transceiver: internal
Supports Wake-on: g
Wake-on: g
Current message level: 0x00000007 (7)
drv probe link
Link detected: no

ethtool on the port with the SFP plugged in:
root@pve01:~# ethtool enp2s0f1np1
Settings for enp2s0f1np1:
Supported ports: [ ]
Supported link modes: 10000baseT/Full
1000baseX/Full
10000baseSR/Full
10000baseLR/Full
Supported pause frame use: Symmetric Receive-only
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 10000baseT/Full
1000baseX/Full
10000baseSR/Full
10000baseLR/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: Unknown!
Duplex: Unknown! (255)
Auto-negotiation: off
Port: Other
PHYAD: 0
Transceiver: internal
Supports Wake-on: g
Wake-on: g
Current message level: 0x00000007 (7)
drv probe link
Link detected: no

Supported link modes & Advertised link modes are differents

Laurent
 
