Replaced network card, card UNCLAIMED, how to fix?

pille99

Hello all,

I replaced a dual-port 1 GbE Intel NIC with a dual-port 10 GbE Intel NIC. The new card doesn't show up in Debian. I now have three 10 GbE NIC ports in total: the existing card loads fine, but the new dual-port NIC does not, and I can't get it working.
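In case it helps the diagnosis, an lspci check like the one below should also show whether the kernel sees the new ports at all and whether any driver is bound to them (the 21:00.x addresses come from the lshw output further down; I've left the output out here, this is just the check itself):

# list all Ethernet-class PCI devices with the kernel driver bound to each (if any)
lspci -nnk | grep -A3 -i ethernet

# full details for the two unclaimed ports of the new card
lspci -vvnn -s 21:00.0
lspci -vvnn -s 21:00.1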

root@name:~# lshw -c network
*-network
description: Ethernet interface
product: 82599ES 10-Gigabit SFI/SFP+ Network Connection
vendor: Intel Corporation
physical id: 0
bus info: pci@0000:01:00.0
logical name: enp1s0
version: 01
serial: 6c:b3:11:0a:35:f6
size: 10Gbit/s
capacity: 10Gbit/s
width: 64 bits
clock: 33MHz
capabilities: pm msi msix pciexpress vpd bus_master cap_list rom ethernet physical fibre 10000bt-fd
configuration: autonegotiation=off broadcast=yes driver=ixgbe driverversion=5.15.60-2-pve duplex=full firmware=0x00012b2c latency=0 link=yes multicast=yes speed=10Gbit/s
resources: irq:43 memory:c0000000-c007ffff ioport:f000(size=32) memory:c0100000-c0103fff memory:c0080000-c00fffff memory:c0104000-c0203fff memory:c0204000-c0303fff
*-network:0 UNCLAIMED
description: Ethernet controller
product: 82599ES 10-Gigabit SFI/SFP+ Network Connection
vendor: Intel Corporation
physical id: 0
bus info: pci@0000:21:00.0
version: 01
width: 64 bits
clock: 33MHz
capabilities: pm msi msix pciexpress vpd cap_list
configuration: latency=0
resources: ioport:e020(size=32)
*-network:1 UNCLAIMED
description: Ethernet controller
product: 82599ES 10-Gigabit SFI/SFP+ Network Connection
vendor: Intel Corporation
physical id: 0.1
bus info: pci@0000:21:00.1
version: 01
width: 64 bits
clock: 33MHz
capabilities: pm msi msix pciexpress vpd cap_list
configuration: latency=0
resources: ioport:e000(size=32)

root@name:~# lshw -class network -short
H/W path Device Class Description
==========================================================
/0/100/1.1/0 enp1s0 network 82599ES 10-Gigabit SFI/SFP+ Network Connection
/0/100/1.2/0.2/0/0 network 82599ES 10-Gigabit SFI/SFP+ Network Connection <-- no name
/0/100/1.2/0.2/0/0.1 network 82599ES 10-Gigabit SFI/SFP+ Network Connection <-- no name
/0/100/1.2/0.2/8/0 enp41s0 network I210 Gigabit Network Connection

root@name:~# dmesg | grep ixgbe
[ 1.569123] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver
[ 1.569124] ixgbe: Copyright (c) 1999-2016 Intel Corporation.
[ 1.743689] ixgbe 0000:01:00.0: Multiqueue Enabled: Rx Queue count = 32, Tx Queue count = 32 XDP Queue count = 0
[ 1.743983] ixgbe 0000:01:00.0: 16.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x4 link at 0000:00:01.1 (capable of 32.000 Gb/s with 5.0 GT/s PCIe x8 link)
[ 1.744108] ixgbe 0000:01:00.0: MAC: 2, PHY: 14, SFP+: 3, PBA No: Unknown
[ 1.744109] ixgbe 0000:01:00.0: 6c:b3:11:0a:35:f6
[ 1.744966] ixgbe 0000:01:00.0: Intel(R) 10 Gigabit Network Connection
[ 1.745195] ixgbe: probe of 0000:21:00.0 failed with error -5 <-- here is the issue
[ 1.745321] ixgbe: probe of 0000:21:00.1 failed with error -5 <-- here is the issue (second port of the dual card)
[ 4.210790] ixgbe 0000:01:00.0 enp1s0: renamed from eth1
[ 7.682126] ixgbe 0000:01:00.0: registered PHC device on enp1s0
[ 7.866722] ixgbe 0000:01:00.0 enp1s0: detected SFP+: 3
[ 8.102785] ixgbe 0000:01:00.0 enp1s0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
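The probe failures with error -5 (which I understand is -EIO) are what I'm stuck on. A few things I'm planning to check next, based on what I've read about ixgbe and the 82599 chipset (sketch only, I haven't confirmed any of these apply to my card):

# get every kernel message for the two unclaimed functions, not just the ixgbe lines
dmesg | grep -i '0000:21:00'

# check whether the ixgbe module on this kernel exposes the allow_unsupported_sfp parameter
modinfo ixgbe | grep -i allow_unsupported_sfp

# if third-party SFP+ modules are the problem, reload ixgbe with the parameter as a test
# (this also drops the working 10G link on enp1s0, so only via out-of-band access)
rmmod ixgbe
modprobe ixgbe allow_unsupported_sfp=1

# make it permanent if the test works
echo 'options ixgbe allow_unsupported_sfp=1' > /etc/modprobe.d/ixgbe.conf
update-initramfs -u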

Any suggestions?
 
