QLogic 10G Dual Port Adapter

randy404

Member
Feb 24, 2021
We have a Proxmox cluster running Proxmox VE 6.3-3. The two servers in the cluster are Dell R620s with 4 x 1G and 2 x 10G onboard NICs. All of these onboard ports are being recognized by Proxmox.

In each of the servers we've also added a QLogic dual-port 10G PCI adapter. The adapter is visible when running lspci...

Code:
41:00.0 Ethernet controller: QLogic Corp. cLOM8214 1/10GbE Controller (rev 58)
41:00.1 Ethernet controller: QLogic Corp. cLOM8214 1/10GbE Controller (rev 58)

However, Proxmox is not seeing the QLogic adapter. We've tried to add the driver manually, but Debian wants to overwrite/uninstall the Proxmox driver.

Has anyone encountered this issue, and if so, what did you do to resolve it?

Thank you in advance. Your assistance and expertise are appreciated.
 
Hi!

Proxmox VE's pve-firmware package ships all firmware files from the linux-firmware project in a newer state, so installing the Debian version does not add any new ones.

Do you have the newest version of the pve-firmware package installed (3.2-2 at the time of writing)?
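
You can check the installed version with standard Debian tooling, for example (nothing Proxmox-specific assumed here):
Bash:
dpkg -s pve-firmware | grep '^Version'
# or compare installed vs. available versions
apt policy pve-firmware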

Can you post the output of the following commands:
Bash:
ip link
lspci -s 41:00 -knn
lsmod | grep qlcnic
 
Thanks for the response.

Currently installed pve-firmware: 3.1-1

Code:
# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 24:6e:96:01:cb:14 brd ff:ff:ff:ff:ff:ff
3: eno4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 24:6e:96:01:cb:15 brd ff:ff:ff:ff:ff:ff
4: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 24:6e:96:01:cb:10 brd ff:ff:ff:ff:ff:ff
5: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 24:6e:96:01:cb:12 brd ff:ff:ff:ff:ff:ff
6: enp65s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:0e:1e:0c:7a:18 brd ff:ff:ff:ff:ff:ff
7: enp65s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:0e:1e:0c:7a:19 brd ff:ff:ff:ff:ff:ff
8: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 24:6e:96:01:cb:14 brd ff:ff:ff:ff:ff:ff

Code:
# lspci -s 41:00 -knn
41:00.0 Ethernet controller [0200]: QLogic Corp. cLOM8214 1/10GbE Controller [1077:8020] (rev 58)
        Subsystem: QLogic Corp. cLOM8214 1/10GbE Controller [1077:0228]
        Kernel driver in use: qlcnic
        Kernel modules: qlcnic
41:00.1 Ethernet controller [0200]: QLogic Corp. cLOM8214 1/10GbE Controller [1077:8020] (rev 58)
        Subsystem: QLogic Corp. cLOM8214 1/10GbE Controller [1077:0228]
        Kernel driver in use: qlcnic
        Kernel modules: qlcnic

Code:
# lsmod | grep qlcnic
qlcnic                315392  0
 
QLogic Corp. cLOM8214 1/10GbE Controller [1077:0228]

OK, that model should actually use the qla2xxx module; is that loaded?
Bash:
lsmod | grep qla2xxx

Can you also check the kernel messages for any errors with dmesg?

Bash:
dmesg | grep -P 'qla2xxx|qlcnic'
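
If it turns out the module isn't loaded, one thing you could try (just a sketch; the module may simply not apply to this card) is loading it by hand and re-checking the log:
Bash:
modprobe qla2xxx          # try loading the module manually
lsmod | grep qla2xxx      # check whether it is now loaded
dmesg | tail -n 20        # look for probe messages or errors right after loading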
 
Doesn't look like I have that module loaded... 'lsmod | grep qla2xxx' returns nothing.

Here's the output of dmesg...
Code:
root@hvcb14:/# dmesg | grep -P 'qla2xxx|qlcnic'
[    2.845119] qlcnic 0000:41:00.0: 2048KB memory map
[    3.454873] qlcnic 0000:41:00.0: Default minidump capture mask 0x1f
[    3.454877] qlcnic 0000:41:00.0: FW dump enabled
[    3.454879] qlcnic 0000:41:00.0: Supports FW dump capability
[    3.454883] qlcnic 0000:41:00.0: Driver v5.3.66, firmware v4.18.4
[    3.485775] qlcnic: 00:0e:1e:0c:7a:18 Gigabit Ethernet Board Chip rev 0x58
[    3.641327] qlcnic 0000:41:00.0: using msi-x interrupts
[    3.743956] qlcnic 0000:41:00.0: eth0: XGbE port initialized
[    3.744404] qlcnic 0000:41:00.1: 2048KB memory map
[    3.797467] qlcnic 0000:41:00.1: Default minidump capture mask 0x1f
[    3.797470] qlcnic 0000:41:00.1: FW dump enabled
[    3.797472] qlcnic 0000:41:00.1: Supports FW dump capability
[    3.797476] qlcnic 0000:41:00.1: Driver v5.3.66, firmware v4.18.4
[    5.402762] qlcnic 0000:41:00.1: using msi-x interrupts
[    5.402889] qlcnic 0000:41:00.1: eth1: XGbE port initialized
[    5.405066] qlcnic 0000:41:00.0 enp65s0f0: renamed from eth0
[    5.431758] qlcnic 0000:41:00.1 enp65s0f1: renamed from eth1
 
Hmm, at first I thought the enp65s0 interfaces were your onboard 10G NICs connected through PCIe, but that does not seem to be the case; they are in fact the QLogic NICs, and they do show up?

What's the ethtool info on this device? (you may need to install that package if not already done)

Bash:
ethtool -i enp65s0f0
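
If ethtool is missing, it should install cleanly on a standard Proxmox VE / Debian system:
Bash:
apt update && apt install ethtool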
 
That's odd, as I thought the same.

Code:
# ethtool -i enp65s0f0
driver: qlcnic
version: 5.3.66
firmware-version: 4.18.4
expansion-rom-version:
bus-info: 0000:41:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no
 
Just in case it helps, here is the info for the other 10G ports.

Code:
# lspci -s 1:00 -knn
01:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
        Subsystem: Dell Ethernet 10G 4P X520/I350 rNDC [1028:1f72]
        Kernel driver in use: ixgbe
        Kernel modules: ixgbe
01:00.1 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
        Subsystem: Dell Ethernet 10G 4P X520/I350 rNDC [1028:1f72]
        Kernel driver in use: ixgbe
        Kernel modules: ixgbe

Code:
# lsmod | grep ixgbe
ixgbe                 344064  0
xfrm_algo              16384  1 ixgbe
dca                    16384  2 igb,ixgbe
mdio                   16384  1 ixgbe

Code:
# dmesg | grep ixgbe
[    2.840099] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver - version 5.1.0-k
[    2.840100] ixgbe: Copyright (c) 1999-2016 Intel Corporation.
[    3.017278] ixgbe 0000:01:00.0: Multiqueue Enabled: Rx Queue count = 32, Tx Queue count = 32 XDP Queue count = 0
[    3.017634] ixgbe 0000:01:00.0: 32.000 Gb/s available PCIe bandwidth (5 GT/s x8 link)
[    3.017978] ixgbe 0000:01:00.0: MAC: 2, PHY: 14, SFP+: 4, PBA No: G61346-014
[    3.017989] ixgbe 0000:01:00.0: 24:6e:96:01:cb:10
[    3.055268] ixgbe 0000:01:00.0: Intel(R) 10 Gigabit Network Connection
[    3.055342] libphy: ixgbe-mdio: probed
[    3.229047] ixgbe 0000:01:00.1: Multiqueue Enabled: Rx Queue count = 32, Tx Queue count = 32 XDP Queue count = 0
[    3.229409] ixgbe 0000:01:00.1: 32.000 Gb/s available PCIe bandwidth (5 GT/s x8 link)
[    3.229745] ixgbe 0000:01:00.1: MAC: 2, PHY: 14, SFP+: 3, PBA No: G61346-014
[    3.229747] ixgbe 0000:01:00.1: 24:6e:96:01:cb:12
[    3.234112] ixgbe 0000:01:00.1: Intel(R) 10 Gigabit Network Connection
[    3.234232] libphy: ixgbe-mdio: probed
[    3.494458] ixgbe 0000:01:00.0 eno1: renamed from eth0
[    3.588125] ixgbe 0000:01:00.1 eno2: renamed from eth1
 
Same issue here. The weird thing is that I have two nodes with identical cards, where one of them can properly use the qlcnic driver and the other can't.
Both have the same version of pve-firmware: 3.2-2.
The non-working node is on the 5.4.106-1-pve kernel and the working one is on 5.4.103-1-pve.
The card shows up on my 5.4.103-1-pve node but not on the 5.4.106-1-pve one.
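
To rule out any mix-up, the running kernel on each node can be confirmed with a standard command:
Bash:
uname -r    # prints the currently booted kernel, e.g. 5.4.106-1-pve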

Did you manage to solve your issue?

Bash:
root@pve:~# dmesg | grep -P 'qla2xxx|qlcnic'
[    2.475589] qlcnic 0000:09:00.0: 2048KB memory map
[    3.105786] qlcnic 0000:09:00.0: Default minidump capture mask 0x1f
[    3.105788] qlcnic 0000:09:00.0: FW dump enabled
[    3.105789] qlcnic 0000:09:00.0: Supports FW dump capability
[    3.105790] qlcnic 0000:09:00.0: Driver v5.3.66, firmware v4.20.1
[    3.135741] qlcnic: a0:48:1c:75:1e:f0: NC523SFP 10Gb 2-port Server Adapter Board Chip rev 0x54
[    3.292019] qlcnic 0000:09:00.0: using msi-x interrupts
[    3.372180] qlcnic 0000:09:00.0: eth4: XGbE port initialized
[    3.372324] qlcnic 0000:09:00.1: 2048KB memory map
[    3.417036] qlcnic 0000:09:00.1: Default minidump capture mask 0x1f
[    3.417037] qlcnic 0000:09:00.1: FW dump enabled
[    3.417049] qlcnic 0000:09:00.1: Supports FW dump capability
[    3.417050] qlcnic 0000:09:00.1: Driver v5.3.66, firmware v4.20.1
[    5.416025] qlcnic 0000:09:00.1: using msi-x interrupts
[    5.416154] qlcnic 0000:09:00.1: eth6: XGbE port initialized
[    5.417359] qlcnic 0000:09:00.0 enp9s0f0: renamed from eth4
[    5.449426] qlcnic 0000:09:00.1 enp9s0f1: renamed from eth6
[   57.519243] qlcnic 0000:09:00.1 enp9s0f1: Rx Context[1] Created, state 0x2
[   57.913969] qlcnic 0000:09:00.1 enp9s0f1: Tx Context[0x8001] Created, state 0x2
[   57.928893] qlcnic 0000:09:00.1 enp9s0f1: Tx Context[0x8009] Created, state 0x2
[   57.944806] qlcnic 0000:09:00.1 enp9s0f1: Tx Context[0x800b] Created, state 0x2
[   57.960728] qlcnic 0000:09:00.1 enp9s0f1: Tx Context[0x800d] Created, state 0x2
[   57.965546] qlcnic 0000:09:00.1 enp9s0f1: active nic func = 2, mac filter size=32
[   60.016292] qlcnic 0000:09:00.1 enp9s0f1: NIC Link is up
[ 2131.817772] qlcnic 0000:09:00.1 enp9s0f1: NIC Link is down
[302114.102012] qlcnic 0000:09:00.1 enp9s0f1: Rx Context[1] Created, state 0x2
[302114.106001] qlcnic 0000:09:00.1 enp9s0f1: Tx Context[0x8001] Created, state 0x2
[302749.406324] qlcnic 0000:09:00.0 enp9s0f0: NIC Link is up
[302757.462932] qlcnic 0000:09:00.0 enp9s0f0: active nic func = 2, mac filter size=32
[302843.640289] qlcnic 0000:09:00.0 enp9s0f0: Rx Context[0] Created, state 0x2
[302843.763588] qlcnic 0000:09:00.0 enp9s0f0: Tx Context[0x8000] Created, state 0x2
[302843.777522] qlcnic 0000:09:00.0 enp9s0f0: Tx Context[0x8008] Created, state 0x2
[302843.793436] qlcnic 0000:09:00.0 enp9s0f0: Tx Context[0x800a] Created, state 0x2
[302843.809350] qlcnic 0000:09:00.0 enp9s0f0: Tx Context[0x800c] Created, state 0x2
[302845.356821] qlcnic 0000:09:00.0 enp9s0f0: NIC Link is up

Bash:
root@pve:~# lsmod | grep ql
qlcnic                315392  0
 
For us... it turns out that the servers being used had a combination of onboard and PCI adapter 10G NICs. What I thought were 4 x 1G onboard NICs turned out to be 2 x 1G / 2 x 10G. That's why we were seeing a combination of both QLogic and Intel NIC info: the QLogic was the PCI adapter and the Intel was onboard.
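
In case it helps anyone landing here: even once the ports are recognized, they still have to be configured before they are usable in Proxmox. A minimal /etc/network/interfaces sketch for bridging one of the QLogic ports could look like this (interface name taken from the output above; the bridge name and address are made up for illustration, adjust to your network):
Code:
auto enp65s0f0
iface enp65s0f0 inet manual

auto vmbr1
iface vmbr1 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        bridge-ports enp65s0f0
        bridge-stp off
        bridge-fd 0

Depending on whether ifupdown2 is installed, apply it with ifreload -a or a reboot.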
 
Thank you for the quick feedback. I've ordered a Broadcom 10Gb NIC, which just works on my second host.
Though I've tried nearly everything, I could not get the QLogic card to work on the second machine, even though it worked on the first one.
I'll put the spare QLogic in my first host, I guess, and set up LACP.
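
For reference, a rough sketch of what the LACP bond could look like in /etc/network/interfaces (interface names from my dmesg output above; the switch side needs to be set up for 802.3ad as well, and the bridge details are illustrative only):
Code:
auto bond0
iface bond0 inet manual
        bond-slaves enp9s0f0 enp9s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0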
 
