Mellanox ConnectX-3 MT27500 Family - Issue with 1GbE SFP module


May 28, 2021
Happy belated New Year! I'm trying to get a Mellanox card to recognize a 1GbE SFP module under the latest Proxmox. Oddly enough, ethtool -m sees the module just fine. I have a DAC cable on one port, and that works fine with a QNAP 2108C switch, but I wanted to use the other port with a 1GbE SFP module (QSFPTEK). Even though ethtool can read the module (see below), I can't get a working link. I've tried both Linux Mint and Proxmox so far. Maybe this module is simply not compatible, but I'm not sure whether I'm doing something wrong: the interface comes up, yet it can't ping anything in or out, only itself. I'm running the latest edge kernel (6.0.15) and the latest Mellanox firmware:

lspci |grep -i mellanox
05:00.0 Ethernet controller: Mellanox Technologies MT27500 Family [ConnectX-3]

lshw -class network -businfo
Bus info          Device     Class     Description
pci@0000:01:00.0  enp1s0     network   Ethernet Controller I225-V
pci@0000:02:00.0  enp2s0     network   Ethernet Controller I225-V
pci@0000:03:00.0  enp3s0     network   Ethernet Controller I225-V
pci@0000:05:00.0  enp5s0     network   MT27500 Family [ConnectX-3]
pci@0000:05:00.0  enp5s0d1   network   Ethernet interface
                  vmbr0      network   Ethernet interface

root@R86SFox:~# ./mlxup
Querying Mellanox devices firmware ...

Device #1:
Device Type: ConnectX3
Part Number: MCX342A-XCC_Ax
Description: ConnectX-3 EN NIC for OCP;10GbE;dual-port SFP+;PCIe3.0 x8;IPMI disabled;R6
PSID: MT_1680110023
PCI Device Name: 0000:05:00.0
Port1 MAC: 0002c9ca13ec
Port2 MAC: 0002c9ca13ed
Versions:         Current         Available
     FW           2.42.5000       2.42.5000
     PXE          3.4.0752        3.4.0752
     UEFI         14.11.0045      14.11.0045

Status: Up to date

dmesg | grep mlx
[ 1.531528] mlx4_core: Mellanox ConnectX core driver v4.0-0
[ 1.531551] mlx4_core: Initializing 0000:05:00.0
[ 8.945175] mlx4_core 0000:05:00.0: DMFS high rate steer mode is: disabled performance optimized steering
[ 8.945541] mlx4_core 0000:05:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0000:00:1c.4 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
[ 8.987351] mlx4_en: Mellanox ConnectX HCA Ethernet driver v4.0-0
[ 8.987602] mlx4_en 0000:05:00.0: Activating port:1
[ 8.990136] mlx4_en: 0000:05:00.0: Port 1: Using 4 TX rings
[ 8.990143] mlx4_en: 0000:05:00.0: Port 1: Using 4 RX rings
[ 8.990538] mlx4_en: 0000:05:00.0: Port 1: Initializing port
[ 8.991237] mlx4_en 0000:05:00.0: registered PHC clock
[ 8.991432] mlx4_en 0000:05:00.0: Activating port:2
[ 8.991887] mlx4_en: 0000:05:00.0: Port 2: Using 4 TX rings
[ 8.991889] mlx4_en: 0000:05:00.0: Port 2: Using 4 RX rings
[ 8.992121] mlx4_en: 0000:05:00.0: Port 2: Initializing port
[ 8.992673] mlx4_core 0000:05:00.0 enp5s0: renamed from eth0
[ 9.000504] mlx4_en: enp5s0: Link Up
[ 9.017359] mlx4_core 0000:05:00.0 enp5s0d1: renamed from eth1
[ 9.020558] <mlx4_ib> mlx4_ib_add: mlx4_ib: Mellanox ConnectX InfiniBand driver v4.0-0
[ 9.021495] <mlx4_ib> mlx4_ib_add: counter index 2 for port 1 allocated 1
[ 9.021498] <mlx4_ib> mlx4_ib_add: counter index 3 for port 2 allocated 1
[ 11.543580] mlx4_en: enp5s0d1: Steering Mode 1
[ 11.554598] mlx4_en: enp5s0d1: Link Down
[ 11.604491] mlx4_en: enp5s0: Steering Mode 1
[ 11.614840] mlx4_en: enp5s0: Link Up
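(Side note: the "31.504 Gb/s available PCIe bandwidth" warning checks out for the x4 slot the card sits in. A quick sanity check of the arithmetic; the driver's exact rounding differs slightly, so treat this as a sketch:)

```python
# PCIe Gen3 payload bandwidth: 8 GT/s per lane with 128b/130b line encoding.
gts_per_lane = 8.0      # Gen3 transfer rate in GT/s
lanes = 4               # dmesg shows the x8-capable card on an x4 link
encoding = 128 / 130    # 128b/130b encoding overhead

bandwidth_gbps = gts_per_lane * lanes * encoding
print(f"{bandwidth_gbps:.3f} Gb/s")  # 31.508 Gb/s; dmesg reports 31.504
```

So the card is bandwidth-limited by the slot, but 31.5 Gb/s is still far more than two 10GbE ports need, and it has nothing to do with the link problem.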

lsmod | grep mlx
mlx4_ib 196608 0
ib_uverbs 147456 1 mlx4_ib
mlx4_en 122880 0
ib_core 385024 6 rdma_cm,mlx4_ib,iw_cm,ib_iser,ib_uverbs,ib_cm
mlx4_core 327680 2 mlx4_ib,mlx4_en

root@R86SFox:~# ethtool -i enp5s0d1
driver: mlx4_en
version: 4.0-0
firmware-version: 2.42.5000
bus-info: 0000:05:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes

root@R86SFox:~# ethtool -m enp5s0d1
Identifier : 0x03 (SFP)
Extended identifier : 0x04 (GBIC/SFP defined by 2-wire interface ID)
Connector : 0x00 (unknown or unspecified)
Transceiver codes : 0x00 0x00 0x00 0x08 0x00 0x00 0x00 0x00 0x01
Transceiver type : Ethernet: 1000BASE-T
Transceiver type : Extended: 100G AOC or 25GAUI C2M AOC with worst BER of 5x10^(-5)
Encoding : 0x01 (8B/10B)
BR, Nominal : 1300MBd
Rate identifier : 0x00 (unspecified)
Length (SMF,km) : 0km
Length (SMF) : 0m
Length (50um) : 0m
Length (62.5um) : 0m
Length (Copper) : 100m
Length (OM3) : 0m
Laser wavelength : 0nm
Vendor name : QSFPTEK
Vendor OUI : 00:00:00
Vendor PN : QT-SFP-T
Vendor rev : A
Option values : 0x00 0x10
Option : TX_DISABLE implemented
BR margin, max : 0%
BR margin, min : 0%
Vendor SN : BQT220317200
Date code : 220321
Optical diagnostics support : No
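For what it's worth, the transceiver-code bytes decode as expected: the fourth byte (0x08) is SFF-8472 A0h byte 6, where bit 3 means 1000BASE-T, and the trailing 0x01 is the extended compliance byte that produces the odd "100G AOC" line, so the module's EEPROM coding looks a bit unusual. A quick decoding sketch (my own helper based on the SFF-8472 bit assignments, not ethtool's code):

```python
# Decode the SFP Ethernet compliance byte (SFF-8472 A0h byte 6),
# printed by ethtool -m as the 4th "Transceiver codes" byte.
ETH_CODES = {
    0: "1000BASE-SX",
    1: "1000BASE-LX",
    2: "1000BASE-CX",
    3: "1000BASE-T",
}

def decode_eth_compliance(byte6: int) -> list[str]:
    """Return the Ethernet compliance codes whose bits are set in byte 6."""
    return [name for bit, name in ETH_CODES.items() if byte6 & (1 << bit)]

# The module above reports 0x00 0x00 0x00 0x08 ... -> byte 6 = 0x08
print(decode_eth_compliance(0x08))  # ['1000BASE-T']
```

So the EEPROM does advertise 1000BASE-T correctly; the card just isn't bringing the link up on it.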

root@R86SFox:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:f0:cb:ee:c0:ab brd ff:ff:ff:ff:ff:ff
3: enp2s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:f0:cb:ee:c0:ac brd ff:ff:ff:ff:ff:ff
4: enp3s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 00:f0:cb:ee:c0:ad brd ff:ff:ff:ff:ff:ff
5: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
link/ether 00:02:c9:ca:13:ec brd ff:ff:ff:ff:ff:ff
6: enp5s0d1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 00:02:c9:ca:13:ed brd ff:ff:ff:ff:ff:ff
inet scope global enp5s0d1
valid_lft forever preferred_lft forever
7: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:02:c9:ca:13:ec brd ff:ff:ff:ff:ff:ff
inet scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::202:c9ff:feca:13ec/64 scope link
valid_lft forever preferred_lft forever
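One workaround I've seen suggested for RJ45 SFP modules in SFP+ ports is forcing the port to 1G with autoneg off, e.g. via /etc/network/interfaces. A sketch only; the address is a placeholder and I haven't confirmed this helps with this particular module:

```
auto enp5s0d1
iface enp5s0d1 inet static
        address 192.0.2.10/24   # placeholder, substitute your own
        post-up /sbin/ethtool -s enp5s0d1 speed 1000 duplex full autoneg off
```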

Any hints welcome!
Hey! Did you solve your problem? I have two single-port Mellanox cards in two Proxmox 8 servers, and they just don't want to work... I booted the servers with Parted Magic once, and then they worked. Very strange...

