Battling to get a 10Gb PCIe card working.

watnow101
Apr 9, 2018
Hi

I purchased Tehuti TN9510 10Gb cards to use for my Ceph network, but cannot seem to get them working.

dmesg shows the following.

root@proxmox1:~# dmesg | grep tn40xx
[ 6.444671] tn40xx: Tehuti Network Driver, 0.3.6.15
[ 6.444740] tn40xx: Supported phys : QT2025 TLK10232 AQR105 MUSTANG
[ 6.444883] tn40xx 0000:01:00.0: enabling device (0000 -> 0002)
[ 6.445027] tn40xx: srom 0xffffffff HWver 65535 build 4294967295 lane# 4 max_pl 0x0 mrrs 0x2
[ 6.571198] tn40xx: MDIO busy!
[ 6.876684] tn40xx: MDIO busy!
[ 6.977789] tn40xx: MDIO busy!
[ 7.003054] tn40xx: MDIO busy!
[ 7.020511] tn40xx: MDIO busy!
[ 7.161440] tn40xx: MDIO busy!
[ 7.294607] tn40xx: MDIO busy!
[ 7.426716] tn40xx: MDIO busy!
[ 7.564021] tn40xx: MDIO busy!
[ 7.695989] tn40xx: MDIO busy!
[ 7.780149] tn40xx: MDIO busy!
[ 7.856772] tn40xx: MDIO busy!
[ 7.876502] tn40xx: MDIO busy!
[ 7.900512] tn40xx: MDIO busy!
[ 7.920501] tn40xx: MDIO busy!
[ 7.940504] tn40xx: MDIO busy!
[ 7.960505] tn40xx: MDIO busy!
[ 7.980499] tn40xx: MDIO busy!
[ 8.247279] tn40xx: MDIO busy!
[ 8.264500] tn40xx: MDIO busy!
[ 8.284492] tn40xx: MDIO busy!
[ 8.304501] tn40xx: MDIO busy!
[ 8.324503] tn40xx: MDIO busy!
[ 8.344499] tn40xx: MDIO busy!
[ 8.364497] tn40xx: MDIO busy!
[ 8.384499] tn40xx: MDIO busy!
[ 8.404502] tn40xx: MDIO busy!
[ 8.424494] tn40xx: MDIO busy!
[ 8.444499] tn40xx: MDIO busy!
[ 8.464497] tn40xx: MDIO busy!
[ 8.484502] tn40xx: MDIO busy!
[ 8.504499] tn40xx: MDIO busy!
[ 8.524496] tn40xx: MDIO busy!
[ 8.524518] tn40xx: PHY not found
[ 8.632016] tn40xx: PHY detected on port 1 ID=FFFFFFFF - Native 10Gbps CX4
[ 8.632055] tn40xx: PHY type by svid 7 found 1
[ 8.632905] tn40xx: fw 0xffffffff
[ 8.633319] tn40xx: eth2, Port A
[ 8.633733] tn40xx: 1 1fc9:4025:1186:2900
[ 8.634115] tn40xx: detected 1 cards, 1 loaded
root@proxmox1:~#
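
In case it helps with diagnosis: the all-ones values in the log (srom 0xffffffff, fw 0xffffffff, PHY ID=FFFFFFFF) look like register/EEPROM reads that come back empty, which often points at a PCIe-level problem rather than the driver itself. A quick way to see what the PCI layer reports for the card (standard pciutils commands, filtering on the Tehuti vendor ID 1fc9 shown in the log) would be:

# is the card enumerated at all, and with which IDs?
lspci -nn -d 1fc9:
# negotiated PCIe link speed/width for the card
lspci -vv -d 1fc9: | grep -i -e lnkcap -e lnksta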


root@proxmox1:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 00:30:48:f5:0a:ab brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:30:48:f5:0a:aa brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 3000
    link/ether ff:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
6: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:30:48:f5:0a:ab brd ff:ff:ff:ff:ff:ff

eth2 stays in a DOWN state, even after I run "ifdown eth2 && ifup eth2".
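
If it helps, a few more things that could be checked (assuming plain iproute2/ethtool are installed, nothing Tehuti-specific):

# try forcing the link up and watch for new driver errors
ip link set eth2 up
dmesg | grep tn40xx | tail
# driver name, version and firmware the kernel reports for eth2
ethtool -i eth2

The ff:ff:ff:ff:ff:ff MAC on eth2 also suggests the driver never read a valid address from the card, so the interface may refuse to come up regardless of what ifup does.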
 
I have never tried such a card, but generally speaking: if you need high-performance networking for Ceph, we recommend using the highest-quality hardware with the best driver support, not the cheapest you can get.

(e.g. Intel or Mellanox)
 
