10G copper NIC cannot reach 10G speed

sili

Hi, I have connected an X552 to a D-Link DXS-1210-12TC,
but the link will not come up at 10G.
Code:
root@pve:/etc/network# lspci |grep Eth
03:00.0 Ethernet controller: Intel Corporation Ethernet Connection X552/X557-AT 10GBASE-T
03:00.1 Ethernet controller: Intel Corporation Ethernet Connection X552/X557-AT 10GBASE-T
06:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
06:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)

Code:
root@pve:/etc/network# ifconfig eth3
eth3      Link encap:Ethernet  HWaddr 00:c0:70:90:c0:00
          inet addr:192.168.1.12  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: f000::004:7000f:0005:c000/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1377 errors:12 dropped:0 overruns:0 frame:12
          TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:103334 (100.9 KiB)  TX bytes:1374 (1.3 KiB)

Code:
root@pve:/etc/network# lspci -vnnk -s 03:00.1
03:00.1 Ethernet controller [0200]: Intel Corporation Ethernet Connection X552/X557-AT 10GBASE-T [8086:15ad]
        Subsystem: Super Micro Computer Inc Device [15d9:15ad]
        Physical Slot: 0-1
        Flags: bus master, fast devsel, latency 0, IRQ 17
        Memory at f9800000 (64-bit, prefetchable) [size=2M]
        Memory at f9c00000 (64-bit, prefetchable) [size=16K]
        Expansion ROM at fb100000 [disabled] [size=512K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable+ Count=64 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number 00-00-c9-ff-ff-00-00-00
        Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
        Capabilities: [1b0] Access Control Services
        Kernel driver in use: ixgbe

Code:
root@pve:/etc/network# modinfo ixgbe
filename:       /lib/modules/4.4.35-1-pve/kernel/drivers/net/ethernet/intel/ixgbe/ixgbe.ko
version:        4.4.6
license:        GPL
description:    Intel(R) 10GbE PCI Express Linux Network Driver
author:         Intel Corporation, <linux.nics@intel.com>
srcversion:     A0F436FF26ECE3DC8455D1E
alias:          pci:v00008086d000015ADsv*sd*bc*sc*i*
alias:          pci:v00008086d000015ACsv*sd*bc*sc*i*
alias:          pci:v00008086d000015ABsv*sd*bc*sc*i*
alias:          pci:v00008086d000015AAsv*sd*bc*sc*i*
alias:          pci:v00008086d000015D1sv*sd*bc*sc*i*
alias:          pci:v00008086d00001563sv*sd*bc*sc*i*
alias:          pci:v00008086d00001560sv*sd*bc*sc*i*
alias:          pci:v00008086d00001558sv*sd*bc*sc*i*
alias:          pci:v00008086d0000154Asv*sd*bc*sc*i*
alias:          pci:v00008086d00001557sv*sd*bc*sc*i*
alias:          pci:v00008086d0000154Fsv*sd*bc*sc*i*
alias:          pci:v00008086d0000154Dsv*sd*bc*sc*i*
alias:          pci:v00008086d00001528sv*sd*bc*sc*i*
alias:          pci:v00008086d000010F8sv*sd*bc*sc*i*
alias:          pci:v00008086d0000151Csv*sd*bc*sc*i*
alias:          pci:v00008086d00001529sv*sd*bc*sc*i*
alias:          pci:v00008086d0000152Asv*sd*bc*sc*i*
alias:          pci:v00008086d000010F9sv*sd*bc*sc*i*
alias:          pci:v00008086d00001514sv*sd*bc*sc*i*
alias:          pci:v00008086d00001507sv*sd*bc*sc*i*
alias:          pci:v00008086d000010FBsv*sd*bc*sc*i*
alias:          pci:v00008086d00001517sv*sd*bc*sc*i*
alias:          pci:v00008086d000010FCsv*sd*bc*sc*i*
alias:          pci:v00008086d000010F7sv*sd*bc*sc*i*
alias:          pci:v00008086d00001508sv*sd*bc*sc*i*
alias:          pci:v00008086d000010DBsv*sd*bc*sc*i*
alias:          pci:v00008086d000010F4sv*sd*bc*sc*i*
alias:          pci:v00008086d000010E1sv*sd*bc*sc*i*
alias:          pci:v00008086d000010F1sv*sd*bc*sc*i*
alias:          pci:v00008086d000010ECsv*sd*bc*sc*i*
alias:          pci:v00008086d000010DDsv*sd*bc*sc*i*
alias:          pci:v00008086d0000150Bsv*sd*bc*sc*i*
alias:          pci:v00008086d000010C8sv*sd*bc*sc*i*
alias:          pci:v00008086d000010C7sv*sd*bc*sc*i*
alias:          pci:v00008086d000010C6sv*sd*bc*sc*i*
alias:          pci:v00008086d000010B6sv*sd*bc*sc*i*
depends:        ptp,dca,vxlan
vermagic:       4.4.35-1-pve SMP mod_unload modversions
parm:           InterruptType:Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), default IntMode (deprecated) (array of int)
parm:           IntMode:Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), default 2 (array of int)
parm:           MQ:Disable or enable Multiple Queues, default 1 (array of int)
parm:           DCA:Disable or enable Direct Cache Access, 0=disabled, 1=descriptor only, 2=descriptor and data (array of int)
parm:           RSS:Number of Receive-Side Scaling Descriptor Queues, default 0=number of cpus (array of int)
parm:           VMDQ:Number of Virtual Machine Device Queues: 0/1 = disable (1 queue) 2-16 enable (default=8) (array of int)
parm:           max_vfs:Number of Virtual Functions: 0 = disable (default), 1-63 = enable this many VFs (array of int)
parm:           VEPA:VEPA Bridge Mode: 0 = VEB (default), 1 = VEPA (array of int)
parm:           InterruptThrottleRate:Maximum interrupts per second, per vector, (0,1,956-488281), default 1 (array of int)
parm:           LLIPort:Low Latency Interrupt TCP Port (0-65535) (array of int)
parm:           LLIPush:Low Latency Interrupt on TCP Push flag (0,1) (array of int)
parm:           LLISize:Low Latency Interrupt on Packet Size (0-1500) (array of int)
parm:           LLIEType:Low Latency Interrupt Ethernet Protocol Type (array of int)
parm:           LLIVLANP:Low Latency Interrupt on VLAN priority threshold (array of int)
parm:           FdirPballoc:Flow Director packet buffer allocation level:
                        1 = 8k hash filters or 2k perfect filters
                        2 = 16k hash filters or 4k perfect filters
                        3 = 32k hash filters or 8k perfect filters (array of int)
parm:           AtrSampleRate:Software ATR Tx packet sample rate (array of int)
parm:           FCoE:Disable or enable FCoE Offload, default 1 (array of int)
parm:           MDD:Malicious Driver Detection: (0,1), default 1 = on (array of int)
parm:           LRO:Large Receive Offload (0,1), default 0 = off (array of int)
parm:           allow_unsupported_sfp:Allow unsupported and untested SFP+ modules on 82599 based adapters, default 0 = Disable (array of int)
parm:           dmac_watchdog:DMA coalescing watchdog in microseconds (0,41-10000), default 0 = off (array of int)
parm:           vxlan_rx:VXLAN receive checksum offload (0,1), default 1 = Enable (array of int)

I have tried passing 03:00.1 through to a VM (Ubuntu 16.04 LTS),
and inside the VM the card does negotiate 10G,
but on the host it only gets 1G.
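
For what it's worth, the driver version on the host can be compared with the one the guest loads (just a sketch; the guest interface name is a placeholder):
Code:
# On the Proxmox host
modinfo ixgbe | grep ^version
ethtool -i eth3
# Inside the Ubuntu 16.04 guest (interface name there is a placeholder)
modinfo ixgbe | grep ^version
ethtool -i <guest-interface>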

Any help would be greatly appreciated.
 
I have also tried to upgrade the ixgbe driver,
but the build fails.

Code:
root@pve#apt-get install pve-kernel-4.4.35-1-pve pve-headers-4.4.35-1-pve
root@pve#apt-get install gcc-4.9 make build-essential
root@pve:/usr/local/src/ixgbe/ixgbe-4.5.4/src# make install
make[1]: Entering directory '/usr/src/linux-headers-4.4.35-1-pve'
  CC [M]  /usr/local/src/ixgbe/ixgbe-4.5.4/src/ixgbe_main.o
In file included from /usr/local/src/ixgbe/ixgbe-4.5.4/src/ixgbe_osdep.h:38:0,
                 from /usr/local/src/ixgbe/ixgbe-4.5.4/src/ixgbe_type.h:66,
                 from /usr/local/src/ixgbe/ixgbe-4.5.4/src/ixgbe_dcb.h:28,
                 from /usr/local/src/ixgbe/ixgbe-4.5.4/src/ixgbe.h:45,
                 from /usr/local/src/ixgbe/ixgbe-4.5.4/src/ixgbe_main.c:52:
/usr/local/src/ixgbe/ixgbe-4.5.4/src/kcompat.h:5160:20: error: static declaration of ‘napi_consume_skb’ follows non-static declaration
 static inline void napi_consume_skb(struct sk_buff *skb,
                    ^
In file included from include/linux/if_ether.h:23:0,
                 from include/uapi/linux/ethtool.h:17,
                 from include/linux/ethtool.h:17,
                 from include/linux/netdevice.h:42,
                 from /usr/local/src/ixgbe/ixgbe-4.5.4/src/ixgbe_main.c:31:
include/linux/skbuff.h:2346:6: note: previous declaration of ‘napi_consume_skb’ was here
 void napi_consume_skb(struct sk_buff *skb, int budget);
      ^
scripts/Makefile.build:258: recipe for target '/usr/local/src/ixgbe/ixgbe-4.5.4/src/ixgbe_main.o' failed
make[2]: *** [/usr/local/src/ixgbe/ixgbe-4.5.4/src/ixgbe_main.o] Error 1
Makefile:1420: recipe for target '_module_/usr/local/src/ixgbe/ixgbe-4.5.4/src' failed
make[1]: *** [_module_/usr/local/src/ixgbe/ixgbe-4.5.4/src] Error 2
make[1]: Leaving directory '/usr/src/linux-headers-4.4.35-1-pve'
Makefile:107: recipe for target 'default' failed
make: *** [default] Error 2
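
The conflict seems to be that the 4.4.35-1-pve kernel headers already declare napi_consume_skb (in include/linux/skbuff.h, as the error shows), while the out-of-tree 4.5.4 driver's kcompat.h assumes a vanilla 4.4 kernel and defines its own static copy. A quick way to confirm the symbol is already present in the installed headers (a sketch only):
Code:
grep -n napi_consume_skb /usr/src/linux-headers-$(uname -r)/include/linux/skbuff.h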

Code:
root@pve# pveversion --verbose
proxmox-ve: 4.4-77 (running kernel: 4.4.35-1-pve)
pve-manager: 4.4-5 (running version: 4.4-5/c43015a5)
pve-kernel-4.4.35-1-pve: 4.4.35-77
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-102
pve-firmware: 1.1-10
libpve-common-perl: 4.0-85
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-71
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.4-1
pve-qemu-kvm: 2.7.0-10
pve-container: 1.0-90
pve-firewall: 2.0-33
pve-ha-manager: 1.0-38
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.6-5
lxcfs: 2.0.5-pve2
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve13~bpo80
 
What does "ip a" show?

Code:
root@pve:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 0c:c4:0a:90:c0:0e brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.52/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::ec4:7aff:fe95:c9fe/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0c:c4:0a:90:09:0f brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0c:c0:00:95:c0:01 brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:c0:70:90:c0:00 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.12/24 brd 192.168.1.255 scope global eth3
       valid_lft forever preferred_lft forever
    inet6 fe80::ec4:7aff:fe95:cde7/64 scope link
       valid_lft forever preferred_lft forever
 
Have you made sure the traffic is not routing over eth0 (1Gbps)?
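
One way to check which interface the kernel actually picks for the test traffic (a sketch; 192.168.1.51 is just an example destination, and these commands only read state):
Code:
ip route get 192.168.1.51   # shows the egress interface and source IP used for that destination
ip -s link show eth0        # compare RX/TX counters on the 1G port before and after a test
ip -s link show eth3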

I have a second server of the same model and have tried it there as well.
When only eth3 (the 10G port) is in use,
I still get the same speed (1Gbps).

Code:
root@pve2:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
  valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
  valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
  link/ether 0c:c4:00:95:00:02 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
  link/ether 0c:c4:00:95:00:03 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
  link/ether 0c:c4:00:95:00:ea brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
  link/ether 0c:c4:00:95:01:eb brd ff:ff:ff:ff:ff:ff
  inet 192.168.1.51/24 brd 192.168.1.255 scope global eth3
  valid_lft forever preferred_lft forever
  inet6 fe80::ec4:7aff:fe95:cdeb/64 scope link
  valid_lft forever preferred_lft forever
 
And the device you are testing speed to is 10Gbps?

Yeah, what are you doing to test?

Personally, over my Intel 10gig copper interfaces I get anywhere from 9.25Gbit/s to about 9.75Gbit/s when I test with iPerf, confirming that the interfaces are working properly.
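
For reference, a test of that kind between the two hosts could look like this (a sketch; it assumes iperf is installed on both ends and uses the addresses from the ip a output above):
Code:
# on one host, e.g. 192.168.1.51
iperf -s
# on the other host, run the client with a few parallel streams
iperf -c 192.168.1.51 -P 4 -t 30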

Transferring files over NFS, however, I only seem to average about 160-200MB/s, which is disappointing, especially since my ZFS pool locally on the server can read at over 800MB/s sequentially.

I feel like there is some NFS tuning I need to do.
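
As a starting point, the usual knobs are the NFS read/write block sizes and forcing TCP on the mount (a sketch only; the server address and export path are placeholders):
Code:
mount -t nfs -o rsize=1048576,wsize=1048576,proto=tcp,vers=3 192.168.1.51:/tank/share /mnt/share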

On a side note, I have found that mii-tool always reports the link as 1000mbit half duplex, so I am guessing mii-tool isn't fully compatible with my Intel 10gig NICs; I know for a fact that is wrong, looking at the iperf output.

Also, what kind of cable are you using? I know everyone says Cat6 is OK for short runs, but in practice, I have seen some very mixed results. Cat 6a or Cat 7 is highly recommended.
 

I'm using ethtool to check the copper link speed.
I'm using Cat 6 cable; maybe I'll try Cat 6a or Cat 7.
Code:
root@pve:~# ethtool eth3
Settings for eth3:
        Supported ports: [ TP ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes:  1000baseT/Full
                                10000baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: external
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: umbg
        Wake-on: g
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: yes
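
Since the card advertises 10000baseT/Full but keeps negotiating 1000Mb/s, one thing worth trying before swapping cables (a sketch; 0x1000 is the ethtool bitmask for 10000baseT/Full) is to restrict the advertisement to 10G and restart auto-negotiation:
Code:
ethtool -s eth3 advertise 0x1000   # advertise 10000baseT/Full only
ethtool -r eth3                    # restart auto-negotiation
ethtool eth3 | grep -E 'Speed|Link detected'

If the link then drops instead of coming up at 10G, that points at the cable or the switch port rather than the driver.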
 
Ethtool does properly report my link as 10000 where mii-tool failed.

Yeah, my guess would then be the cable.
 
