TSO offloading problem with igb, whilst ixgbe is fine

Most of our Proxmox clusters utilise Intel 82599ES 10GbE SFP+ NICs, where TSO (TCP Segmentation Offload) works as expected for VMs whose VirtIO NICs also have TSO enabled.
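
For reference, on those clusters the TSO state can be checked from the host on both the physical port and a guest's tap device; the interface names below are examples only (Proxmox names tap devices after the VM ID):

ethtool -k eth0 | grep tcp-segmentation-offload:      # physical port (example name)
ethtool -k tap100i0 | grep tcp-segmentation-offload:  # tap of VM 100, NIC 0 (example name)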

We do, however, have a cluster made up of 3 x Lenovo RD350 servers where the Intel 82599ES 10GbE NICs are used for Ceph replication traffic, and VMs are bridged using LACP bonds made up of Intel i210 1GbE NICs (igb driver).

Is this something I should raise with Lenovo, or is this a kernel/driver issue?

From the host:
[root@kvm6a ~]# ethtool -k eth0 | grep segmentation
tcp-segmentation-offload: on
tx-tcp-segmentation: on
tx-tcp-ecn-segmentation: off [fixed]
tx-tcp-mangleid-segmentation: off
tx-tcp6-segmentation: on
generic-segmentation-offload: on
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: on
tx-gre-csum-segmentation: on
tx-ipxip4-segmentation: on
tx-ipxip6-segmentation: on
tx-udp_tnl-segmentation: on
tx-udp_tnl-csum-segmentation: on
tx-sctp-segmentation: off [fixed]
tx-esp-segmentation: off [fixed]
tx-udp-segmentation: off [fixed]

[root@kvm6a ~]# ethtool -i eth0
driver: igb
version: 5.4.0-k
firmware-version: 3.31, 0x800005cc
expansion-rom-version:
bus-info: 0000:06:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
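
For comparison, the same checks can be run across all four ports (the igb pair used for VM bridging and the ixgbe pair used for Ceph) with a small loop; eth2 and eth3 being the ixgbe ports follows from the configuration further down:

for i in eth0 eth1 eth2 eth3; do
    echo "== $i =="
    ethtool -i $i | grep ^driver
    ethtool -k $i | grep tcp-segmentation-offload:
done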

From a Debian 10 VM:
[admin@debian10vm ~]# ethtool -k eth0 | grep segmentation
tcp-segmentation-offload: on
tx-tcp-segmentation: on
tx-tcp-ecn-segmentation: on
tx-tcp-mangleid-segmentation: off
tx-tcp6-segmentation: on

generic-segmentation-offload: on
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-gre-csum-segmentation: off [fixed]
tx-ipxip4-segmentation: off [fixed]
tx-ipxip6-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
tx-udp_tnl-csum-segmentation: off [fixed]
tx-sctp-segmentation: off [fixed]
tx-esp-segmentation: off [fixed]
tx-udp-segmentation: off [fixed]

Speedtest.net result with TSO enabled in the guest:
[admin@debian10vm ~]# ./speedtest_cli.py --server 23339
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from Syrex (Pty) Ltd (41.79.21.90)...
Hosted by Syrex (Johannesburg) [3.18 km]: 1.932 ms
Testing download speed........................................
Download: 193.19 Mbit/s
Testing upload speed..................................................
Upload: 17.55 Mbit/s
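
To take the internet path out of the equation, the same asymmetry can also be checked with a plain TCP test between the VM and another machine on the LAN; a rough sketch, assuming iperf3 is installed and using a placeholder server address:

# on another host or VM on the same network
iperf3 -s

# from the Debian VM: upload direction, then download direction (-R reverses the test)
iperf3 -c <server-ip>
iperf3 -c <server-ip> -R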

If we disable TSO on the guest:
[admin@debian10vm ~]# ethtool -K eth0 tso off
[admin@debian10vm ~]# ethtool -k eth0 | grep segmentation
tcp-segmentation-offload: off
tx-tcp-segmentation: off
tx-tcp-ecn-segmentation: off
tx-tcp-mangleid-segmentation: off
tx-tcp6-segmentation: off

generic-segmentation-offload: on
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-gre-csum-segmentation: off [fixed]
tx-ipxip4-segmentation: off [fixed]
tx-ipxip6-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
tx-udp_tnl-csum-segmentation: off [fixed]
tx-sctp-segmentation: off [fixed]
tx-esp-segmentation: off [fixed]
tx-udp-segmentation: off [fixed]

[admin@debian10vm ~]# ./speedtest_cli.py --server 23339
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from Syrex (Pty) Ltd (41.79.21.90)...
Hosted by Syrex (Johannesburg) [3.18 km]: 1.844 ms
Testing download speed........................................
Download: 191.94 Mbit/s
Testing upload speed..................................................
Upload: 523.06 Mbit/s
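
If we end up keeping TSO off inside guests on this bridge, the change can be made persistent; one way, assuming the Debian VM uses classic ifupdown (the dhcp stanza here is just illustrative), is a post-up hook:

# /etc/network/interfaces in the guest
auto eth0
iface eth0 inet dhcp
    post-up ethtool -K eth0 tso off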


We are able to utilise all hardware acceleration offloading capabilities when using ixgbe. We are using Open vSwitch (OvS) with the following network configuration:
auto lo
iface lo inet loopback

allow-vmbr0 bond0
iface bond0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSBond
    ovs_bonds eth0 eth1
    pre-up ( ifconfig eth0 mtu 9216 && ifconfig eth1 mtu 9216 )
    ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast tag=1 vlan_mode=native-untagged
    mtu 9216

auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0 vlan1
    mtu 9216

allow-vmbr0 vlan1
iface vlan1 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=1
    ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
    address 10.19.14.34
    netmask 255.255.255.224
    gateway 10.19.14.33
    mtu 9216

allow-vmbr1 bond1
iface bond1 inet manual
    ovs_bridge vmbr1
    ovs_type OVSBond
    ovs_bonds eth2 eth3
    pre-up ( ifconfig eth2 mtu 9216 && ifconfig eth3 mtu 9216 )
    ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast tag=1 vlan_mode=native-untagged
    mtu 9216

auto vmbr1
allow-ovs vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports bond1 vlan18
    mtu 9216

allow-vmbr1 vlan18
iface vlan18 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr1
    ovs_options tag=18
    ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
    address 10.254.1.2
    netmask 255.255.255.0
    mtu 9212

Interface vlan1 is attached to vmbr0, which carries the internet breakout and management traffic for the PVE cluster. Interface vlan18 is attached to vmbr1 and is used for Ceph replication traffic. We typically attach guests to vmbr0 and experience the problem above; moving a guest to vmbr1 allows it to attain full speed with TSO enabled.
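
If this does turn out to be an igb driver issue, a possible host-side workaround (sketch only, untested) would be to switch TSO off on the bond0 members in the same pre-up hook that already sets their MTU:

allow-vmbr0 bond0
iface bond0 inet manual
    ovs_bridge vmbr0
    ovs_type OVSBond
    ovs_bonds eth0 eth1
    pre-up ( ifconfig eth0 mtu 9216 && ethtool -K eth0 tso off && ifconfig eth1 mtu 9216 && ethtool -K eth1 tso off )
    ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast tag=1 vlan_mode=native-untagged
    mtu 9216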
 
