tun: unexpected GSO type

R4PS

So this week I upgraded to PVE 6 and now my syslog is getting spammed with these messages:

Code:
Jul 24 11:26:21 pm11 kernel: [169960.496367] tun: unexpected GSO type: 0x0, gso_size 140, hdr_len 206
Jul 24 11:26:21 pm11 kernel: [169960.496381] tun: unexpected GSO type: 0x0, gso_size 140, hdr_len 206
Jul 24 11:26:21 pm11 kernel: [169960.496415] tun: unexpected GSO type: 0x0, gso_size 140, hdr_len 206
Jul 24 11:26:21 pm11 kernel: [169960.496417] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Jul 24 11:26:21 pm11 kernel: [169960.496418] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Jul 24 11:26:21 pm11 kernel: [169960.496419] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Jul 24 11:26:21 pm11 kernel: [169960.496420] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Jul 24 11:26:21 pm11 kernel: [169960.496808] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Jul 24 11:26:21 pm11 kernel: [169960.497181] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Jul 24 11:26:21 pm11 kernel: [169960.497182] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Jul 24 11:26:21 pm11 kernel: [169960.497182] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Jul 24 11:26:21 pm11 kernel: [169960.497183] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Jul 24 11:26:21 pm11 kernel: [169960.500125] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Jul 24 11:26:21 pm11 kernel: [169960.500438] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Jul 24 11:26:21 pm11 kernel: [169960.500673] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................

Code:
pm11 ~ # pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-10 (running version: 6.2-10/a20769ed)
pve-kernel-5.4: 6.2-4
pve-kernel-helper: 6.2-4
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-4.15: 5.4-19
pve-kernel-4.15.18-30-pve: 4.15.18-58
ceph: 14.2.9-pve1
ceph-fuse: 14.2.9-pve1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-5
libpve-guest-common-perl: 3.1-1
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-9
pve-cluster: 6.1-8
pve-container: 3.1-11
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-11
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-10
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1

Code:
pm11 ~ # ethtool -i ens3f0              
driver: ixgbe
version: 5.1.0-k
firmware-version: 0x000161c1
expansion-rom-version:
bus-info: 0000:02:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

Code:
pm11 ~ # lspci | grep -i eth
02:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
02:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
03:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
03:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)

Code:
pm11 ~ # cat /etc/network/interfaces
auto lo
iface lo inet loopback

#auto enp3s0f1
#iface enp3s0f1 inet static
#       address 172.25.4.11
#       netmask 255.255.255.0
#       gateway 172.25.4.1

# 1G bond
auto bond1
iface bond1 inet manual
        bond_mode 802.3ad
        bond_miimon 100
        bond_downdelay 200
        bond_updelay 200
        bond_slaves enp3s0f0 enp3s0f1
        bond_xmit_hash_policy layer3+4
        bond_lacp_rate fast

# 1G basic bridge
auto vmbr1
iface vmbr1 inet static
        bridge_ports bond1
        bridge_stp off
        bridge_fd 0
        address 0.0.0.0

# 1G hv vlan init
auto vmbr1v254
iface vmbr1v254 inet static
        bridge_ports bond1.254
        bridge_stp off
        bridge_fd 0
        address 0.0.0.0

# 1G hv adapter
auto hv1
iface hv1 inet static
        pre-up ip link add link vmbr1v254 name hv1 type macvtap
        pre-up ip link set hv1 address 1a:2b:3c:34:b7:01 up
        post-down ip link del dev hv1
        address 172.25.4.11/24
        gateway 172.25.4.1

# 10G bond
auto bond0
iface bond0 inet manual
        bond_mode 802.3ad
        bond_miimon 100
        bond_downdelay 200
        bond_updelay 200
        bond_slaves ens3f0 ens3f1
        bond_xmit_hash_policy layer3+4
        bond_lacp_rate fast
        mtu 9032

# 10G basic bridge
auto vmbr0
iface vmbr0 inet static
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
        address 0.0.0.0
        mtu 9032

# 10G hv ClusterNET vlan init
auto vmbr0v255
iface vmbr0v255 inet static
        bridge_ports bond0.255
        bridge_stp off
        bridge_fd 0
        address 0.0.0.0
        mtu 9032

# 10G hv ClusterNET adapter
auto hv0
iface hv0 inet static
        pre-up ip link add link vmbr0v255 name hv0 type macvtap
        pre-up ip link set hv0 address 1a:2b:3c:53:9E:C4 up
        post-down ip link del dev hv0
        address 172.25.5.11/24
        mtu 9032

# 10G hv BackupNET init
auto vmbr0v168
iface vmbr0v168 inet static
        bridge_ports bond0.168
        bridge_stp off
        bridge_fd 0
        address 0.0.0.0
        mtu 1500

# 10G hv BackupNet adapter
auto hv2
iface hv2 inet static
        pre-up ip link add link vmbr0v168 name hv2 type macvtap
        pre-up ip link set hv2 address 1a:2b:3c:a3:10:ad up
        post-down ip link del dev hv2
        address 192.168.0.11/23
        mtu 1500

Yes, I'm using jumbo frames for the Ceph network.
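
To double-check that the jumbo frames actually pass end-to-end on the Ceph VLAN, a don't-fragment ping sized to the MTU works (just a sketch; 172.25.5.12 is only a stand-in for another node's ClusterNET address, and 9004 = 9032 minus 28 bytes of IP/ICMP headers):
Code:
ping -M do -s 9004 172.25.5.12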

I think it's a firmware/kernel issue -> https://www.kernel.org/doc/Documentation/networking/segmentation-offloads.txt

As a workaround: is it safe to run PVE 6 and Ceph 14 with the old kernel,
vmlinuz-4.15.18-30-pve?
 
No, but you can try installing one of the older PVE 6 kernels:
Code:
$ apt list 'pve-kernel-5.4*'
Listing... Done
pve-kernel-5.4-libc-dev/now 5.4.41-2 amd64 [installed,local]
pve-kernel-5.4.22-1-pve/testing,stable,now 5.4.22-1 amd64 [installed,auto-removable]
pve-kernel-5.4.24-1-pve/testing,stable,now 5.4.24-1 amd64 [installed,auto-removable]
pve-kernel-5.4.27-1-pve/testing,stable,stable,now 5.4.27-1 amd64 [installed,auto-removable]
pve-kernel-5.4.30-1-pve/testing,stable,stable,now 5.4.30-1 amd64 [installed,auto-removable]
pve-kernel-5.4.34-1-pve/testing,stable,stable,now 5.4.34-2 amd64 [installed,auto-removable]
pve-kernel-5.4.41-1-pve/now 5.4.41-2 amd64 [installed,local]
pve-kernel-5.4.44-1-pve/testing,stable,stable,now 5.4.44-1 amd64 [installed]
pve-kernel-5.4.44-2-pve/testing,stable,stable,now 5.4.44-2 amd64 [installed]
pve-kernel-5.4/testing,stable,stable,now 6.2-4 all [installed]

and report back, maybe that helps in narrowing down the culprit.
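
For example (package names taken from the apt list above; assuming a GRUB-booted host, where the older kernel can be picked under "Advanced options" at boot):
Code:
apt install pve-kernel-5.4.22-1-pve
# reboot, then select the 5.4.22-1-pve entry under "Advanced options" in the GRUB menu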
 
I tested the oldest one, 5.4.22-1, but nothing changed, so I'm going to suppress the messages.

/etc/rsyslog.d/tun.conf
Code:
:msg, contains, "tun:"    ~
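
(The trailing ~ is rsyslog's legacy discard action; newer rsyslog versions express the same thing with the stop action, and either way the rule only takes effect after restarting rsyslog. The modern form, as a sketch:)
Code:
:msg, contains, "tun:"    stop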
 
I've also been seeing this issue for the last week or two (unsure exactly which update introduced it). I found some articles mentioning it could be related to hardware offloading: https://www.hardwarecrash.de/index....xmox-6-konsole-fehler-unexpected-gso-type-0x0. I disabled these options with ethtool and, so far, no more messages. I'm not clear whether the messages actually indicate a real problem or are just cosmetic, but I will do some testing and see.
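
Roughly the kind of commands in question (a sketch only; eno1 is just a placeholder for the actual NIC name, and which offloads need to go off may differ per driver):
Code:
# show the current offload settings
ethtool -k eno1
# turn off the segmentation/receive offloads
ethtool -K eno1 tso off gso off gro off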

As an interesting aside, I see this only in servers with Broadcom ("Broadcom Limited NetXtreme II BCM5709 Gigabit Ethernet (rev 20)") NICs (my servers with Intel NICs are fine).
 
I don't want to disable hardware features on expensive hardware and waste CPU performance if there is a better way around it.

But this week I encountered some weird behavior and crashes with corosync/pacemaker in a newly deployed IMAP cluster running on the PVE cluster: random crashes and freezes, and other VMs weren't reliably reachable either.

There is definitely a bug in macvtap, because the first VM booted on the HV that uses the same VLAN/network could see and capture all of the HV's incoming traffic! This bug also exists with vmlinuz-4.15.18-30-pve / PVE 5!

After testing around with some settings -> https://wenchma.github.io/2016/09/07/macvtap-deep-dive.html
I removed my macvtap devices and moved my network configuration directly onto the bridge interfaces. The messages are now gone :)
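
For the 10G ClusterNET part that means roughly the following (a sketch based on the config above: the address that used to sit on the hv0 macvtap now lives on the VLAN bridge itself, and the hv0 stanza is gone):
Code:
# 10G hv ClusterNET vlan bridge, now carrying the host address directly
auto vmbr0v255
iface vmbr0v255 inet static
        bridge_ports bond0.255
        bridge_stp off
        bridge_fd 0
        address 172.25.5.11/24
        mtu 9032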

What I don't know yet:
- whether this fixes my corosync/pacemaker problems
- whether this problem exists with LXC containers

@wasteground, are you using LXC / Docker or some macvtap/macvlan setup?
 
@R4PS yep, both the servers I see this issue on are running LXC containers along with VMs. It actually turns out that disabling the hardware offload stuff has not resolved the log messages (but there are definitely fewer of them).

I did also recently see some strange behaviour that sounds sort of similar to what you describe. I have a VM on each Proxmox server (2 servers) in the cluster acting as a router, doing VRRP... only sometimes, if I fail over to the backup router, traffic is blackholed when it loops through the virtual network stack more than once. I haven't had a chance to really dig into that, but I am now wondering whether those messages and that issue are related. I solved it temporarily by removing the dependence on VRRP and, to be honest, had assumed it's either an issue with the VMs doing the routing (VyOS and OPNsense) or with the Proxmox virtual network stack... but as I say, I didn't have time to figure out exactly where or what the issue is.
 
Got hit by this myself.

Code:
proxmox-ve: 6.2-1 (running kernel: 5.4.44-1-pve)
pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
pve-kernel-5.4: 6.2-3
pve-kernel-helper: 6.2-3
pve-kernel-5.3: 6.1-6
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.44-1-pve: 5.4.44-1
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-3
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-7
pve-cluster: 6.1-8
pve-container: 3.1-8
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-3
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1

So before I go get downtime to upgrade kernels, I wonder whether there are any known "fixes" for this issue?

Okay, now when did it start? When I fired up an LXC container to receive https://Netdata.cloud agent data (read: a continuous 10 Mbps data feed) on this collector. The veth is on a tagged VLAN on an Open vSwitch bridge.

Going through all the ethtool options on my interfaces, the "solution" I found was: `ethtool -K veth${containerID}i${interfaceNumber} tx off`
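
Since the veth name changes per container, something like this can apply it to all container interfaces at once (just a sketch; it assumes the vethXXXiY naming from above and that ethtool is available on the host):
Code:
for dev in $(ls /sys/class/net | grep -E '^veth[0-9]+i[0-9]+$'); do
    ethtool -K "$dev" tx off
done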
 