So this week I upgraded to PVE 6, and now my syslog is getting spammed with these messages:
Code:
Jul 24 11:26:21 pm11 kernel: [169960.496367] tun: unexpected GSO type: 0x0, gso_size 140, hdr_len 206
Jul 24 11:26:21 pm11 kernel: [169960.496381] tun: unexpected GSO type: 0x0, gso_size 140, hdr_len 206
Jul 24 11:26:21 pm11 kernel: [169960.496415] tun: unexpected GSO type: 0x0, gso_size 140, hdr_len 206
Jul 24 11:26:21 pm11 kernel: [169960.496417] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Jul 24 11:26:21 pm11 kernel: [169960.496418] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Jul 24 11:26:21 pm11 kernel: [169960.496419] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Jul 24 11:26:21 pm11 kernel: [169960.496420] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Jul 24 11:26:21 pm11 kernel: [169960.496808] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Jul 24 11:26:21 pm11 kernel: [169960.497181] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Jul 24 11:26:21 pm11 kernel: [169960.497182] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Jul 24 11:26:21 pm11 kernel: [169960.497182] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Jul 24 11:26:21 pm11 kernel: [169960.497183] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Jul 24 11:26:21 pm11 kernel: [169960.500125] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Jul 24 11:26:21 pm11 kernel: [169960.500438] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Jul 24 11:26:21 pm11 kernel: [169960.500673] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Code:
pm11 ~ # pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.44-2-pve)
pve-manager: 6.2-10 (running version: 6.2-10/a20769ed)
pve-kernel-5.4: 6.2-4
pve-kernel-helper: 6.2-4
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-4.15: 5.4-19
pve-kernel-4.15.18-30-pve: 4.15.18-58
ceph: 14.2.9-pve1
ceph-fuse: 14.2.9-pve1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-5
libpve-guest-common-perl: 3.1-1
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-9
pve-cluster: 6.1-8
pve-container: 3.1-11
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-11
pve-xtermjs: 4.3.0-1
qemu-server: 6.2-10
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1
Code:
pm11 ~ # ethtool -i ens3f0
driver: ixgbe
version: 5.1.0-k
firmware-version: 0x000161c1
expansion-rom-version:
bus-info: 0000:02:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
Code:
pm11 ~ # lspci | grep -i eth
02:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
02:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
03:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
03:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
Code:
pm11 ~ # cat /etc/network/interfaces
auto lo
iface lo inet loopback

#auto enp3s0f1
#iface enp3s0f1 inet static
#        address 172.25.4.11
#        netmask 255.255.255.0
#        gateway 172.25.4.1

# 1G bond
auto bond1
iface bond1 inet manual
        bond_mode 802.3ad
        bond_miimon 100
        bond_downdelay 200
        bond_updelay 200
        bond_slaves enp3s0f0 enp3s0f1
        bond_xmit_hash_policy layer3+4
        bond_lacp_rate fast

# 1G basic bridge
auto vmbr1
iface vmbr1 inet static
        bridge_ports bond1
        bridge_stp off
        bridge_fd 0
        address 0.0.0.0

# 1G hv vlan init
auto vmbr1v254
iface vmbr1v254 inet static
        bridge_ports bond1.254
        bridge_stp off
        bridge_fd 0
        address 0.0.0.0

# 1G hv adapter
auto hv1
iface hv1 inet static
        pre-up ip link add link vmbr1v254 name hv1 type macvtap
        pre-up ip link set hv1 address 1a:2b:3c:34:b7:01 up
        post-down ip link del dev hv1
        address 172.25.4.11/24
        gateway 172.25.4.1

# 10G bond
auto bond0
iface bond0 inet manual
        bond_mode 802.3ad
        bond_miimon 100
        bond_downdelay 200
        bond_updelay 200
        bond_slaves ens3f0 ens3f1
        bond_xmit_hash_policy layer3+4
        bond_lacp_rate fast
        mtu 9032

# 10G basic bridge
auto vmbr0
iface vmbr0 inet static
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
        address 0.0.0.0
        mtu 9032

# 10G hv ClusterNET vlan init
auto vmbr0v255
iface vmbr0v255 inet static
        bridge_ports bond0.255
        bridge_stp off
        bridge_fd 0
        address 0.0.0.0
        mtu 9032

# 10G hv ClusterNET adapter
auto hv0
iface hv0 inet static
        pre-up ip link add link vmbr0v255 name hv0 type macvtap
        pre-up ip link set hv0 address 1a:2b:3c:53:9E:C4 up
        post-down ip link del dev hv0
        address 172.25.5.11/24
        mtu 9032

# 10G hv BackupNET init
auto vmbr0v168
iface vmbr0v168 inet static
        bridge_ports bond0.168
        bridge_stp off
        bridge_fd 0
        address 0.0.0.0
        mtu 1500

# 10G hv BackupNet adapter
auto hv2
iface hv2 inet static
        pre-up ip link add link vmbr0v168 name hv2 type macvtap
        pre-up ip link set hv2 address 1a:2b:3c:a3:10:ad up
        post-down ip link del dev hv2
        address 192.168.0.11/23
        mtu 1500
Yes, I am using jumbo frames for the Ceph network.
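For what it's worth, this is roughly how I check that the jumbo MTU actually passes end to end on the ClusterNET VLAN (172.25.5.12 is just a placeholder for another node's address; 9004 = 9032 minus 28 bytes of IP/ICMP headers):
Code:
# send unfragmentable jumbo pings to another cluster node
ping -M do -s 9004 -c 3 172.25.5.12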
I think it is a firmware / kernel issue -> https://www.kernel.org/doc/Documentation/networking/segmentation-offloads.txt
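To narrow it down, I could check and temporarily switch off the segmentation offloads on the 10G ports, something like this (not persistent across reboots and it costs some CPU, just to see whether the tun messages stop):
Code:
# current offload settings on the ixgbe slaves
ethtool -k ens3f0 | grep -E 'segmentation-offload|receive-offload'
ethtool -k ens3f1 | grep -E 'segmentation-offload|receive-offload'

# temporarily disable TSO/GSO/GRO as a test
ethtool -K ens3f0 tso off gso off gro off
ethtool -K ens3f1 tso off gso off gro off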
As a workaround: is it safe to run PVE 6 and Ceph 14 with the old kernel?
vmlinuz-4.15.18-30-pve?
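If it is safe, I would pin the 4.15 kernel roughly like this (assuming the node boots via GRUB; the exact menu entry titles have to be copied from the local grub.cfg first):
Code:
# list the exact boot entry titles
grep -E "(menuentry|submenu) '" /boot/grub/grub.cfg | cut -d"'" -f2

# then point GRUB_DEFAULT in /etc/default/grub at the 4.15.18-30-pve entry, e.g.
#   GRUB_DEFAULT="<submenu title>><entry title containing 4.15.18-30-pve>"
update-grub
# reboot afterwards and confirm with: uname -r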