e1000 driver hang

Please, does anyone know the ethtool command to read my current flags for TSO and GSO? I would like to verify that any changes I make in /etc/network/interfaces are actually sticking after a reboot.
ethtool -k %eno1% | grep offload

 
ethtool -k %eno1% | grep offload
I appreciate the reply. I googled earlier and also found ethtool -k <adaptername>; however, it didn't work for me, and your command isn't working either :(

root@proxtestnew:~# ethtool -k %eno1% | grep offload
netlink error: no device matches name (offset 24)
netlink error: No such device

Despite the fact that this command DOES work ... (confused)
ethtool -K eno1 tso off gso off

Any thoughts?
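The error suggests that %eno1% from the earlier command was passed to ethtool literally; the percent signs were only meant as placeholder markers around the interface name. Lower-case -k reads the current feature flags, while upper-case -K changes them, which is why the second command worked. A minimal sketch, assuming the interface really is eno1:

Code:
# read the current offload flags (lower-case -k queries, upper-case -K sets)
ethtool -k eno1 | grep offload
# or just the two features in question
ethtool -k eno1 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload'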
 
show the output of the command ip a
and the networking config
 
root@proxtestnew:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
link/ether 38:ca:84:ac:cf:e5 brd ff:ff:ff:ff:ff:ff
altname enp0s31f6
3: wlp0s20f3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 7c:b5:66:ec:b0:85 brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 38:ca:84:ac:cf:e5 brd ff:ff:ff:ff:ff:ff
inet IP ADDRESS/24 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::3aca:84ff:feac:cfe5/64 scope link
valid_lft forever preferred_lft forever
5: tap102i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr102i0 state UNKNOWN group default qlen 1000
link/ether 1e:7e:32:e2:8d:32 brd ff:ff:ff:ff:ff:ff
6: fwbr102i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 8e:d1:d0:b2:08:4b brd ff:ff:ff:ff:ff:ff
7: fwpr102p0@fwln102i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
link/ether d6:7a:0a:74:d7:a8 brd ff:ff:ff:ff:ff:ff
8: fwln102i0@fwpr102p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr102i0 state UP group default qlen 1000
link/ether 8e:d1:d0:b2:08:4b brd ff:ff:ff:ff:ff:ff
root@proxtestnew:~#




auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
address 192.168.0.140/24
gateway 192.168.0.254
bridge-ports eno1
bridge-stp off
bridge-fd 0

iface wlp0s20f3 inet manual


Thank you for the assistance.
(Yeah, I haven't put the fix in interfaces yet.)
 

jaxjexjox

Add this to your networking config:

auto lo
iface lo inet loopback
iface eno1 inet manual
offload-gso off
offload-gro off
offload-tso off
offload-rx off
offload-tx off
offload-rxvlan off
offload-txvlan off
offload-sg off
offload-ufo off
offload-lro off

Apply it without rebooting:
ethtool -K eno1 gso off gro off tso off tx off rx off rxvlan off txvlan off sg off

Then check:
ethtool -k eno1 | grep offload


You can also count the number of interface resets:
dmesg | grep 'Reset adapter unexpectedly' | wc -l
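To judge whether a change actually helped, it can also be useful to see when the resets happened, not just how many there were. A short sketch, assuming a util-linux dmesg and a persistent systemd journal:

Code:
# count resets since boot
dmesg | grep -c 'Reset adapter unexpectedly'
# the same events with human-readable timestamps
dmesg -T | grep 'Reset adapter unexpectedly'
# kernel messages from the previous boot (requires a persistent journal)
journalctl -k -b -1 | grep 'Reset adapter unexpectedly'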
 

Thank you!

So my original command was:
ethtool -K eno1 tso off gso off

This still left the following on:
generic-receive-offload: on
rx-vlan-offload: on
tx-vlan-offload: on
With those 3 still on, my testing still passed.


Your command indeed turns the whole lot off; nonetheless I'll stick with yours. I assume the performance impact on a 6-core machine isn't too high.

Finally, when you say 'add this to your networking config', I take it you mean /etc/network/interfaces, i.e. add those lines?
(sorry, still learning!) and thank you, thank you all.
 
Everything is correct.
After rebooting, this is my result:

root@proxtestnew:~# ethtool -k eno1 | grep offload
tcp-segmentation-offload: on
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: on
tx-vlan-offload: on
l2-fwd-offload: off [fixed]
hw-tc-offload: off [fixed]
esp-hw-offload: off [fixed]
esp-tx-csum-hw-offload: off [fixed]
rx-udp_tunnel-port-offload: off [fixed]
tls-hw-tx-offload: off [fixed]
tls-hw-rx-offload: off [fixed]
macsec-hw-offload: off [fixed]
hsr-tag-ins-offload: off [fixed]
hsr-tag-rm-offload: off [fixed]
hsr-fwd-offload: off [fixed]
hsr-dup-offload: off [fixed]
root@proxtestnew:~#


Here are the contents of /etc/network/interfaces; I take it I've done something wrong?

auto lo
iface lo inet loopback

iface eno1 inet manual
offload-gso off
offload-gro off
offload-tso off
offload-rx off
offload-tx off
offload-rxvlan off
offload-txvlan off
offload-sg off
offload-ufo off
offload-lro off

auto vmbr0
iface vmbr0 inet static
address IP/24
gateway IP
bridge-ports eno1
bridge-stp off
bridge-fd 0

iface wlp0s20f3 inet manual


source /etc/network/interfaces.d/*
 
OK, thanks for this. I have ended up using the post-up command in the interfaces file, and that makes the settings stick on reboot.

Thanks all for the help. Hopefully there's a longer-term solution for this that can help others; ideally they find this thread before they spend a long, long time diagnosing the issue.
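For anyone who lands on this post first: the exact post-up line is spelled out verbatim in a later reply in this thread. A minimal sketch of the idea, assuming the physical NIC is eno1 configured as a plain bridge port and that ifupdown/ifupdown2 exports $IFACE to the hook:

Code:
iface eno1 inet manual
        # re-apply the offload settings every time the interface is brought up;
        # $IFACE is set by ifupdown/ifupdown2 to the interface being configured
        post-up /sbin/ethtool -K $IFACE tso off gso off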
 
I have now read through almost everything, but I keep getting stuck when trying to run the ethtool command.

Below is the output of ip a
Bash:
root@pve:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 48:4d:7e:d0:27:5b brd ff:ff:ff:ff:ff:ff
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 48:4d:7e:d0:27:5b brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.211/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::4a4d:7eff:fed0:275b/64 scope link
       valid_lft forever preferred_lft forever
4: vmbr1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 92:3f:e9:05:5c:1b brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.1/24 scope global vmbr1
       valid_lft forever preferred_lft forever
5: tap101i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr101i0 state UNKNOWN group default qlen 1000
    link/ether 92:11:6a:7e:1c:47 brd ff:ff:ff:ff:ff:ff
6: fwbr101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a2:cd:e2:50:93:ca brd ff:ff:ff:ff:ff:ff
7: fwpr101p0@fwln101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
So I have managed to figure out that it is on "enp0s25" that I need to disable tso and gso.
But when I run ethtool -K enp0s25 tso off gso off, I get this error message:
Bash:
root@pve:~# ethtool -K enp0s25 tsp off gso off
netlink error: bit name not found (offset 56)
netlink error: Operation not supported

I have tried to Google it, but had no success.

I'm running Proxmox with kernel 5.15.143-1-pve.
My ethernet controller:
00:19.0 Ethernet controller [0200]: Intel Corporation Ethernet Connection I217-LM [8086:153a] (rev 05)
Subsystem: Dell Ethernet Connection I217-LM [1028:0617]
Kernel driver in use: e1000e
Kernel modules: e1000e

(Yes, I have tried adding
iface enp0s25 inet manual
post-up /usr/bin/logger -p debug -t ifup "Disabling segmentation offload for eno1" && /sbin/ethtool -K $IFACE tso off gso off && /usr/bin/logger -p debug -t ifup "Disabled offload for eno1"
but it just spits out the same error.)
 
Show the output of ethtool -k enp0s25 | grep tsp. (Note the command you actually ran was ethtool -K enp0s25 tsp off gso off; "tsp" looks like a typo for "tso".)
 
Thank you all. I was having the same issue on Proxmox VE 8: the network interface kept hanging every couple of minutes under heavy load. I resolved it with the command below (it took effect immediately, without a reboot):

ethtool -K eno1 gso off tso off

Also, for persistence across reboots, I added this to my /etc/network/interfaces underneath the 'iface eno1 inet manual' line, so it looks like this:
iface eno1 inet manual
post-up /usr/bin/logger -p debug -t ifup "Disabling segmentation offload for eno1" && /sbin/ethtool -K $IFACE tso off gso off && /usr/bin/logger -p debug -t ifup "Disabled offload for eno1"

Friendly reminder: when copy-pasting the above, remember to double-check that your interface name is also eno1. If not, modify the line accordingly.
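If you are not sure what your interface is called, a quick way to list the names before pasting anything is:

Code:
# one line per interface
ip -br link show
# or simply list the kernel's network devices
ls /sys/class/net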
 
I registered an account specifically to ask about the latest solution to this problem.
The current situation on my side:
  • The network card model is I219-V:
Code:
lspci -nn  |grep Ethernet
00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (14) I219-V [8086:15fa] (rev 11)
  • The PVE kernel version:
Code:
 uname -a
Linux pve15 6.8.12-1-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-1 (2024-08-05T16:17Z) x86_64 GNU/Linux
  • The PVE version:
Code:
 pveversion -v
proxmox-ve: 8.2.0 (running kernel: 6.8.12-1-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-1
proxmox-kernel-6.8.12-1-pve-signed: 6.8.12-1
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx9
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.2
libpve-guest-common-perl: 5.1.4
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.9
libpve-storage-perl: 8.2.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.7-1
proxmox-backup-file-restore: 3.2.7-1
proxmox-firewall: 0.5.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.1.12
pve-docs: 8.2.3
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.1
pve-firewall: 5.0.7
pve-firmware: 3.13-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.2
pve-qemu-kvm: 9.0.2-2
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.4
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.4-pve1
I tried the methods given above, such as:
  • Adding this to the network card configuration file:
Code:
 post-up /usr/bin/logger -p debug -t ifup "Disabling segmentation offload for {{ DRIVER_NAME }}" && /sbin/ethtool -K $IFACE tso off gso off && /usr/bin/logger -p debug -t ifup "Disabled offload for eno1"
  • Executing the command manually:
Code:
ethtool -K {{ DRIVER_NAME }} tso off gso off

The problem now is not the E1000 hang itself: with the settings above applied, whether by adding them to the network configuration file or by running the command manually, the virtual machines on this host can no longer be pinged. The web console can still access the virtual machines, though.
I don't know how to solve this; please help me.
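While this is being diagnosed, it is worth remembering that the ethtool change is easy to undo, since it is not persistent on its own; re-enabling the features (or rebooting) restores the previous state. A sketch (the eno1 name is only an assumption here; substitute the name shown by ip a):

Code:
# roll back while investigating (interface name is an assumption; substitute your own)
ethtool -K eno1 tso on gso on
# confirm the current state
ethtool -k eno1 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload'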
 
I solved my problem using the method listed at
https://forum.proxmox.com/threads/e1000-driver-hang.58284/post-390709
No problems have occurred after 5 days of observation.
I had tried this method before without success, but this time I can confirm that the problem has basically been solved.
 
I was having this issue with the interface being reset all the time under heavy load.

Here is the error:

Code:
[Fri May 14 23:55:54 2021] ------------[ cut here ]------------
[Fri May 14 23:55:54 2021] NETDEV WATCHDOG: eth0 (e1000e): transmit queue 0 timed out
[Fri May 14 23:55:54 2021] WARNING: CPU: 12 PID: 0 at net/sched/sch_generic.c:448 dev_watchdog+0x264/0x270
[Fri May 14 23:55:54 2021] Modules linked in: veth ebtable_filter ebtables ip_set ip6table_raw iptable_raw softdog ip6table_mangle ip6table_filter ip6_tables xt_conntrack xt_tcpudp xt_nat xt_MASQUERADE iptable_nat nf_nat nfnetlink_log bpfilter nfnetlink intel_rapl_msr intel_rapl_common x86_pkg_temp_thermal intel_powerclamp kvm_intel kvm irqbypass rapl intel_cstate input_leds serio_raw wmi_bmof intel_wmi_thunderbolt intel_pch_thermal acpi_pad mac_hid vhost_net vhost tap coretemp sunrpc autofs4 btrfs zstd_compress dm_crypt raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid0 multipath linear xt_comment xt_recent xt_connlimit nf_conncount xt_state nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c xt_length xt_hl xt_tcpmss xt_TCPMSS ipt_REJECT nf_reject_ipv4 xt_dscp xt_multiport xt_limit iptable_mangle iptable_filter ip_tables x_tables bfq raid1 crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel crypto_simd cryptd glue_helper ahci xhci_pci e1000e i2c_i801
[Fri May 14 23:55:54 2021]  libahci xhci_hcd wmi video pinctrl_cannonlake pinctrl_intel
[Fri May 14 23:55:54 2021] CPU: 12 PID: 0 Comm: swapper/12 Not tainted 5.4.114-1-pve #1
[Fri May 14 23:55:54 2021] Hardware name: Gigabyte Technology Co., Ltd. B360 HD3P-LM/B360HD3PLM-CF, BIOS F4 HZ 04/30/2019
[Fri May 14 23:55:54 2021] RIP: 0010:dev_watchdog+0x264/0x270
[Fri May 14 23:55:54 2021] Code: 48 85 c0 75 e6 eb a0 4c 89 ef c6 05 80 c8 ef 00 01 e8 20 b8 fa ff 89 d9 4c 89 ee 48 c7 c7 98 5c c3 92 48 89 c2 e8 c5 56 15 00 <0f> 0b eb 82 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 55 48 89 e5 41
[Fri May 14 23:55:54 2021] RSP: 0018:ffff9decc03d8e58 EFLAGS: 00010282
[Fri May 14 23:55:54 2021] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000000000083f
[Fri May 14 23:55:54 2021] RDX: 0000000000000000 RSI: 00000000000000f6 RDI: 000000000000083f
[Fri May 14 23:55:54 2021] RBP: ffff9decc03d8e88 R08: 00000000000003a4 R09: ffffffff9339e768
[Fri May 14 23:55:54 2021] R10: 0000000000000774 R11: ffff9decc03d8cb0 R12: 0000000000000001
[Fri May 14 23:55:54 2021] R13: ffff925deb2a8000 R14: ffff925deb2a8480 R15: ffff925deb1ee880
[Fri May 14 23:55:54 2021] FS:  0000000000000000(0000) GS:ffff925dff300000(0000) knlGS:0000000000000000
[Fri May 14 23:55:54 2021] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Fri May 14 23:55:54 2021] CR2: 00007f38443ebbc8 CR3: 0000000e649e6003 CR4: 00000000003606e0
[Fri May 14 23:55:54 2021] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[Fri May 14 23:55:54 2021] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[Fri May 14 23:55:54 2021] Call Trace:
[Fri May 14 23:55:54 2021]  <IRQ>
[Fri May 14 23:55:54 2021]  ? pfifo_fast_enqueue+0x160/0x160
[Fri May 14 23:55:54 2021]  call_timer_fn+0x32/0x130
[Fri May 14 23:55:54 2021]  run_timer_softirq+0x1a5/0x430
[Fri May 14 23:55:54 2021]  ? ktime_get+0x3c/0xa0
[Fri May 14 23:55:54 2021]  ? lapic_next_deadline+0x2c/0x40
[Fri May 14 23:55:54 2021]  ? clockevents_program_event+0x93/0xf0
[Fri May 14 23:55:54 2021]  __do_softirq+0xdc/0x2d4
[Fri May 14 23:55:54 2021]  irq_exit+0xa9/0xb0
[Fri May 14 23:55:54 2021]  smp_apic_timer_interrupt+0x79/0x130
[Fri May 14 23:55:54 2021]  apic_timer_interrupt+0xf/0x20
[Fri May 14 23:55:54 2021]  </IRQ>
[Fri May 14 23:55:54 2021] RIP: 0010:cpuidle_enter_state+0xbd/0x450
[Fri May 14 23:55:54 2021] Code: ff e8 b7 79 88 ff 80 7d c7 00 74 17 9c 58 0f 1f 44 00 00 f6 c4 02 0f 85 63 03 00 00 31 ff e8 ba 81 8e ff fb 66 0f 1f 44 00 00 <45> 85 ed 0f 88 8d 02 00 00 49 63 cd 48 8b 75 d0 48 2b 75 c8 48 8d
[Fri May 14 23:55:54 2021] RSP: 0018:ffff9decc0147e48 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
[Fri May 14 23:55:54 2021] RAX: ffff925dff32ae00 RBX: ffffffff92f57c40 RCX: 000000000000001f
[Fri May 14 23:55:54 2021] RDX: 000002c9a813f813 RSI: 00000000238e3d6b RDI: 0000000000000000
[Fri May 14 23:55:54 2021] RBP: ffff9decc0147e88 R08: 0000000000000002 R09: 000000000002a680
[Fri May 14 23:55:54 2021] R10: 00000a21d04c5df8 R11: ffff925dff329aa0 R12: ffffbdecbfd16f08
[Fri May 14 23:55:54 2021] R13: 0000000000000001 R14: ffffffff92f57cb8 R15: ffffffff92f57ca0
[Fri May 14 23:55:54 2021]  ? cpuidle_enter_state+0x99/0x450
[Fri May 14 23:55:54 2021]  cpuidle_enter+0x2e/0x40
[Fri May 14 23:55:54 2021]  call_cpuidle+0x23/0x40
[Fri May 14 23:55:54 2021]  do_idle+0x22c/0x270
[Fri May 14 23:55:54 2021]  cpu_startup_entry+0x1d/0x20
[Fri May 14 23:55:54 2021]  start_secondary+0x166/0x1c0
[Fri May 14 23:55:54 2021]  secondary_startup_64+0xa4/0xb0
[Fri May 14 23:55:54 2021] ---[ end trace ab9792688d4e93f4 ]---
[Fri May 14 23:55:54 2021] e1000e 0000:00:1f.6 eth0: Reset adapter unexpectedly
[Fri May 14 23:56:00 2021] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[Fri May 14 23:58:08 2021] e1000e 0000:00:1f.6 eth0: Reset adapter unexpectedly
[Fri May 14 23:58:13 2021] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[Sat May 15 00:08:17 2021] e1000e 0000:00:1f.6 eth0: Reset adapter unexpectedly
[Sat May 15 00:08:22 2021] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[Sat May 15 00:08:33 2021] e1000e 0000:00:1f.6 eth0: Reset adapter unexpectedly

It happens on kernels:

* Linux version 5.4.114-1-pve (build@proxmox) (gcc version 8.3.0 (Debian 8.3.0-6)) #1 SMP PVE 5.4.114-1 (Sun, 09 May 2021 17:13:05 +0200) ()
* Linux version 5.11.7-1-pve (build@pve) (gcc (Debian 8.3.0-6) 8.3.0, GNU ld (GNU Binutils for Debian) 2.31.1) #1 SMP PVE 5.11.7-1~bpo10 (Thu, 18 Mar 2021 16:17:24 +0100) ()

I have this NIC:
Code:
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (7) I219-LM (rev 10)

but it might happen on other related controllers as well.

I've tried setting various kernel options in /etc/default/grub, e.g.:
Code:
pcie_aspm=off
but it didn't help.

The only workaround here is (replace eth0 with your interface name):

Code:
apt install -y ethtool
ethtool -K eth0 gso off gro off tso off tx off rx off rxvlan off txvlan off sg off

To make this permanent, just add this to your /etc/network/interfaces:
Code:
auto eth0
iface eth0 inet static
  offload-gso off
  offload-gro off
  offload-tso off
  offload-rx off
  offload-tx off
  offload-rxvlan off
  offload-txvlan off
  offload-sg off
  offload-ufo off
  offload-lro off
  address x.x.x.x
  netmask a.a.a.a
  gateway z.z.z.z

NOTE: disabling only tso or gso didn't help in my case; I had to disable all offloading!
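After editing /etc/network/interfaces like this, the configuration can usually be re-applied without a reboot on hosts that use ifupdown2 (the default on current Proxmox VE), and the result verified. A short sketch, assuming the interface is still called eth0:

Code:
# re-apply the network configuration (ifupdown2)
ifreload -a
# verify that the offload features are now reported as off
ethtool -k eth0 | grep offload
# and keep an eye on whether the adapter resets stop accumulating
dmesg | grep 'Reset adapter unexpectedly' | wc -l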
Thanks; I used your method and it solved my problem.
 
