e1000 driver hang

That's strange - did you have the issue before you upgraded? I previously didn't have the issue on 6.8.12-8-pve, had it frequently (a few times a day) after upgrading to 6.8.12-9-pve, and now that I've pinned 6.8.12-8-pve it's back to being stable.

Is it possible it's a different issue? Have you confirmed you've actually booted the pinned kernel? Could you try rolling back to the Proxmox version that was last stable for you?
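For reference, a quick way to confirm which kernel is actually booted and which one is pinned (assuming proxmox-boot-tool manages your boot entries; the version string below is just an example):
Code:
uname -r                                   # should report 6.8.12-8-pve once the pin has taken effect
proxmox-boot-tool kernel list              # lists installed kernels and shows the pinned one, if any
proxmox-boot-tool kernel pin 6.8.12-8-pve  # (re)pin the kernel, then reboot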
Nope, I hadn't had the issue before upgrading. Yep, 6.8.12-8 was 100% pinned. I do doubt it's a different issue given it started after updating, but I guess that's not 100% ruled out.
I wonder if it's related to the PVE version or other PVE packages like pve-firmware, in addition to the kernel version?
 
I've noticed that even after pinning 6.8.12-8-pve I still get the error in the logs, but the hosts no longer restart when it happens. There does seem to be a network pause whenever the errors are logged, which isn't ideal.

I've tried the tso off change to see if that removes the errors completely.
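In case it helps anyone following along, that change is roughly this (assuming the NIC is eno1; it does not persist across reboots on its own):
Code:
ethtool -K eno1 tso off                            # disable TCP segmentation offload
ethtool -k eno1 | grep tcp-segmentation-offload    # verify it now reports "off"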
 
Just to add my 2c on this problem. The error I'm seeing:
Code:
kernel: e1000e 0000:00:1f.6 eno1: Detected Hardware Unit Hang:

My HP EliteDesk 800 G4 DM 35W (TAA) is the one having the problem, while my HP EliteDesk 800 G5 Desktop Mini is working correctly.

Code:
pve-manager/8.4.1/2a5fa54a8503f96d (running kernel: 6.8.12-9-pve)
Linux pve-02 6.8.12-9-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-9 (2025-03-16T19:18Z) x86_64 GNU/Linux

Code:
sudo dmidecode -s system-product-name
HP EliteDesk 800 G4 DM 35W (TAA)
Code:
00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (7) I219-LM [8086:15bb] (rev 10)
    DeviceName: Onboard Lan
    Subsystem: Hewlett-Packard Company Ethernet Connection (7) I219-LM [103c:83e2]
    Flags: bus master, fast devsel, latency 0, IRQ 132, IOMMU group 8
    Memory at b1200000 (32-bit, non-prefetchable) [size=128K]
    Capabilities: [c8] Power Management version 3
    Capabilities: [d0] MSI: Enable+ Count=1/1 Maskable- 64bit+
    Kernel driver in use: e1000e
    Kernel modules: e1000e
Code:
sudo dmidecode -s system-product-name
HP EliteDesk 800 G5 Desktop Mini
Code:
00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (7) I219-LM [8086:15bb] (rev 10)
    DeviceName: Onboard Lan
    Subsystem: Hewlett-Packard Company Ethernet Connection (7) I219-LM [103c:8595]
    Flags: bus master, fast devsel, latency 0, IRQ 125, IOMMU group 8
    Memory at e1200000 (32-bit, non-prefetchable) [size=128K]
    Capabilities: [c8] Power Management version 3
    Capabilities: [d0] MSI: Enable+ Count=1/1 Maskable- 64bit+
    Kernel driver in use: e1000e
    Kernel modules: e1000e


What are the last numbers here?
Code:
    Subsystem: Hewlett-Packard Company Ethernet Connection (7) I219-LM [103c:83e2]
    Subsystem: Hewlett-Packard Company Ethernet Connection (7) I219-LM [103c:8595]
 
I was having this issue with the interface being reset all the time under heavy load.

Here is the error:

Code:
[Fri May 14 23:55:54 2021] ------------[ cut here ]------------
[Fri May 14 23:55:54 2021] NETDEV WATCHDOG: eth0 (e1000e): transmit queue 0 timed out
[Fri May 14 23:55:54 2021] WARNING: CPU: 12 PID: 0 at net/sched/sch_generic.c:448 dev_watchdog+0x264/0x270
[Fri May 14 23:55:54 2021] Modules linked in: veth ebtable_filter ebtables ip_set ip6table_raw iptable_raw softdog ip6table_mangle ip6table_filter ip6_tables xt_conntrack xt_tcpudp xt_nat xt_MASQUERADE iptable_nat nf_nat nfnetlink_log bpfilter nfnetlink intel_rapl_msr intel_rapl_common x86_pkg_temp_thermal intel_powerclamp kvm_intel kvm irqbypass rapl intel_cstate input_leds serio_raw wmi_bmof intel_wmi_thunderbolt intel_pch_thermal acpi_pad mac_hid vhost_net vhost tap coretemp sunrpc autofs4 btrfs zstd_compress dm_crypt raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq raid0 multipath linear xt_comment xt_recent xt_connlimit nf_conncount xt_state nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c xt_length xt_hl xt_tcpmss xt_TCPMSS ipt_REJECT nf_reject_ipv4 xt_dscp xt_multiport xt_limit iptable_mangle iptable_filter ip_tables x_tables bfq raid1 crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel crypto_simd cryptd glue_helper ahci xhci_pci e1000e i2c_i801
[Fri May 14 23:55:54 2021]  libahci xhci_hcd wmi video pinctrl_cannonlake pinctrl_intel
[Fri May 14 23:55:54 2021] CPU: 12 PID: 0 Comm: swapper/12 Not tainted 5.4.114-1-pve #1
[Fri May 14 23:55:54 2021] Hardware name: Gigabyte Technology Co., Ltd. B360 HD3P-LM/B360HD3PLM-CF, BIOS F4 HZ 04/30/2019
[Fri May 14 23:55:54 2021] RIP: 0010:dev_watchdog+0x264/0x270
[Fri May 14 23:55:54 2021] Code: 48 85 c0 75 e6 eb a0 4c 89 ef c6 05 80 c8 ef 00 01 e8 20 b8 fa ff 89 d9 4c 89 ee 48 c7 c7 98 5c c3 92 48 89 c2 e8 c5 56 15 00 <0f> 0b eb 82 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 55 48 89 e5 41
[Fri May 14 23:55:54 2021] RSP: 0018:ffff9decc03d8e58 EFLAGS: 00010282
[Fri May 14 23:55:54 2021] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000000000083f
[Fri May 14 23:55:54 2021] RDX: 0000000000000000 RSI: 00000000000000f6 RDI: 000000000000083f
[Fri May 14 23:55:54 2021] RBP: ffff9decc03d8e88 R08: 00000000000003a4 R09: ffffffff9339e768
[Fri May 14 23:55:54 2021] R10: 0000000000000774 R11: ffff9decc03d8cb0 R12: 0000000000000001
[Fri May 14 23:55:54 2021] R13: ffff925deb2a8000 R14: ffff925deb2a8480 R15: ffff925deb1ee880
[Fri May 14 23:55:54 2021] FS:  0000000000000000(0000) GS:ffff925dff300000(0000) knlGS:0000000000000000
[Fri May 14 23:55:54 2021] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Fri May 14 23:55:54 2021] CR2: 00007f38443ebbc8 CR3: 0000000e649e6003 CR4: 00000000003606e0
[Fri May 14 23:55:54 2021] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[Fri May 14 23:55:54 2021] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[Fri May 14 23:55:54 2021] Call Trace:
[Fri May 14 23:55:54 2021]  <IRQ>
[Fri May 14 23:55:54 2021]  ? pfifo_fast_enqueue+0x160/0x160
[Fri May 14 23:55:54 2021]  call_timer_fn+0x32/0x130
[Fri May 14 23:55:54 2021]  run_timer_softirq+0x1a5/0x430
[Fri May 14 23:55:54 2021]  ? ktime_get+0x3c/0xa0
[Fri May 14 23:55:54 2021]  ? lapic_next_deadline+0x2c/0x40
[Fri May 14 23:55:54 2021]  ? clockevents_program_event+0x93/0xf0
[Fri May 14 23:55:54 2021]  __do_softirq+0xdc/0x2d4
[Fri May 14 23:55:54 2021]  irq_exit+0xa9/0xb0
[Fri May 14 23:55:54 2021]  smp_apic_timer_interrupt+0x79/0x130
[Fri May 14 23:55:54 2021]  apic_timer_interrupt+0xf/0x20
[Fri May 14 23:55:54 2021]  </IRQ>
[Fri May 14 23:55:54 2021] RIP: 0010:cpuidle_enter_state+0xbd/0x450
[Fri May 14 23:55:54 2021] Code: ff e8 b7 79 88 ff 80 7d c7 00 74 17 9c 58 0f 1f 44 00 00 f6 c4 02 0f 85 63 03 00 00 31 ff e8 ba 81 8e ff fb 66 0f 1f 44 00 00 <45> 85 ed 0f 88 8d 02 00 00 49 63 cd 48 8b 75 d0 48 2b 75 c8 48 8d
[Fri May 14 23:55:54 2021] RSP: 0018:ffff9decc0147e48 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
[Fri May 14 23:55:54 2021] RAX: ffff925dff32ae00 RBX: ffffffff92f57c40 RCX: 000000000000001f
[Fri May 14 23:55:54 2021] RDX: 000002c9a813f813 RSI: 00000000238e3d6b RDI: 0000000000000000
[Fri May 14 23:55:54 2021] RBP: ffff9decc0147e88 R08: 0000000000000002 R09: 000000000002a680
[Fri May 14 23:55:54 2021] R10: 00000a21d04c5df8 R11: ffff925dff329aa0 R12: ffffbdecbfd16f08
[Fri May 14 23:55:54 2021] R13: 0000000000000001 R14: ffffffff92f57cb8 R15: ffffffff92f57ca0
[Fri May 14 23:55:54 2021]  ? cpuidle_enter_state+0x99/0x450
[Fri May 14 23:55:54 2021]  cpuidle_enter+0x2e/0x40
[Fri May 14 23:55:54 2021]  call_cpuidle+0x23/0x40
[Fri May 14 23:55:54 2021]  do_idle+0x22c/0x270
[Fri May 14 23:55:54 2021]  cpu_startup_entry+0x1d/0x20
[Fri May 14 23:55:54 2021]  start_secondary+0x166/0x1c0
[Fri May 14 23:55:54 2021]  secondary_startup_64+0xa4/0xb0
[Fri May 14 23:55:54 2021] ---[ end trace ab9792688d4e93f4 ]---
[Fri May 14 23:55:54 2021] e1000e 0000:00:1f.6 eth0: Reset adapter unexpectedly
[Fri May 14 23:56:00 2021] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[Fri May 14 23:58:08 2021] e1000e 0000:00:1f.6 eth0: Reset adapter unexpectedly
[Fri May 14 23:58:13 2021] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[Sat May 15 00:08:17 2021] e1000e 0000:00:1f.6 eth0: Reset adapter unexpectedly
[Sat May 15 00:08:22 2021] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[Sat May 15 00:08:33 2021] e1000e 0000:00:1f.6 eth0: Reset adapter unexpectedly

It happens on kernels:

* Linux version 5.4.114-1-pve (build@proxmox) (gcc version 8.3.0 (Debian 8.3.0-6)) #1 SMP PVE 5.4.114-1 (Sun, 09 May 2021 17:13:05 +0200) ()
* Linux version 5.11.7-1-pve (build@pve) (gcc (Debian 8.3.0-6) 8.3.0, GNU ld (GNU Binutils for Debian) 2.31.1) #1 SMP PVE 5.11.7-1~bpo10 (Thu, 18 Mar 2021 16:17:24 +0100) ()

I have this NIC:
Code:
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (7) I219-LM (rev 10)

but it might well happen on other related NICs too.

I've tried setting various kernel options in /etc/default/grub, e.g.:
Code:
pcie_aspm=off
but it didn't help.
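For completeness, the kind of edit meant here - roughly like this in /etc/default/grub, followed by update-grub and a reboot (pcie_aspm=off is just the example above; other options go in the same place):
Code:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_aspm=off"

# apply and reboot
update-grub
reboot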

The only workaround here is (replace eth0 with your interface name):

Code:
apt install -y ethtool
ethtool -K eth0 gso off gro off tso off tx off rx off rxvlan off txvlan off sg off

To make this permanent, just add this to your /etc/network/interfaces:
Code:
auto eth0
iface eth0 inet static
  offload-gso off
  offload-gro off
  offload-tso off
  offload-rx off
  offload-tx off
  offload-rxvlan off
  offload-txvlan off
  offload-sg off
  offload-ufo off
  offload-lro off
  address x.x.x.x
  netmask a.a.a.a
  gateway z.z.z.z

NOTE: disabling only tso or gso didn't help in my case; I had to disable all offloading!
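To double-check what is actually still enabled after applying the command above (same eth0 naming as above; feature names can differ slightly between drivers):
Code:
ethtool -k eth0 | grep ': on'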


This was a plague for a few days and it took a while for me to track this post down. Much appreciated - it's great to have a stable system after so many obnoxious intermittent crashes.
 
Adding my experience here...

I have a Dell 7060 and a Lenovo M720q running PVE 8.4.1 with Linux 6.8.12-9-pve.

The Lenovo has an Intel I219-V (rev 10) while the Dell has an Intel I219-LM (rev 10).

The Dell has been fine but the Lenovo has fallen off the network a few times. Disconnecting and reconnecting the network cable got it back on the network.

Knock on wood, running the following command seems to have resolved the issue for me.

Code:
ethtool -K eno1 gso off gro off tso off tx off rx off rxvlan off txvlan off sg off
 
New kernel version 6.8.12-10 was released. A preliminary backup test succeeded and the PVE host didn't freeze.
There is also another difference which may play a role - I moved PBS from a VM to dedicated hardware. So far so good; I'll see in a few days whether it stays stable.
 
From the Change log:
proxmox-kernel-6.8 (6.8.12-10) bookworm; urgency=medium
* cherry-pick "bnxt_en: Fix GSO type for HW GRO packets on 5750X chips".
* update source and patches to Ubuntu-6.8.0-60.63
 
The issue still occurs on 6.8.12-9 - I am currently away from my premises and cannot check further, as the machine is offline. Any 'higher' network traffic - e.g. the speedtest in YABS - kills the machine's network access.

The network adapter in my machine is an Intel I218-LM.

The command mentioned above (ethtool -K etc.) does not seem to have resolved the issue - unless I'm not supposed to restart the host after running it.

I have not yet tried the -10 kernel as I was unaware of its release. I'll try it later on and post my findings.
 
The issue still occurs on 6.8.12-9 - I am currently away from my premises and cannot check further, as the machine is offline. Any 'higher' network traffic - e.g. the speedtest in YABS - kills the machine's network access.

The network adapter in my machine is an Intel I218-LM.

The command mentioned above (ethtool -K etc.) does not seem to have resolved the issue - unless I'm not supposed to restart the host after running it.

I have not yet tried the -10 kernel as I was unaware of its release. I'll try it later on and post my findings.
The ethtool command is not persistent unless you add it to the /etc/network/interfaces file
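For example, something along these lines under the physical NIC's stanza re-applies the settings at boot (a sketch assuming a typical PVE setup where eno1 is bridged into vmbr0; adjust the interface name and the offload list to whatever you actually disabled):
Code:
# /etc/network/interfaces (relevant stanza only)
iface eno1 inet manual
        post-up /sbin/ethtool -K eno1 gso off gro off tso off tx off rx off rxvlan off txvlan off sg off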
 
The ethtool command is not persistent unless you add it to the /etc/network/interfaces file
Thank you for sharing this with me.

Once I get to the premises, I will therefore test stability with the ethtool change and report back.

I'll also try the new kernel once I 'verify' whether the ethtool changes actually have an effect on my setup.
 
The new kernel does not help with the network hang issues.

ethtool -K eno1 gso off gro off tso off tx off rx off rxvlan off txvlan off sg off
This command has resolved all of my issues; my Z440 host can now run VMs with significant network traffic without the driver hanging, including on 6.8.12-10-pve. Great news - massive respect to whoever came up with this.
 
I've also been affected by this error since the end of February.

My NIC
Code:
#lspci -v | grep Ethernet
00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (Lewisville) (rev 04)

Example hang message:
Code:
2025-04-24T22:30:25.002276+02:00 scooter kernel: [35254.383431] e1000e 0000:00:19.0 eno1: Detected Hardware Unit Hang:
2025-04-24T22:30:25.002293+02:00 scooter kernel: [35254.383431]   TDH                  <4d>
2025-04-24T22:30:25.002295+02:00 scooter kernel: [35254.383431]   TDT                  <7a>
2025-04-24T22:30:25.002296+02:00 scooter kernel: [35254.383431]   next_to_use          <7a>
2025-04-24T22:30:25.002296+02:00 scooter kernel: [35254.383431]   next_to_clean        <4c>
2025-04-24T22:30:25.002297+02:00 scooter kernel: [35254.383431] buffer_info[next_to_clean]:
2025-04-24T22:30:25.002298+02:00 scooter kernel: [35254.383431]   time_stamp           <1021531a1>
2025-04-24T22:30:25.002299+02:00 scooter kernel: [35254.383431]   next_to_watch        <4d>
2025-04-24T22:30:25.002300+02:00 scooter kernel: [35254.383431]   jiffies              <102155e00>
2025-04-24T22:30:25.002310+02:00 scooter kernel: [35254.383431]   next_to_watch.status <0>
2025-04-24T22:30:25.002311+02:00 scooter kernel: [35254.383431] MAC Status             <80083>
2025-04-24T22:30:25.002312+02:00 scooter kernel: [35254.383431] PHY Status             <796d>
2025-04-24T22:30:25.002313+02:00 scooter kernel: [35254.383431] PHY 1000BASE-T Status  <7800>
2025-04-24T22:30:25.002314+02:00 scooter kernel: [35254.383431] PHY Extended Status    <3000>
2025-04-24T22:30:25.002315+02:00 scooter kernel: [35254.383431] PCI Status             <10>
2025-04-24T22:30:25.661535+02:00 scooter pvestatd[951]: storage 'diskstation' is not online

In my case, the following offload options were enabled (before changing anything):
Code:
# ethtool -k eno1 | grep 'offload.*on'
tcp-segmentation-offload: on
generic-segmentation-offload: on
generic-receive-offload: on
rx-vlan-offload: on
tx-vlan-offload: on

For now I decided to switch off tso and gso in /etc/network/interfaces:
Code:
auto lo
iface lo inet loopback

iface eno1 inet manual
        post-up /sbin/ethtool -K eno1 tso off gso off

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.100
        netmask 255.255.255.0
        gateway 192.168.0.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

Can someone answer the following questions for me?

1. Are there any known insights that could explain why the error has only been occurring for me since February, even though my hardware setup has remained unchanged for years? Were there any relevant kernel changes recently?

2. Should I also switch off the other enabled offload options (gro, rx, rxvlan, tx and txvlan), as mentioned in some threads?

3. Has someone found a way to reliably reproduce the error? I have run multiple backups and also tried several runs with iperf3, but I cannot reliably reproduce it (example run sketched below).
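For reference, the kind of iperf3 run meant in point 3 - the flags here are only an example, with a second machine on the LAN acting as the server:
Code:
# on another machine on the LAN
iperf3 -s

# on the PVE host: several parallel streams, both directions, a few minutes each
iperf3 -c <server-ip> -P 8 -t 300
iperf3 -c <server-ip> -P 8 -t 300 -R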
 
I've also been affected by this error since the end of February.
I have as well - it's been driving me insane. Every two hours, it seems, I have to go reboot my server because of an e1000e hang on my motherboard's ethernet port. I have a lot of USB devices and external HDDs plugged into the board, so I figure the lanes are just getting too congested with everything.

Having run into boot issues on top of this, I decided to just reset the CMOS and reinstall Proxmox (latest, 8.4 at this time) from scratch. I bought a $13 2.5 Gbps PCIe NIC from Amazon and set it as the main ethernet interface during setup.

https://www.amazon.com/dp/B0BLN82WQ4

I have restored almost half a terabyte so far over the network from my proxmox backup instance without issues. I'll come back after my NAS VM and GPU-passthrough VMs are up and stressing the system. I'll try to do some big network copies too to stress it into crashing.

EDIT: forgot to mention, I did try the
Code:
ethtool -K eno1 gso off gro off tso off tx off rx off rxvlan off txvlan off sg off
command, but it didn't fix my e1000e issue
 
I have as well - it's been driving me insane. Every two hours, it seems, I have to go reboot my server because of an e1000e hang on my motherboard's ethernet port. I have a lot of USB devices and external HDDs plugged into the board, so I figure the lanes are just getting too congested with everything.

Having run into boot issues on top of this, I decided to just reset the CMOS and reinstall Proxmox (latest, 8.4 at this time) from scratch. I bought a $13 2.5 Gbps PCIe NIC from Amazon and set it as the main ethernet interface during setup.

https://www.amazon.com/dp/B0BLN82WQ4

I have restored almost half a terabyte so far over the network from my proxmox backup instance without issues. I'll come back after my NAS VM and GPU-passthrough VMs are up and stressing the system. I'll try to do some big network copies too to stress it into crashing.

EDIT: forgot to mention, I did try the
Code:
ethtool -K eno1 gso off gro off tso off tx off rx off rxvlan off txvlan off sg off
command, but it didn't fix my e1000e issue
Rock-solid stable so far! It's been roughly a day and I haven't experienced any stability issues.

I know this isn't the ideal solution for people looking to remedy their e1000e errors, but there's just something under the hood that isn't working with that particular driver. $13 and a PCIe slot were well worth it to save all of that frustration.

Unfortunately I don't remember what version of Proxmox I had running on the machine prior, but it's possible that the reinstall was part of - or was - the fix.
 
I’m considering using the network card issue as an ‘excuse’ to get new hardware. Right now, I’m thinking about a reasonably recent Lenovo Tiny. Can anyone tell me from which Intel network card model onward the problem no longer occurs?
 
Rock-solid stable so far! It's been roughly a day and I haven't experienced any stability issues.

I know this isn't the ideal solution for people looking to remedy their e1000e errors, but there's just something under the hood that isn't working with that particular driver. $13 and a PCIe slot were well worth it to save all of that frustration.

Unfortunately I don't remember what version of Proxmox I had running on the machine prior, but it's possible that the reinstall was part of - or was - the fix.
Follow-up: I've been copying hundreds of GBs across the network and USB devices for the past few days, and it's all good still.
 
From the Change log:
proxmox-kernel-6.8 (6.8.12-10) bookworm; urgency=medium
* cherry-pick "bnxt_en: Fix GSO type for HW GRO packets on 5750X chips".
* update source and patches to Ubuntu-6.8.0-60.63

This new kernel solved my problem.

Code:
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (7) I219-LM (rev 10)

No more ethtool or ACPI fixes needed.
 
Code:
May 04 10:05:15 pve-02 kernel: e1000e 0000:00:1f.6 eno1: Detected Hardware Unit Hang:
                                 TDH                  <a2>
                                 TDT                  <b2>
                                 next_to_use          <b2>
                                 next_to_clean        <a1>
                               buffer_info[next_to_clean]:
                                 time_stamp           <12c0ea9cc>
                                 next_to_watch        <a2>
                                 jiffies              <12c0eaec0>
                                 next_to_watch.status <0>
                               MAC Status             <80083>
                               PHY Status             <796d>
                               PHY 1000BASE-T Status  <7c00>
                               PHY Extended Status    <3000>

After 8 days the error came back =(
 
Code:
May 04 10:05:15 pve-02 kernel: e1000e 0000:00:1f.6 eno1: Detected Hardware Unit Hang:
                                 TDH                  <a2>
                                 TDT                  <b2>
                                 next_to_use          <b2>
                                 next_to_clean        <a1>
                               buffer_info[next_to_clean]:
                                 time_stamp           <12c0ea9cc>
                                 next_to_watch        <a2>
                                 jiffies              <12c0eaec0>
                                 next_to_watch.status <0>
                               MAC Status             <80083>
                               PHY Status             <796d>
                               PHY 1000BASE-T Status  <7c00>
                               PHY Extended Status    <3000>

After 8 days the error came back =(
Same here.
Did you apply the suggested fixes via ethtool?
 
This new kernel solved my problem.

Code:
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (7) I219-LM (rev 10)

No more ethtool or ACPI fixes needed.
I have the same controller:

Bash:
root@pve2:~# uname -a
Linux pve2 6.8.12-10-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-10 (2025-04-18T07:39Z) x86_64 GNU/Linux
root@pve2:~# lspci -v | grep Ethernet
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (7) I219-LM (rev 10)
        DeviceName: Onboard - Ethernet
        Subsystem: Gigabyte Technology Co., Ltd Ethernet Connection (7) I219-LM

I only started having this issue on 6.8.12-10-pve yesterday; I hadn't restarted the host in 4 months and it had worked without any issues.

Another host running 6.8.12-5-pve with the exact same ethernet card doesn't have any issues at all.

Bash:
root@pve3:~# uname -a
Linux pve3 6.8.12-5-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-5 (2024-12-03T10:26Z) x86_64 GNU/Linux
root@pve3:~# lspci -v | grep Ethernet
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (7) I219-LM (rev 10)
        DeviceName: Onboard - Ethernet
        Subsystem: Gigabyte Technology Co., Ltd Ethernet Connection (7) I219-LM
 