igb NETDEV WATCHDOG - network interface crashes with kernel 6.17

jokergermany · New Member · Feb 12, 2026
When the first crash occurred, the problem looked like this:
Code:
[ 5880.143371] igb 0000:05:00.0 nic1: NETDEV WATCHDOG: CPU: 6: transmit queue 1 timed out 5021 ms

I switched to Proxmox two weeks ago.
Before Proxmox I ran OMV (Debian) with kernel 6.1.0 on this system and had no problems.

This is the error message from today:
https://paste.tchncs.de/upload/snake-squid-otter

cropped version:
Code:
[Mon Feb 23 12:18:21 2026] vmbr1: port 2(tap100i1) entered forwarding state
[Mon Feb 23 13:35:41 2026] igb 0000:05:00.0 nic1: PCIe link lost
[Mon Feb 23 13:35:41 2026] ------------[ cut here ]------------
[Mon Feb 23 13:35:41 2026] igb: Failed to read reg 0xc030!
[Mon Feb 23 13:35:41 2026] WARNING: CPU: 6 PID: 701384 at drivers/net/ethernet/intel/igb/igb_main.c:724 igb_rd32.cold+0x3a/0x46 [igb]
[Mon Feb 23 13:35:41 2026] Modules linked in: ip6t_REJECT nf_reject_ipv6 udp_diag ipt_REJECT nf_reject_ipv4 xt_multiport tcp_diag inet_diag bluetooth xt_conntrack xt_MASQUERADE xfrm_user xfrm_algo xt_set xt_addrtype nft_compat nft_nat nft_ct nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib cfg80211 nft_masq nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 overlay veth ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables iptable_filter nf_tables bonding tls softdog sunrpc binfmt_misc nfnetlink_log sch_fq_codel amd_atl intel_rapl_msr intel_rapl_common amd64_edac edac_mce_amd ipmi_ssif kvm_amd snd_hda_intel snd_hda_codec snd_hda_core kvm snd_intel_dspcfg snd_intel_sdw_acpi snd_hwdep irqbypass acpi_ipmi snd_pcm polyval_clmulni snd_timer ipmi_si ghash_clmulni_intel aesni_intel snd ipmi_devintf rapl k10temp pcspkr soundcore ast ccp ipmi_msghandler mac_hid zfs(PO) spl(O) msr vhost_net vhost vhost_iotlb tap efi_pstore nfnetlink dmi_sysfs ip_tables x_tables autofs4 btrfs blake2b_generic xor raid6_pq
[Mon Feb 23 13:35:41 2026]  dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio nvme xhci_pci igb nvme_core ahci i2c_piix4 i2c_algo_bit nvme_keyring xhci_hcd dca libahci i2c_smbus nvme_auth 8250_dw
[Mon Feb 23 13:35:41 2026] CPU: 6 UID: 0 PID: 701384 Comm: kworker/6:0 Tainted: P           O        6.17.9-1-pve #1 PREEMPT(voluntary)
[Mon Feb 23 13:35:41 2026] Tainted: [P]=PROPRIETARY_MODULE, [O]=OOT_MODULE
[Mon Feb 23 13:35:41 2026] Hardware name: GIGABYTE G431-MM0-OT/MJ11-EC1-OT, BIOS F09 09/14/2021
[Mon Feb 23 13:35:41 2026] Workqueue: events igb_watchdog_task [igb]
[Mon Feb 23 13:35:41 2026] RIP: 0010:igb_rd32.cold+0x3a/0x46 [igb]
[Mon Feb 23 13:35:41 2026] Code: c0 49 89 44 24 08 e8 3a 97 a5 e3 49 8b bc 24 28 ff ff ff e8 bd 68 40 e4 84 c0 74 15 89 de 48 c7 c7 08 a2 8f c0 e8 8b 17 b5 e3 <0f> 0b e9 18 c6 fd ff e9 13 c6 fd ff 0f b6 d0 be 00 00 04 00 48 c7
[Mon Feb 23 13:35:41 2026] RSP: 0018:ffffd3c1543e7d80 EFLAGS: 00010246
[Mon Feb 23 13:35:41 2026] RAX: 0000000000000000 RBX: 000000000000c030 RCX: 0000000000000000
[Mon Feb 23 13:35:41 2026] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[Mon Feb 23 13:35:41 2026] RBP: ffffd3c1543e7d90 R08: 0000000000000000 R09: 0000000000000000
[Mon Feb 23 13:35:41 2026] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8b8960584fd8
[Mon Feb 23 13:35:41 2026] R13: 0000000000000000 R14: 0000000000000000 R15: ffff8b8960a38bc0
[Mon Feb 23 13:35:41 2026] FS:  0000000000000000(0000) GS:ffff8b8ca8486000(0000) knlGS:0000000000000000
[Mon Feb 23 13:35:41 2026] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Mon Feb 23 13:35:41 2026] CR2: 00007f017046de28 CR3: 0000000102756000 CR4: 00000000003506f0
[Mon Feb 23 13:35:41 2026] Call Trace:
[Mon Feb 23 13:35:41 2026]  <TASK>
[Mon Feb 23 13:35:41 2026]  igb_update_stats+0x9b/0x850 [igb]
[Mon Feb 23 13:35:41 2026]  igb_watchdog_task+0xbb/0x480 [igb]
[Mon Feb 23 13:35:41 2026]  ? psi_avgs_work+0x64/0xe0
[Mon Feb 23 13:35:41 2026]  process_one_work+0x18b/0x370
[Mon Feb 23 13:35:41 2026]  worker_thread+0x33a/0x480
[Mon Feb 23 13:35:41 2026]  ? __pfx_worker_thread+0x10/0x10
[Mon Feb 23 13:35:41 2026]  kthread+0x10b/0x220
[Mon Feb 23 13:35:41 2026]  ? __pfx_kthread+0x10/0x10
[Mon Feb 23 13:35:41 2026]  ret_from_fork+0x208/0x240
[Mon Feb 23 13:35:41 2026]  ? __pfx_kthread+0x10/0x10
[Mon Feb 23 13:35:41 2026]  ret_from_fork_asm+0x1a/0x30
[Mon Feb 23 13:35:41 2026]  </TASK>
[Mon Feb 23 13:35:41 2026] ---[ end trace 0000000000000000 ]---
[Mon Feb 23 13:36:04 2026] igb 0000:05:00.0 nic1: NETDEV WATCHDOG: CPU: 0: transmit queue 1 timed out 5333 ms
[Mon Feb 23 13:36:04 2026] igb 0000:05:00.0 nic1: Reset adapter
[Mon Feb 23 13:36:05 2026] vmbr0: port 1(nic1) entered disabled state
[Mon Feb 23 13:40:06 2026] vmbr0: port 3(fwpr100p0) entered disabled state
[Mon Feb 23 13:40:06 2026] vmbr0: port 2(veth111i0) entered disabled state
[Mon Feb 23 13:40:06 2026] vmbr0: port 6(veth114i0) entered disabled state
[Mon Feb 23 13:40:06 2026] vmbr0: port 5(veth113i0) entered disabled state
[Mon Feb 23 13:40:06 2026] vmbr0: port 4(veth112i0) entered disabled state
[Mon Feb 23 13:40:06 2026] fwpr100p0: left allmulticast mode
[Mon Feb 23 13:40:06 2026] fwpr100p0: left promiscuous mode
[Mon Feb 23 13:40:06 2026] vmbr0: port 3(fwpr100p0) entered disabled state
[Mon Feb 23 13:40:06 2026] veth111i0: left allmulticast mode
[Mon Feb 23 13:40:06 2026] veth111i0: left promiscuous mode
[Mon Feb 23 13:40:06 2026] vmbr0: port 2(veth111i0) entered disabled state
[Mon Feb 23 13:40:06 2026] veth114i0: left allmulticast mode
[Mon Feb 23 13:40:06 2026] veth114i0: left promiscuous mode
[Mon Feb 23 13:40:06 2026] vmbr0: port 6(veth114i0) entered disabled state
[Mon Feb 23 13:40:06 2026] veth113i0: left allmulticast mode
[Mon Feb 23 13:40:06 2026] veth113i0: left promiscuous mode
[Mon Feb 23 13:40:06 2026] vmbr0: port 5(veth113i0) entered disabled state
[Mon Feb 23 13:40:06 2026] veth112i0: left allmulticast mode
[Mon Feb 23 13:40:06 2026] veth112i0: left promiscuous mode
[Mon Feb 23 13:40:06 2026] vmbr0: port 4(veth112i0) entered disabled state
[Mon Feb 23 13:40:06 2026] igb 0000:05:00.0 nic1: left allmulticast mode
[Mon Feb 23 13:40:06 2026] igb 0000:05:00.0 nic1: left promiscuous mode
[Mon Feb 23 13:40:06 2026] vmbr0: port 1(nic1) entered disabled state
[Mon Feb 23 13:40:06 2026] igb 0000:05:00.0: removed PHC on nic1
[Mon Feb 23 13:40:08 2026] pci 0000:05:00.0: working around ROM BAR overlap defect
[Mon Feb 23 13:40:08 2026] pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 PCIe Endpoint
[Mon Feb 23 13:40:08 2026] pci 0000:05:00.0: BAR 0 [mem 0x00000000-0x0007ffff]
[Mon Feb 23 13:40:08 2026] pci 0000:05:00.0: BAR 2 [io  0x0000-0x001f]
[Mon Feb 23 13:40:08 2026] pci 0000:05:00.0: BAR 3 [mem 0x00000000-0x00003fff]
[Mon Feb 23 13:40:08 2026] pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
[Mon Feb 23 13:40:08 2026] pcieport 0000:00:01.5: ASPM: current common clock configuration is inconsistent, reconfiguring
[Mon Feb 23 13:40:08 2026] pci 0000:05:00.0: BAR 0 [mem 0xee700000-0xee77ffff]: assigned
[Mon Feb 23 13:40:08 2026] pci 0000:05:00.0: BAR 3 [mem 0xee780000-0xee783fff]: assigned
[Mon Feb 23 13:40:08 2026] pci 0000:05:00.0: BAR 2 [io  0xd000-0xd01f]: assigned
[Mon Feb 23 13:40:08 2026] igb 0000:05:00.0: enabling device (0000 -> 0002)
[Mon Feb 23 13:40:08 2026] igb 0000:05:00.0: added PHC on eth0
[Mon Feb 23 13:40:08 2026] igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection
[Mon Feb 23 13:40:08 2026] igb 0000:05:00.0: eth0: (PCIe:2.5Gb/s:Width x1) d8:5e:d3:15:97:da
[Mon Feb 23 13:40:08 2026] igb 0000:05:00.0: eth0: PBA No: 200214-007
[Mon Feb 23 13:40:08 2026] igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
[Mon Feb 23 13:40:08 2026] igb 0000:05:00.0 nic1: renamed from eth0
[Mon Feb 23 13:40:10 2026] vmbr0: port 1(nic1) entered blocking state
[Mon Feb 23 13:40:10 2026] vmbr0: port 1(nic1) entered disabled state
[Mon Feb 23 13:40:10 2026] igb 0000:05:00.0 nic1: entered allmulticast mode
[Mon Feb 23 13:40:10 2026] igb 0000:05:00.0 nic1: entered promiscuous mode
[Mon Feb 23 13:40:10 2026] vmbr0: port 2(veth112i0) entered blocking state
[Mon Feb 23 13:40:10 2026] vmbr0: port 2(veth112i0) entered disabled state
[Mon Feb 23 13:40:10 2026] veth112i0: entered allmulticast mode
[Mon Feb 23 13:40:10 2026] veth112i0: entered promiscuous mode
[Mon Feb 23 13:40:10 2026] vmbr0: port 2(veth112i0) entered blocking state
[Mon Feb 23 13:40:10 2026] vmbr0: port 2(veth112i0) entered forwarding state
[Mon Feb 23 13:40:10 2026] veth112i1: left allmulticast mode

It recovered because I wrote a watchdog script:
Code:
MUTTER="192.168.2.1"      # remote host to ping
IP="192.168.2.10"         # this host's own IP
GWIP="192.168.2.1"        # default gateway
NETWORK0="10.0.1.0"       # first subnet
NETWORK1="10.0.2.0"       # second subnet
BRIDGE="vmbr0"
PCI_ID="0000:05:00.0"

PING=$(/usr/bin/ping -w 5 -W 1 -c 3 "$MUTTER" 2>&1)
RC=$(echo "$PING" | /usr/bin/grep -c "100% packet loss")
if [ "$RC" -gt 0 ]; then
    set -x
    /usr/bin/echo "[NETWORKGUARD] ALARM --- ALARM --- ALARM"
    /usr/bin/echo "$PING"
    echo "[NETWORKGUARD] Hard PCI reset for $PCI_ID" >&2

    # Remember whether our own IP is still configured (checked again below)
    HAS_IP=$(/usr/sbin/ip addr | /usr/bin/grep -c "$IP")

    # 1. Take the bridge down first, to be safe
    /usr/sbin/ifdown "$BRIDGE" --force
    # /usr/sbin/ifdown nic1 --force
    # /usr/sbin/rmmod igb

    # 2. "Virtually" unplug the card from the PCI bus
    echo 1 > /sys/bus/pci/devices/$PCI_ID/remove
    /usr/bin/sleep 2

    # 3. Rescan the PCI bus so the card is detected again
    echo 1 > /sys/bus/pci/rescan
    # /usr/sbin/modprobe igb
    /usr/bin/sleep 5

    # 4. Check interface names and trigger udev
    /usr/bin/udevadm trigger --attr-match=subsystem=net
    /usr/bin/udevadm settle --timeout=10

    # 5. Bring everything back up
    /usr/sbin/ifup nic1 --force
    /usr/bin/sleep 1
    /usr/sbin/ifup "$BRIDGE" --force

    /usr/bin/echo "[NETWORKGUARD] Re-attaching orphaned guest interfaces..." >&2

    # Find all interfaces belonging to VMs (tap) or containers (veth)
    # and re-attach them to the bridge vmbr0
    for port in $(ip link show | grep -E 'tap[0-9]+i[0-9]+|veth[0-9]+i[0-9]+' | awk -F': ' '{print $2}' | cut -d'@' -f1); do
        /usr/bin/echo "  -> Attaching $port to vmbr0" >&2
        /usr/bin/ip link set "$port" master vmbr0
        /usr/bin/ip link set "$port" up
    done

    # Sanity check: is the bridge populated again?
    /usr/sbin/brctl show vmbr0 >&2

    if [ "$HAS_IP" -lt 1 ]; then
        # network is back to normal
        ip route add default via "$GWIP" >&2
    else
        echo "error"
        # /usr/sbin/route add -net $NETWORK0 netmask 255.255.255.224 eth0
        # /usr/sbin/route add -net $NETWORK1 netmask 255.255.255.224 eth0
        ip route add default via "$GWIP" >&2
    fi

    hostname=$(/usr/bin/hostname)
    date=$(/usr/bin/date -R)
    # echo "From: <networkcontrol@$hostname>"
    # echo "To: <security@meine-domain.de>"
    echo "Subject: $hostname - network card failure"
    echo "Date: $date"
    echo "Network card failed: $date"

    # Send mail via the system mail interface
    # printf "Network card failed: $date\n\nPing statistics:\n$PING" | /usr/bin/mail -s "$SUBJECT" "$RECIPIENT"
fi
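As a side note, the `grep -c "100% packet loss"` check only fires on total loss. A small sketch that extracts the actual loss percentage from the ping summary instead (pure text processing; `loss_pct` is my own hypothetical helper, not part of the original script):

```shell
#!/bin/sh
# Extract the packet-loss percentage from a ping(8) summary line.
# loss_pct is a hypothetical helper, not part of the original script.
loss_pct() {
    # The summary line looks like:
    #   5 packets transmitted, 0 received, 100% packet loss, time 4129ms
    # Field 3 (comma-separated) carries the percentage; strip non-digits.
    echo "$1" | awk -F',' '/packet loss/ { gsub(/[^0-9]/, "", $3); print $3 }'
}

SAMPLE="5 packets transmitted, 0 received, 100% packet loss, time 4129ms"
loss_pct "$SAMPLE"    # prints: 100
```

With this, a threshold test such as `[ "$(loss_pct "$PING")" -ge 50 ]` would also catch partial outages, not just a completely dead link.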
This runs from crontab every 5 minutes. The `set -x` trace from the last failure:
Code:
+ /usr/bin/echo [NETWORKGUARD] ALARM --- ALARM --- ALARM
[NETWORKGUARD] ALARM --- ALARM --- ALARM
+ /usr/bin/echo PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.

--- 192.168.2.1 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4129ms
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.

--- 192.168.2.1 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4129ms
+ echo [NETWORKGUARD] Hard PCI reset for 0000:05:00.0
[NETWORKGUARD] Hard PCI reset for 0000:05:00.0
+ /usr/sbin/ip addr
+ /usr/bin/grep -c 192.168.2.10
+ RC=1
+ /usr/sbin/ifdown vmbr0 --force
+ echo 1
+ /usr/bin/sleep 2
+ echo 1
+ /usr/sbin/sleep 5
/mnt/wichtig/server/networkguard.sh: 29: /usr/sbin/sleep: not found
+ /usr/bin/udevadm trigger --attr-match=subsystem=net
+ /usr/bin/udevadm settle --timeout=10
+ /usr/sbin/ifup nic1 --force
+ /usr/bin/sleep 1
+ /usr/sbin/ifup vmbr0 --force
+ /usr/bin/echo [NETWORKGUARD] Re-attaching orphaned guest interfaces...
[NETWORKGUARD] Re-attaching orphaned guest interfaces...
+ ip link show
+ grep -E tap[0-9]+i[0-9]+|veth[0-9]+i[0-9]+
+ awk -F:  {print $2}
+ cut -d@ -f1
+ /usr/bin/echo   -> Attaching veth112i0 to vmbr0
  -> Attaching veth112i0 to vmbr0
+ /usr/bin/ip link set veth112i0 master vmbr0
+ /usr/bin/ip link set veth112i0 up
+ /usr/bin/echo   -> Attaching veth112i1 to vmbr0
  -> Attaching veth112i1 to vmbr0
+ /usr/bin/ip link set veth112i1 master vmbr0
+ /usr/bin/ip link set veth112i1 up
+ /usr/bin/echo   -> Attaching veth113i0 to vmbr0
  -> Attaching veth113i0 to vmbr0
+ /usr/bin/ip link set veth113i0 master vmbr0
+ /usr/bin/ip link set veth113i0 up
+ /usr/bin/echo   -> Attaching veth113i1 to vmbr0
  -> Attaching veth113i1 to vmbr0
+ /usr/bin/ip link set veth113i1 master vmbr0
+ /usr/bin/ip link set veth113i1 up
+ /usr/bin/echo   -> Attaching veth114i0 to vmbr0
  -> Attaching veth114i0 to vmbr0
+ /usr/bin/ip link set veth114i0 master vmbr0
+ /usr/bin/ip link set veth114i0 up
+ /usr/bin/echo   -> Attaching veth114i1 to vmbr0
  -> Attaching veth114i1 to vmbr0
+ /usr/bin/ip link set veth114i1 master vmbr0
+ /usr/bin/ip link set veth114i1 up
+ /usr/bin/echo   -> Attaching veth111i0 to vmbr0
  -> Attaching veth111i0 to vmbr0
+ /usr/bin/ip link set veth111i0 master vmbr0
+ /usr/bin/ip link set veth111i0 up
+ /usr/bin/echo   -> Attaching veth111i1 to vmbr0
  -> Attaching veth111i1 to vmbr0
+ /usr/bin/ip link set veth111i1 master vmbr0
+ /usr/bin/ip link set veth111i1 up
+ /usr/bin/echo   -> Attaching veth101i0 to vmbr0
  -> Attaching veth101i0 to vmbr0
+ /usr/bin/ip link set veth101i0 master vmbr0
+ /usr/bin/ip link set veth101i0 up
+ /usr/sbin/brctl show vmbr0
bridge name    bridge id        STP enabled    interfaces
vmbr0        8000.d85ed31597da    no        nic1
                            veth101i0
                            veth111i0
                            veth111i1
                            veth112i0
                            veth112i1
                            veth113i0
                            veth113i1
                            veth114i0
                            veth114i1
+ [ 1 -lt 1 ]
+ echo error
error
+ ip route add default gw 192.168.2.1
Error: either "to" is duplicate, or "gw" is garbage.
+ /usr/bin/hostname
+ hostname=proxmox
+ /usr/bin/date -R
+ date=Tue, 24 Feb 2026 07:55:11 +0100
+ echo Subject: proxmox - network card failure
+ echo Date: Tue, 24 Feb 2026 07:55:11 +0100
Date: Tue, 24 Feb 2026 07:55:11 +0100
+ echo Network card failed : Tue, 24 Feb 2026 07:55:11 +0100
Network card failed : Tue, 24 Feb 2026 07:55:11 +0100
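The `Error: either "to" is duplicate, or "gw" is garbage.` in the trace happens because `gw` is legacy `route(8)` syntax; `ip(8)` from iproute2 expects `via`. A minimal sketch of the corrected call (addresses taken from the script; needs root, so not something to run blindly):

```shell
# Legacy net-tools syntax -- ip(8) rejects the "gw" keyword:
#   route add default gw 192.168.2.1
# iproute2 syntax for the same default route:
ip route add default via 192.168.2.1
# Optionally pinned to the bridge interface:
ip route add default via 192.168.2.1 dev vmbr0
```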

Before that I tried increasing the ring buffers:
Code:
ethtool -G nic1 rx 4096
ethtool -G nic1 tx 4096

That didn't help.
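For anyone retrying the ring-buffer route: it may help to check the NIC's supported maxima first. This is just generic ethtool usage, not something confirmed to fix this issue:

```shell
# Query current and "Pre-set maximum" ring sizes for the interface
ethtool -g nic1
# Raise RX and TX rings in one call, up to the reported maximums
ethtool -G nic1 rx 4096 tx 4096
```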

I also read about this kernel parameter:
Code:
pcie_aspm.policy=performance

But I don't want the increased energy consumption.
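Since the log starts with `PCIe link lost`, ASPM could be checked and, if the kernel supports it, disabled for just this device instead of globally. The per-device sysfs attribute below exists on recent kernels (roughly 5.5+) when the link supports that state; treat it as an assumption, not verified on this board:

```shell
# Show the active global ASPM policy (the bracketed entry)
cat /sys/module/pcie_aspm/parameters/policy
# Show which ASPM states are negotiated on the NIC's link
lspci -vv -s 0000:05:00.0 | grep -i aspm
# Disable only L1 ASPM for this one device, leaving the global policy alone
echo 0 > /sys/bus/pci/devices/0000:05:00.0/link/l1_aspm
```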

With OMV I had no problems, and someone else who uses the same Intel Corporation I210 Gigabit Network Connection has no problems with Proxmox and kernel 6.8.8.
 