kernel BUG at lib/dynamic_queue_limits.c:27!

I can confirm this as well. I'm running four I225-V rev03 NICs in an N5105 box. It started acting up all of a sudden, after running rock solid for several months since the PVE 8 upgrade. I've been on 6.2.16-15-pve and also tried 6.1.10-1-pve, but both are acting up. The funny thing is that one of the 2.5 GbE ports (WAN) is running fine at 1 Gbps, while the other three ports all cause the kernel to spew tons of errors. It is now semi-stable: one of the ports suddenly negotiated only 100 Mbps and has been stable since. And yes, I have tried plenty of CAT6e cables and switches ;-)
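For reference, checking the negotiated speed and the driver/firmware versions on an affected port looks roughly like this (just a sketch; enp5s0 is the interface name from the log below, substitute your own):

Code:
# show negotiated link speed, duplex and link state
ethtool enp5s0
# show driver name, driver version and NIC firmware version
ethtool -i enp5s0
# confirm which kernel is currently booted
uname -r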

Are the I225-V chips burning out? Or what are we talking about here?


Code:
[Tue Oct 10 11:46:09 2023] ------------[ cut here ]------------
[Tue Oct 10 11:46:09 2023] NETDEV WATCHDOG: enp5s0 (igc): transmit queue 2 timed out
[Tue Oct 10 11:46:09 2023] WARNING: CPU: 3 PID: 0 at net/sched/sch_generic.c:525 dev_watchdog+0x23a/0x250
[Tue Oct 10 11:46:09 2023] Modules linked in: wireguard curve25519_x86_64 libchacha20poly1305 chacha_x86_64 poly1305_x86_64 libcurve25519_generic libchacha ip6_udp_tunnel udp_tunnel nf_conntrack_netlink xt_nat xt_tcpudp xt_conntrack xt_MASQUERADE xfrm_user xfrm_algo iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_addrtype overlay veth ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables iptable_filter bpfilter nf_tables scsi_transport_iscsi softdog binfmt_misc bonding tls nfnetlink_log nfnetlink snd_hda_codec_hdmi snd_sof_pci_intel_icl snd_sof_intel_hda_common soundwire_intel soundwire_generic_allocation soundwire_cadence snd_sof_intel_hda snd_sof_pci snd_sof_xtensa_dsp snd_sof snd_sof_utils snd_soc_hdac_hda snd_hda_ext_core x86_pkg_temp_thermal i915 snd_soc_acpi_intel_match intel_powerclamp snd_soc_acpi soundwire_bus snd_soc_core snd_compress coretemp ac97_bus snd_pcm_dmaengine kvm_intel snd_hda_intel drm_buddy snd_intel_dspcfg ttm snd_intel_sdw_acpi kvm
[Tue Oct 10 11:46:09 2023]  snd_hda_codec drm_display_helper irqbypass crct10dif_pclmul snd_hda_core cec snd_hwdep polyval_generic rc_core ghash_clmulni_intel sha512_ssse3 processor_thermal_device_pci_legacy intel_rapl_msr processor_thermal_device processor_thermal_rfim aesni_intel drm_kms_helper processor_thermal_mbox crypto_simd cryptd processor_thermal_rapl intel_cstate snd_pcm wmi_bmof snd_timer cmdlinepart pcspkr snd intel_rapl_common i2c_algo_bit mei_me spi_nor int340x_thermal_zone syscopyarea sysfillrect mtd ee1004 8250_dw soundcore mei sysimgblt intel_soc_dts_iosf acpi_pad acpi_tad mac_hid zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) nfsd spl(O) vhost_net vhost auth_rpcgss vhost_iotlb tap nfs_acl lockd grace drm efi_pstore sunrpc dmi_sysfs ip_tables x_tables autofs4 btrfs blake2b_generic xor raid6_pq simplefb cdc_ether usbnet r8152 uas usb_storage mii dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio libcrc32c spi_pxa2xx_platform dw_dmac dw_dmac_core nvme
[Tue Oct 10 11:46:09 2023]  xhci_pci nvme_core xhci_pci_renesas i2c_i801 spi_intel_pci ahci nvme_common crc32_pclmul spi_intel libahci xhci_hcd i2c_smbus igc intel_lpss_pci intel_lpss idma64 video wmi pinctrl_jasperlake
[Tue Oct 10 11:46:09 2023] CPU: 3 PID: 0 Comm: swapper/3 Tainted: P           O       6.2.16-15-pve #1
[Tue Oct 10 11:46:09 2023] Hardware name: Default string Default string/CW-N6000, BIOS 5.19 04/25/2022
[Tue Oct 10 11:46:09 2023] RIP: 0010:dev_watchdog+0x23a/0x250
[Tue Oct 10 11:46:09 2023] Code: 00 e9 2b ff ff ff 48 89 df c6 05 ac 5d 7d 01 01 e8 bb 08 f8 ff 44 89 f1 48 89 de 48 c7 c7 90 87 e0 a5 48 89 c2 e8 56 91 30 ff <0f> 0b e9 1c ff ff ff 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00
[Tue Oct 10 11:46:09 2023] RSP: 0018:ffffb417801dce38 EFLAGS: 00010246
[Tue Oct 10 11:46:09 2023] RAX: 0000000000000000 RBX: ffff9bbd52484000 RCX: 0000000000000000
[Tue Oct 10 11:46:09 2023] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[Tue Oct 10 11:46:09 2023] RBP: ffffb417801dce68 R08: 0000000000000000 R09: 0000000000000000
[Tue Oct 10 11:46:09 2023] R10: 0000000000000000 R11: 0000000000000000 R12: ffff9bbd524844c8
[Tue Oct 10 11:46:09 2023] R13: ffff9bbd5248441c R14: 0000000000000002 R15: 0000000000000000
[Tue Oct 10 11:46:09 2023] FS:  0000000000000000(0000) GS:ffff9bc0aff80000(0000) knlGS:0000000000000000
[Tue Oct 10 11:46:09 2023] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Tue Oct 10 11:46:09 2023] CR2: 0000000820c5d558 CR3: 0000000107ce2000 CR4: 0000000000352ee0
[Tue Oct 10 11:46:09 2023] Call Trace:
[Tue Oct 10 11:46:09 2023]  <IRQ>
[Tue Oct 10 11:46:09 2023]  ? show_regs+0x6d/0x80
[Tue Oct 10 11:46:09 2023]  ? __warn+0x89/0x160
[Tue Oct 10 11:46:09 2023]  ? dev_watchdog+0x23a/0x250
[Tue Oct 10 11:46:09 2023]  ? report_bug+0x17e/0x1b0
[Tue Oct 10 11:46:09 2023]  ? handle_bug+0x46/0x90
[Tue Oct 10 11:46:09 2023]  ? exc_invalid_op+0x18/0x80
[Tue Oct 10 11:46:09 2023]  ? asm_exc_invalid_op+0x1b/0x20
[Tue Oct 10 11:46:09 2023]  ? dev_watchdog+0x23a/0x250
[Tue Oct 10 11:46:09 2023]  ? dev_watchdog+0x23a/0x250
[Tue Oct 10 11:46:09 2023]  ? __pfx_dev_watchdog+0x10/0x10
[Tue Oct 10 11:46:09 2023]  call_timer_fn+0x29/0x160
[Tue Oct 10 11:46:09 2023]  ? __pfx_dev_watchdog+0x10/0x10
[Tue Oct 10 11:46:09 2023]  __run_timers+0x259/0x310
[Tue Oct 10 11:46:09 2023]  run_timer_softirq+0x1d/0x40
[Tue Oct 10 11:46:09 2023]  __do_softirq+0xd6/0x346
[Tue Oct 10 11:46:09 2023]  ? hrtimer_interrupt+0x11f/0x250
[Tue Oct 10 11:46:09 2023]  __irq_exit_rcu+0xa2/0xd0
[Tue Oct 10 11:46:09 2023]  irq_exit_rcu+0xe/0x20
[Tue Oct 10 11:46:09 2023]  sysvec_apic_timer_interrupt+0x92/0xd0
[Tue Oct 10 11:46:09 2023]  </IRQ>
[Tue Oct 10 11:46:09 2023]  <TASK>
[Tue Oct 10 11:46:09 2023]  asm_sysvec_apic_timer_interrupt+0x1b/0x20
[Tue Oct 10 11:46:09 2023] RIP: 0010:native_safe_halt+0xb/0x10
[Tue Oct 10 11:46:09 2023] Code: a0 60 65 a6 e8 c6 da 7d ff e9 3e ff ff ff cc 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 66 90 0f 00 2d 79 b6 37 00 fb f4 <c3> cc cc cc cc 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 66
[Tue Oct 10 11:46:09 2023] RSP: 0018:ffffb4178013bde0 EFLAGS: 00000246
[Tue Oct 10 11:46:09 2023] RAX: 0000000000004000 RBX: ffff9bbd411d3864 RCX: 0000000000000000
[Tue Oct 10 11:46:09 2023] RDX: 0000000000000001 RSI: ffff9bbd411d3800 RDI: 0000000000000001
[Tue Oct 10 11:46:09 2023] RBP: ffffb4178013bdf0 R08: 0000000000000000 R09: 0000000000000000
[Tue Oct 10 11:46:09 2023] R10: 0000000000000000 R11: 0000000000000000 R12: ffff9bbd411d3864
[Tue Oct 10 11:46:09 2023] R13: 0000000000000003 R14: ffffffffa68d6ca0 R15: ffff9bc0aff80000
[Tue Oct 10 11:46:09 2023]  ? acpi_idle_do_entry+0x82/0xc0
[Tue Oct 10 11:46:09 2023]  acpi_idle_enter+0xbb/0x180
[Tue Oct 10 11:46:09 2023]  cpuidle_enter_state+0x9a/0x6f0
[Tue Oct 10 11:46:09 2023]  cpuidle_enter+0x2e/0x50
[Tue Oct 10 11:46:09 2023]  do_idle+0x216/0x2a0
[Tue Oct 10 11:46:09 2023]  cpu_startup_entry+0x1d/0x20
[Tue Oct 10 11:46:09 2023]  start_secondary+0x122/0x160
[Tue Oct 10 11:46:09 2023]  secondary_startup_64_no_verify+0xe5/0xeb
[Tue Oct 10 11:46:09 2023]  </TASK>
[Tue Oct 10 11:46:09 2023] ---[ end trace 0000000000000000 ]---
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: Register Dump
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: Register Name   Value
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: CTRL            181c0641
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: STATUS          40280693
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: CTRL_EXT        10000040
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: MDIC            180a3c00
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: ICR             00000081
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: RCTL            0440803a
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: RDLEN[0-3]      00001000 00001000 00001000 00001000
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: RDH[0-3]        0000002c 00000000 000000bb 00000002
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: RDT[0-3]        0000002b 000000ff 000000ba 00000001
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: RXDCTL[0-3]     02040808 02040808 02040808 02040808
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: RDBAL[0-3]      11c70000 11c73000 11c76000 11c79000
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: RDBAH[0-3]      00000001 00000001 00000001 00000001
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: TCTL            a503f0fa
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: TDBAL[0-3]      11c5e000 11c63000 11c68000 11c6d000
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: TDBAH[0-3]      00000001 00000001 00000001 00000001
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: TDLEN[0-3]      00001000 00001000 00001000 00001000
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: TDH[0-3]        00000008 00000010 0000002b 00000035
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: TDT[0-3]        00000008 00000010 0000002d 00000036
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: TXDCTL[0-3]     02100108 02100108 02100108 02100108
[Tue Oct 10 11:46:09 2023] igc 0000:05:00.0 enp5s0: Reset adapter
[Tue Oct 10 11:46:10 2023] vmbr0: port 1(enp5s0) entered disabled state
 
Hi, could you try installing proxmox-kernel-6.2.16-16-pve (which is currently available in pvetest [1]) and see whether the situation improves? This kernel version contains a number of fixes related to the igc module which are not part of earlier versions.

[1] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_test_repo
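For reference, on PVE 8 (Debian Bookworm) enabling pvetest and pulling the kernel should look roughly like this (a sketch only; the admin guide linked above is authoritative):

Code:
# enable the pvetest repository (PVE 8 is based on Debian Bookworm)
echo "deb http://download.proxmox.com/debian/pve bookworm pvetest" > /etc/apt/sources.list.d/pvetest.list
apt update
# install the test kernel, then reboot into it
apt install proxmox-kernel-6.2.16-16-pve
reboot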

OK, running 6.2.16-16 now. On reboot, one of the LAN interfaces negotiated only 100 Mbps again; I used ethtool to bump it back up to 1 Gbps. I'm not expecting this to fail anytime soon, but I will report back if something unexpected occurs :D
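For anyone wanting to do the same, forcing the link back up was along these lines (interface name is just an example; with autoneg kept on, ethtool restricts the advertised modes to 1000/full, which 1000BASE-T requires):

Code:
# re-advertise only 1 Gbps full duplex (autoneg stays on, as 1000BASE-T requires it)
ethtool -s enp5s0 speed 1000 duplex full autoneg on
# verify the negotiated speed afterwards
ethtool enp5s0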
 
