[SOLVED] Kernel trace after updates and reboot

JensF

I just installed the latest updates (PVE 8, no-subscription repo) and am getting the following trace:
Code:
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: ------------[ cut here ]------------
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: Use slab_build_skb() instead
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: WARNING: CPU: 4 PID: 0 at net/core/skbuff.c:347 __build_skb_around+0x11f/0x130
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: Modules linked in: xt_conntrack xt_tcpudp nft_chain_nat xt_nat nf_nat nf_conntrack nf_defrag_ipv6 cfg80211 nf_defrag_ipv4 nft_compat veth ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables iptable_filter bpfilter nf_tables nvme_fabrics bonding tls sunrpc nfnetlink_log nfnetlink binfmt_misc ipmi_ssif intel_rapl_msr intel_rapl_common sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul polyval_clmulni polyval_generic mgag200 ghash_clmulni_intel sha512_ssse3 drm_shmem_helper drm_kms_helper aesni_intel i2c_algo_bit ipmi_si mei_me crypto_simd ipmi_devintf mei cryptd rapl syscopyarea joydev input_leds sysfillrect mac_hid dcdbas sysimgblt intel_cstate acpi_power_meter pcspkr ipmi_msghandler zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) vhost_net vhost vhost_iotlb tap drm efi_pstore dmi_sysfs ip_tables x_tables autofs4 btrfs blake2b_generic xor raid6_pq libcrc32c
Aug 17 17:53:58 HPP-PVE-SRV1 kernel:  simplefb hid_generic usbmouse usbkbd usbhid hid nvme crc32_pclmul ahci ehci_pci nvme_core lpc_ich libahci ehci_hcd megaraid_sas tg3 nvme_common wmi
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: CPU: 4 PID: 0 Comm: swapper/4 Tainted: P O 6.2.16-8-pve #1
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: Hardware name: Dell Inc. PowerEdge R720/0W7JN5, BIOS 2.9.0 12/06/2019
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: RIP: 0010:__build_skb_around+0x11f/0x130
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: Code: 37 d7 66 ff 48 39 c3 0f 84 1d ff ff ff 0f 0b 48 89 c3 e9 13 ff ff ff 48 c7 c7 c1 12 20 bd c6 05 8a 2f 88 01 01 e8 c1 67 3b ff <0f> 0b eb bc 66 66 2e 0f 1f 84 00 00 00 00 00 66 90 90 90 90 90 90
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: RSP: 0018:ffffbccd88384d30 EFLAGS: 00010246
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: RAX: 0000000000000000 RBX: ffff92a29224c000 RCX: 0000000000000000
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: RBP: ffffbccd88384d48 R08: 0000000000000000 R09: 0000000000000000
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: R13: ffff92830724f400 R14: 0000000000000000 R15: ffff92831b5d89c0
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: FS: 0000000000000000(0000) GS:ffff92a1ff880000(0000) knlGS:0000000000000000
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: CR2: 0000562b64435078 CR3: 0000002755a10002 CR4: 00000000001706e0
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: Call Trace:
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: <IRQ>
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: __build_skb+0x4e/0x70
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: build_skb+0x17/0xc0
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: tg3_poll_work+0x638/0xf90 [tg3]
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: tg3_poll_msix+0x46/0x1b0 [tg3]
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: __napi_poll+0x33/0x1f0
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: net_rx_action+0x180/0x2d0
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: ? ktime_get+0x48/0xc0
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: ? __napi_schedule+0x71/0xa0
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: __do_softirq+0xd9/0x346
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: ? handle_irq_event+0x52/0x80
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: ? handle_edge_irq+0xda/0x250
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: __irq_exit_rcu+0xa2/0xd0
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: irq_exit_rcu+0xe/0x20
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: common_interrupt+0xa4/0xb0
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: </IRQ>
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: <TASK>
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: asm_common_interrupt+0x27/0x40
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: RIP: 0010:cpuidle_enter_state+0xde/0x6f0
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: Code: 20 b7 43 e8 14 73 4a ff 8b 53 04 49 89 c7 0f 1f 44 00 00 31 ff e8 42 7b 49 ff 80 7d d0 00 0f 85 eb 00 00 00 fb 0f 1f 44 00 00 <45> 85 f6 0f 88 12 02 00 00 4d 63 ee 49 83 fd 09 0f 87 c7 04 00 00
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: RSP: 0018:ffffbccd80147e38 EFLAGS: 00000246
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: RAX: 0000000000000000 RBX: ffffdcad7f881208 RCX: 0000000000000000
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: RDX: 0000000000000004 RSI: 0000000000000000 RDI: 0000000000000000
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: RBP: ffffbccd80147e88 R08: 0000000000000000 R09: 0000000000000000
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffffffffbdcc3640
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: R13: 0000000000000004 R14: 0000000000000004 R15: 00000006d650d312
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: ? cpuidle_enter_state+0xce/0x6f0
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: cpuidle_enter+0x2e/0x50
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: do_idle+0x216/0x2a0
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: cpu_startup_entry+0x1d/0x20
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: start_secondary+0x122/0x160
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: secondary_startup_64_no_verify+0xe5/0xeb
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: </TASK>
Aug 17 17:53:58 HPP-PVE-SRV1 kernel: ---[ end trace 0000000000000000 ]---
Everything appears to be working; at least I can't detect any problem.
Does anyone have an idea?
Edit: For the sake of completeness:
Code:
proxmox-ve: 8.0.2 (running kernel: 6.2.16-8-pve)
pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
pve-kernel-6.2: 8.0.5
proxmox-kernel-helper: 8.0.3
proxmox-kernel-6.2.16-8-pve: 6.2.16-8
proxmox-kernel-6.2: 6.2.16-8
proxmox-kernel-6.2.16-6-pve: 6.2.16-7
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph-fuse: 17.2.6-pve1+3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.4
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.7
libpve-guest-common-perl: 5.0.4
libpve-http-server-perl: 5.0.4
libpve-rs-perl: 0.8.5
libpve-storage-perl: 8.0.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.2-1
proxmox-backup-file-restore: 3.0.2-1
proxmox-kernel-helper: 8.0.3
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.0.6
pve-cluster: 8.0.3
pve-container: 5.0.4
pve-docs: 8.0.4
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.7-1
pve-ha-manager: 4.0.2
pve-i18n: 3.0.5
pve-qemu-kvm: 8.0.2-4
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1
 
Hi,
a quick look at the kernel code shows that this is "only" a deprecation warning aimed at developers:
Code:
/* Caller must provide SKB that is memset cleared */
static void __build_skb_around(struct sk_buff *skb, void *data,
                   unsigned int frag_size)
{
    unsigned int size = frag_size;

    /* frag_size == 0 is considered deprecated now. Callers
     * using slab buffer should use slab_build_skb() instead.
     */
    if (WARN_ONCE(size == 0, "Use slab_build_skb() instead"))
        data = __slab_build_skb(skb, data, &size);

    __finalize_skb_around(skb, data, size);
}

So yes, nothing should be broken. I have reported it upstream anyway: https://lkml.org/lkml/2023/8/18/156
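For context, the warning is only about which helper a driver's RX path uses to construct the SKB: page-fragment backed buffers keep using build_skb(), while slab/kmalloc()-backed buffers (the frag_size == 0 case that trips the WARN_ONCE above) are supposed to go through slab_build_skb(). A minimal sketch of that kind of caller-side change (my own illustration, not the actual tg3 patch) could look like this:
Code:
#include <linux/skbuff.h>

/*
 * Illustration only: pick the right SKB constructor depending on how the
 * RX data buffer was allocated. frag_size > 0 means a page-fragment
 * backed buffer; frag_size == 0 used to mean "slab buffer" and is exactly
 * what now triggers the "Use slab_build_skb() instead" warning.
 */
static struct sk_buff *rx_build_skb(void *data, unsigned int frag_size)
{
	if (frag_size)
		return build_skb(data, frag_size);

	/* slab/kmalloc()-backed buffer */
	return slab_build_skb(data);
}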
 
After updating to kernel 6.5.11-3, this trace no longer appears.
So far, no problems have shown up with the new kernel.
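In case anyone wants to do the same: at that point the 6.5 kernel was still opt-in on PVE 8. Assuming the meta-package follows the proxmox-kernel-6.2 naming seen in the version list above (please check the official opt-in announcement first), the switch looks roughly like this:
Code:
apt update
apt install proxmox-kernel-6.5
reboot
# after the reboot, confirm the running kernel
uname -r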
 
