Crash - PVE 8.0.3 Call Trace analysis


New Member
Jul 10, 2023
I upgraded from PVE 7 to PVE 8. One day later, the server crashed, and I found this crash trace in /var/log/syslog.

2023-07-10T22:22:46.881144+08:00 pve kernel: [305478.462359] Hardware name: To Be Filled By O.E.M. B660M Pro RS/B660M Pro RS, BIOS 2.01 11/11/2021
2023-07-10T22:22:46.881165+08:00 pve kernel: [305478.462365] Code: ff 00 00 00 00 eb 41 89 c6 48 8b 44 f5 a8 48 83 fe 04 0f 87 52 05 00 00 48 01 d0 48 89 44 f5 a8 4c 89 ce 48 8b 03 4c 89 68 08 <49> 89 45 00 48 8b 06 48 89 58 08 48 89 03 48 89 73 08 48 89 1e 4d
2023-07-10T22:22:46.881166+08:00 pve kernel: [305478.462368] RSP: 0018:ffffa039c079fa98 EFLAGS: 00010002
2023-07-10T22:22:46.881167+08:00 pve kernel: [305478.462371] RAX: ffff93c806052020 RBX: ffffde4f91c97608 RCX: 0000000000000002
2023-07-10T22:22:46.881167+08:00 pve kernel: [305478.462372] RDX: 0000000000000015 RSI: ffffa039c079fbf8 RDI: ffffde4f91c97600
2023-07-10T22:22:46.881168+08:00 pve kernel: [305478.462374] RBP: ffffa039c079fb78 R08: 0000000000000000 R09: ffffa039c079fae8
2023-07-10T22:22:46.881168+08:00 pve kernel: [305478.462375] R10: 0000000000000020 R11: ffffa039c079fdc8 R12: ffff93c806052020
2023-07-10T22:22:46.881169+08:00 pve kernel: [305478.462376] R13: fbffde4f91c97648 R14: 0000000000000015 R15: 0000000000000015
2023-07-10T22:22:46.881169+08:00 pve kernel: [305478.462378] FS:  0000000000000000(0000) GS:ffff93cb9f600000(0000) knlGS:0000000000000000
2023-07-10T22:22:46.881170+08:00 pve kernel: [305478.462379] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
2023-07-10T22:22:46.881172+08:00 pve kernel: [305478.462381] CR2: 00007f92efc35afc CR3: 00000002da810003 CR4: 0000000000772ef0
2023-07-10T22:22:46.881173+08:00 pve kernel: [305478.462382] PKRU: 55555554
2023-07-10T22:22:46.881173+08:00 pve kernel: [305478.462383] Call Trace:
2023-07-10T22:22:46.881174+08:00 pve kernel: [305478.462385]  <TASK>
2023-07-10T22:22:46.881174+08:00 pve kernel: [305478.462387]  shrink_lruvec+0x675/0x1190
2023-07-10T22:22:46.881175+08:00 pve kernel: [305478.462391]  ? mem_cgroup_iter+0x122/0x2f0
2023-07-10T22:22:46.881176+08:00 pve kernel: [305478.462394]  shrink_node+0x2a1/0x710
2023-07-10T22:22:46.881177+08:00 pve kernel: [305478.462396]  balance_pgdat+0x375/0x820
2023-07-10T22:22:46.881177+08:00 pve kernel: [305478.462399]  kswapd+0x208/0x3e0
2023-07-10T22:22:46.881178+08:00 pve kernel: [305478.462400]  ? destroy_sched_domains_rcu+0x30/0x30
2023-07-10T22:22:46.881178+08:00 pve kernel: [305478.462404]  ? balance_pgdat+0x820/0x820
2023-07-10T22:22:46.881179+08:00 pve kernel: [305478.462405]  kthread+0xee/0x120
2023-07-10T22:22:46.881179+08:00 pve kernel: [305478.462408]  ? kthread_complete_and_exit+0x20/0x20
2023-07-10T22:22:46.881181+08:00 pve kernel: [305478.462410]  ret_from_fork+0x1f/0x30
2023-07-10T22:22:46.881181+08:00 pve kernel: [305478.462414]  </TASK>
2023-07-10T22:22:46.881182+08:00 pve kernel: [305478.462415] Modules linked in: dm_snapshot iptable_mangle nvme_fabrics act_police cls_basic sch_ingress sch_htb tcp_diag inet_diag xt_nat xt_tcpudp nft_chain_nat nft_compat cmac nls_utf8 cifs cifs_arc4 cifs_md4 nfsv3 nfs_acl rpcsec_gss_krb5 auth_rpcgss nfsv4 nfs lockd grace fscache netfs veth ebtable_filter ebtables ip_set sctp ip6_udp_tunnel udp_tunnel nf_conntrack_netlink xfrm_user xfrm_algo nf_tables overlay bonding tls softdog ip6table_raw ip6table_filter ip6_tables iptable_raw xt_conntrack iptable_filter xt_MASQUERADE xt_addrtype iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nfnetlink_log nf_defrag_ipv4 nfnetlink bpfilter snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio snd_sof_pci_intel_tgl snd_sof_intel_hda_common soundwire_intel soundwire_generic_allocation soundwire_cadence snd_sof_intel_hda snd_sof_pci intel_rapl_msr snd_sof_xtensa_dsp intel_rapl_common intel_tcc_cooling sn
2023-07-10T22:29:10.064093+08:00 pve systemd-modules-load[573]: Inserted module 'vfio'

The call trace looks abnormal.
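For reference, one way to pull just the trace block out of /var/log/syslog for posting is an awk range match (a sketch; `extract_trace` is a hypothetical helper name, and the sample file below is a cut-down stub of the log above):

```shell
# Print each line from "Call Trace:" through "</TASK>", inclusive.
extract_trace() {
  awk '/Call Trace:/,/<\/TASK>/' "$1"
}

# Deterministic demo on an inline sample:
cat > /tmp/sample.log <<'EOF'
kernel: [305478.462383] Call Trace:
kernel: [305478.462385]  <TASK>
kernel: [305478.462387]  shrink_lruvec+0x675/0x1190
kernel: [305478.462414]  </TASK>
EOF
extract_trace /tmp/sample.log
```

On the real machine you would run it against /var/log/syslog instead of the stub.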

How can I find out what is wrong with the server? Is it a hardware issue?
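Since the trace starts in kswapd (the kernel's memory-reclaim thread), flaky RAM is a common suspect. A first pass could look like this (a sketch with a hypothetical helper name; run as root on the host, and note that `-b -1` assumes journald keeps persistent logs):

```shell
# Hypothetical first-checks helper for a kswapd crash.
crash_first_checks() {
  # Kernel log from the previous (crashed) boot; requires
  # Storage=persistent in /etc/systemd/journald.conf:
  journalctl -k -b -1 --no-pager | tail -n 100

  # Machine-check exceptions (MCEs) would point at hardware:
  dmesg | grep -iE 'mce|machine check' || echo "no MCE lines found"

  # The trace is in the reclaim path, so record memory/swap state too:
  free -h
  swapon --show
}
```

A full pass of memtest86+ from the boot menu would give a more definitive answer on the RAM itself.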

Below is the output of lscpu:

root@pve:~# lscpu
Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         39 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  8
  On-line CPU(s) list:   0-7
Vendor ID:               GenuineIntel
  BIOS Vendor ID:        Intel(R) Corporation
  Model name:            12th Gen Intel(R) Core(TM) i3-12100
    BIOS Model name:     12th Gen Intel(R) Core(TM) i3-12100 To Be Filled By O.E.M. CPU @ 3.2GHz
    BIOS CPU family:     206
    CPU family:          6
    Model:               151
    Thread(s) per core:  2
    Core(s) per socket:  4
    Socket(s):           1
    Stepping:            5
    CPU(s) scaling MHz:  46%
    CPU max MHz:         5500.0000
    CPU min MHz:         800.0000
    BogoMIPS:            6604.80
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe sys
                         call nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_k
                         nown_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt t
                         sc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ib
                         rs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed
                         adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts
                         hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b f
                         srm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization features:
  Virtualization:        VT-x
Caches (sum of all):    
  L1d:                   192 KiB (4 instances)
  L1i:                   128 KiB (4 instances)
  L2:                    5 MiB (4 instances)
  L3:                    12 MiB (1 instance)
  NUMA node(s):          1
  NUMA node0 CPU(s):     0-7
  Itlb multihit:         Not affected
  L1tf:                  Not affected
  Mds:                   Not affected
  Meltdown:              Not affected
  Mmio stale data:       Not affected
  Retbleed:              Not affected
  Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
  Srbds:                 Not affected
  Tsx async abort:       Not affected
Can you post the output of journalctl --since="2023-07-10 22:12:46"?
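To capture that window to a file for pasting, something like the following should work (a sketch; `capture_window` is a hypothetical helper, and the --until bound is an assumption to keep the paste small):

```shell
# Capture a journal time window to a file for attaching to the thread.
capture_window() {
  journalctl --since="$1" --until="$2" --no-pager > /tmp/journal-crash.txt
}

# e.g. capture_window "2023-07-10 22:12:46" "2023-07-10 22:30:00"
```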

