Proxmox 8.1 - kernel 6.5.11-4 - rcu_sched stall CPU

Nov 23, 2023
I coincidentally upgraded and patched my system this afternoon, got Proxmox 8.1 right away, and immediately ran into the CPU stall error that has been reported before:

https://forum.proxmox.com/threads/rcu-info-rcu_sched-self-detected-stall-on-cpu.109112/
https://forum.proxmox.com/threads/rcu_sched-self-detected-stall-on-cpu.68399/
https://forum.proxmox.com/threads/rcu_sched-self-detected-stall-on-cpu.111439/

and more.

I am running four servers with

- Intel(R) Xeon(R) Silver 4310
- 8 x 4TB Samsung SSD each (as ceph OSDs)
- 768 GB RAM
- 2 x 10 Gbit LACP trunk for the storage network

Utilization is:

- CPU < 50%
- Memory < 50%
- Storage throughput ~250-500 MiB/s (it can do 2 GiB/s)

Running on the new kernel 6.5.11-4 immediately crashed all VMs that had even a little load. I tried all aio settings with iothreads; that did not help at all.

In the end I downgraded to kernel 6.2.16-19


and so far it works again. I migrated several VMs from hosts with the NEW kernel to the host with the OLD kernel; so far no problems.

There seems to be a problem with the new 6.5.11-4 kernel.

For people with the same problem: I pinned the older kernel like so

Code:
#> proxmox-boot-tool kernel pin 6.2.16-19-pve
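For completeness, a few related `proxmox-boot-tool` commands for verifying and later undoing the pin. This is a sketch from memory; verify the exact subcommands with `proxmox-boot-tool help` on your host.

```shell
# List available kernels and show which one is currently pinned
proxmox-boot-tool kernel list

# Later, once a fixed kernel is installed, remove the pin again
# so the newest kernel boots by default
proxmox-boot-tool kernel unpin

# If in doubt, sync the boot loader configuration explicitly
proxmox-boot-tool refresh
```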
 
Can you please provide a journal log from the host for the time the VMs crashed?

Code:
journalctl --since "2023-11-23" --until "2023-11-24" >| $(hostname)-journal.txt
 
One machine
------
Nov 23 23:33:46 GUEST-05.FQDN kernel: rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
Nov 23 23:33:46 GUEST-05.FQDN kernel: rcu: 6-...!: (3 GPs behind) idle=ad4/0/0x0 softirq=97561/97571 fqs=1 (false positive?)
Nov 23 23:33:46 GUEST-05.FQDN kernel: (detected by 1, t=60002 jiffies, g=281213, q=13840)
Nov 23 23:33:46 GUEST-05.FQDN kernel: Sending NMI from CPU 1 to CPUs 6:
Nov 23 23:33:46 GUEST-05.FQDN kernel: NMI backtrace for cpu 6 skipped: idling at native_safe_halt+0xe/0x20
Nov 23 23:33:46 GUEST-05.FQDN kernel: rcu: rcu_sched kthread timer wakeup didn't happen for 59996 jiffies! g281213 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
Nov 23 23:33:46 GUEST-05.FQDN kernel: rcu: Possible timer handling issue on cpu=6 timer-softirq=24253
Nov 23 23:33:46 GUEST-05.FQDN kernel: rcu: rcu_sched kthread starved for 59999 jiffies! g281213 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=6
Nov 23 23:33:46 GUEST-05.FQDN kernel: rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior.
Nov 23 23:33:46 GUEST-05.FQDN kernel: rcu: RCU grace-period kthread stack dump:
Nov 23 23:33:46 GUEST-05.FQDN kernel: task:rcu_sched state:I stack: 0 pid: 13 ppid: 2 flags:0x80004080
Nov 23 23:33:46 GUEST-05.FQDN kernel: Call Trace:
Nov 23 23:33:46 GUEST-05.FQDN kernel: __schedule+0x2d1/0x870
Nov 23 23:33:46 GUEST-05.FQDN kernel: schedule+0x55/0xf0
Nov 23 23:33:46 GUEST-05.FQDN kernel: schedule_timeout+0x197/0x300
Nov 23 23:33:46 GUEST-05.FQDN kernel: ? __next_timer_interrupt+0xf0/0xf0
Nov 23 23:33:46 GUEST-05.FQDN kernel: ? __prepare_to_swait+0x4f/0x80
Nov 23 23:33:46 GUEST-05.FQDN kernel: rcu_gp_kthread+0x512/0x8a0
Nov 23 23:33:46 GUEST-05.FQDN kernel: ? rcu_gp_cleanup+0x3b0/0x3b0
Nov 23 23:33:46 GUEST-05.FQDN kernel: kthread+0x134/0x150
Nov 23 23:33:46 GUEST-05.FQDN kernel: ? set_kthread_struct+0x50/0x50
Nov 23 23:33:46 GUEST-05.FQDN kernel: ret_from_fork+0x35/0x40
Nov 23 23:33:46 GUEST-05.FQDN kernel: rcu: Stack dump where RCU GP kthread last ran:
Nov 23 23:33:46 GUEST-05.FQDN kernel: Sending NMI from CPU 1 to CPUs 6:
Nov 23 23:33:46 GUEST-05.FQDN kernel: NMI backtrace for cpu 6
Nov 23 23:33:46 GUEST-05.FQDN kernel: CPU: 6 PID: 1651 Comm: kubelet Not tainted 4.18.0-477.15.1.el8_8.x86_64 #1
Nov 23 23:33:46 GUEST-05.FQDN kernel: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 4.2023.08-1 11/07/2023
Nov 23 23:33:46 GUEST-05.FQDN kernel: RIP: 0033:0x46eca5
Nov 23 23:33:46 GUEST-05.FQDN kernel: Code: c3 8b 06 8b 4c 1e fc 89 07 89 4c 1f fc c3 48 8b 06 48 89 07 c3 48 8b 06 48 8b 4c 1e f8 48 89 07 48 89 4c 1f f8 c3 f3 0f 6f 06 <f3> 0f 6f 4c 1e f0 f3 0f 7f 07 f3 0f 7f 4c 1f f0 c3 f3 0f 6f 06 f3
Nov 23 23:33:46 GUEST-05.FQDN kernel: RSP: 002b:000000c001163388 EFLAGS: 00000246
Nov 23 23:33:46 GUEST-05.FQDN kernel: RAX: 000000c001336cee RBX: 0000000000000020 RCX: 0000000000000020
Nov 23 23:33:46 GUEST-05.FQDN kernel: RDX: 00000000000000ee RSI: 000000c001f1b840 RDI: 000000c001336cee
Nov 23 23:33:46 GUEST-05.FQDN kernel: RBP: 000000c0011633e0 R08: 000000c001336c00 R09: 0000000000000060
Nov 23 23:33:46 GUEST-05.FQDN kernel: R10: 000000000000000e R11: 0000000004ecc38e R12: 0000000000203000
Nov 23 23:33:46 GUEST-05.FQDN kernel: R13: 0000000000000000 R14: 000000c0014f2b60 R15: 00007f2a4f8f25c3
Nov 23 23:33:46 GUEST-05.FQDN kernel: FS: 00007f2a4f0b3b38 GS: 0000000000000000
-----
And another
-----
Nov 23 19:48:11 GUEST-01.FQDN kernel: rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
Nov 23 19:48:11 GUEST-01.FQDN kernel: rcu: 4-...!: (0 ticks this GP) idle=1ee/1/0x4000000000000000 softirq=2253903/2253903 fqs=0
Nov 23 19:48:11 GUEST-01.FQDN kernel: (detected by 1, t=60002 jiffies, g=3692893, q=21730)
Nov 23 19:48:11 GUEST-01.FQDN kernel: Sending NMI from CPU 1 to CPUs 4:
Nov 23 19:48:11 GUEST-01.FQDN kernel: NMI backtrace for cpu 4
Nov 23 19:48:11 GUEST-01.FQDN kernel: CPU: 4 PID: 149082 Comm: tgtd Not tainted 4.18.0-477.15.1.el8_8.x86_64 #1
Nov 23 19:48:11 GUEST-01.FQDN kernel: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.2-0-gea1b7a073390-prebuilt.qemu.org 04/01/2014
Nov 23 19:48:11 GUEST-01.FQDN kernel: RIP: 0010:entry_SYSCALL_64_safe_stack+0x2/0xf
Nov 23 19:48:11 GUEST-01.FQDN kernel: Code: 01 f8 65 48 89 24 25 14 60 00 00 66 90 0f 20 dc 0f 1f 44 00 00 48 81 e4 ff e7 ff ff 0f 22 dc 65 48 8b 24 25 0c 60 00 00 6a 2b <65> ff 34 25 14 60 00 00 41 53 6a 33 51 50 57 56 52 51 6a da 41 50
Nov 23 19:48:11 GUEST-01.FQDN kernel: RSP: 0018:ffff9fd2113abff8 EFLAGS: 00000006
Nov 23 19:48:11 GUEST-01.FQDN kernel: RAX: 0000000000000007 RBX: 00007efc3c000b60 RCX: 00007efce67760a9
Nov 23 19:48:11 GUEST-01.FQDN kernel: RDX: 00000000ffffffff RSI: 0000000000000001 RDI: 00007efc3c000b60
Nov 23 19:48:11 GUEST-01.FQDN kernel: RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000000000000
Nov 23 19:48:11 GUEST-01.FQDN kernel: R10: 0000000000b5c7ec R11: 0000000000000293 R12: 00000000ffffffff
Nov 23 19:48:11 GUEST-01.FQDN kernel: R13: 0000000001a71020 R14: 00007efc44000bb8 R15: 0000000000000000
Nov 23 19:48:11 GUEST-01.FQDN kernel: FS: 00007efc3a7fc700(0000) GS:ffff90370fd00000(0000) knlGS:0000000000000000
Nov 23 19:48:11 GUEST-01.FQDN kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Nov 23 19:48:11 GUEST-01.FQDN kernel: CR2: 00007f7c8d18f000 CR3: 0000000462880000 CR4: 00000000000006e0
Nov 23 19:48:11 GUEST-01.FQDN kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Nov 23 19:48:11 GUEST-01.FQDN kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Nov 23 19:48:11 GUEST-01.FQDN kernel: Call Trace:
Nov 23 19:48:11 GUEST-01.FQDN kernel: rcu: rcu_sched kthread timer wakeup didn't happen for 59999 jiffies! g3692893 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
Nov 23 19:48:11 GUEST-01.FQDN kernel: rcu: Possible timer handling issue on cpu=4 timer-softirq=520677
Nov 23 19:48:11 GUEST-01.FQDN kernel: rcu: rcu_sched kthread starved for 60002 jiffies! g3692893 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=4
Nov 23 19:48:11 GUEST-01.FQDN kernel: rcu: Unless rcu_sched kthread gets sufficient CPU time, OOM is now expected behavior.
Nov 23 19:48:11 GUEST-01.FQDN kernel: rcu: RCU grace-period kthread stack dump:
Nov 23 19:48:11 GUEST-01.FQDN kernel: task:rcu_sched state:I stack: 0 pid: 13 ppid: 2 flags:0x80004080
Nov 23 19:48:11 GUEST-01.FQDN kernel: Call Trace:
Nov 23 19:48:11 GUEST-01.FQDN kernel: __schedule+0x2d1/0x870
Nov 23 19:48:11 GUEST-01.FQDN kernel: schedule+0x55/0xf0
Nov 23 19:48:11 GUEST-01.FQDN kernel: schedule_timeout+0x197/0x300
Nov 23 19:48:11 GUEST-01.FQDN kernel: ? __next_timer_interrupt+0xf0/0xf0
Nov 23 19:48:11 GUEST-01.FQDN kernel: ? __prepare_to_swait+0x4f/0x80
Nov 23 19:48:11 GUEST-01.FQDN kernel: rcu_gp_kthread+0x512/0x8a0
Nov 23 19:48:11 GUEST-01.FQDN kernel: ? rcu_gp_cleanup+0x3b0/0x3b0
Nov 23 19:48:11 GUEST-01.FQDN kernel: kthread+0x134/0x150
Nov 23 19:48:11 GUEST-01.FQDN kernel: ? set_kthread_struct+0x50/0x50
Nov 23 19:48:11 GUEST-01.FQDN kernel: ret_from_fork+0x35/0x40
Nov 23 19:48:11 GUEST-01.FQDN kernel: rcu: Stack dump where RCU GP kthread last ran:
Nov 23 19:48:11 GUEST-01.FQDN kernel: Sending NMI from CPU 1 to CPUs 4:
Nov 23 19:48:11 GUEST-01.FQDN kernel: NMI backtrace for cpu 4
Nov 23 19:48:11 GUEST-01.FQDN kernel: CPU: 4 PID: 149082 Comm: tgtd Not tainted 4.18.0-477.15.1.el8_8.x86_64 #1
Nov 23 19:48:11 GUEST-01.FQDN kernel: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.2-0-gea1b7a073390-prebuilt.qemu.org 04/01/2014
Nov 23 19:48:11 GUEST-01.FQDN kernel: RIP: 0010:_raw_spin_lock_irqsave+0x5/0x40
Nov 23 19:48:11 GUEST-01.FQDN kernel: Code: 90 83 e8 01 75 e8 65 8b 3d 08 6d 21 79 e8 c3 e2 72 ff 48 29 e8 4c 39 e0 76 cf 80 0b 08 eb 88 90 90 90 90 90 90 0f 1f 44 00 00 <53> 9c 58 0f 1f 44 00 00 48 89 c3 fa 66 0f 1f 44 00 00 31 c0 ba 01
Nov 23 19:48:11 GUEST-01.FQDN kernel: RSP: 0018:ffff9fd2113aba98 EFLAGS: 00000246
Nov 23 19:48:11 GUEST-01.FQDN kernel: RAX: 0000000000000019 RBX: ffff903435ccd998 RCX: ffff902cc42e3c00
Nov 23 19:48:11 GUEST-01.FQDN kernel: RDX: 0000000000000001 RSI: ffff9fd2113abc90 RDI: ffff903435ccd998
Nov 23 19:48:11 GUEST-01.FQDN kernel: RBP: ffff9fd2113abc90 R08: ffff902cc42e3c01 R09: 0000000000000000
Nov 23 19:48:11 GUEST-01.FQDN kernel: R10: 0000000000000000 R11: 0000000000000002 R12: 0000000000000019
Nov 23 19:48:11 GUEST-01.FQDN kernel: R13: ffff903435ccd998 R14: 0000000000000000 R15: ffff9fd2113abb5c
Nov 23 19:48:11 GUEST-01.FQDN kernel: FS: 00007efc3a7fc700(0000) GS:ffff90370fd00000(0000) knlGS:0000000000000000
Nov 23 19:48:11 GUEST-01.FQDN kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Nov 23 19:48:11 GUEST-01.FQDN kernel: CR2: 00007f7c8d18f000 CR3: 0000000462880000 CR4: 00000000000006e0
Nov 23 19:48:11 GUEST-01.FQDN kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Nov 23 19:48:11 GUEST-01.FQDN kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Nov 23 19:48:11 GUEST-01.FQDN kernel: Call Trace:
Nov 23 19:48:11 GUEST-01.FQDN kernel: add_wait_queue+0x1d/0xa0
Nov 23 19:48:11 GUEST-01.FQDN kernel: timerfd_poll+0x32/0x60
Nov 23 19:48:11 GUEST-01.FQDN kernel: do_sys_poll+0x255/0x570
Nov 23 19:48:11 GUEST-01.FQDN kernel: ? cpumask_next_and+0x1a/0x20
Nov 23 19:48:11 GUEST-01.FQDN kernel: ? kick_ilb+0x4b/0xd0
Nov 23 19:48:11 GUEST-01.FQDN kernel: ? available_idle_cpu+0x41/0x50
Nov 23 19:48:11 GUEST-01.FQDN kernel: ? select_idle_sibling+0x141/0x6d0
Nov 23 19:48:11 GUEST-01.FQDN kernel: ? newidle_balance+0x2f8/0x3c0
Nov 23 19:48:11 GUEST-01.FQDN kernel: ? update_load_avg+0x676/0x710
Nov 23 19:48:11 GUEST-01.FQDN kernel: ? poll_initwait+0x40/0x40
Nov 23 19:48:11 GUEST-01.FQDN kernel: ? compat_poll_select_copy_remaining+0x150/0x150
Nov 23 19:48:11 GUEST-01.FQDN kernel: ? check_preempt_curr+0x6a/0xa0
Nov 23 19:48:11 GUEST-01.FQDN kernel: ? ttwu_do_wakeup+0x19/0x170
Nov 23 19:48:11 GUEST-01.FQDN kernel: ? try_to_wake_up+0x1b4/0x4e0
Nov 23 19:48:11 GUEST-01.FQDN kernel: ? __wake_up_common+0x7a/0x190
Nov 23 19:48:11 GUEST-01.FQDN kernel: ? common_interrupt+0xa/0xf
Nov 23 19:48:11 GUEST-01.FQDN kernel: ? common_interrupt+0xa/0xf
Nov 23 19:48:11 GUEST-01.FQDN kernel: ? auditd_test_task+0x24/0x30
Nov 23 19:48:11 GUEST-01.FQDN kernel: ? __audit_syscall_entry+0xf2/0x140
Nov 23 19:48:11 GUEST-01.FQDN kernel: ? syscall_trace_enter+0x1ff/0x2d0
Nov 23 19:48:11 GUEST-01.FQDN kernel: __x64_sys_poll+0x37/0x130
Nov 23 19:48:11 GUEST-01.FQDN kernel: do_syscall_64+0x5b/0x1b0
Nov 23 19:48:11 GUEST-01.FQDN kernel: entry_SYSCALL_64_after_hwframe+0x61/0xc6
Nov 23 19:48:11 GUEST-01.FQDN kernel: RIP: 0033:0x7efce67760a9
Nov 23 19:48:11 GUEST-01.FQDN kernel: Code: 00 41 54 55 41 89 d4 53 48 89 f5 48 89 fb 48 83 ec 10 e8 9a ca f8 ff 44 89 e2 41 89 c0 48 89 ee 48 89 df b8 07 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 31 44 89 c7 89 44 24 0c e8 d3 ca f8 ff 8b 44
Nov 23 19:48:11 GUEST-01.FQDN kernel: RSP: 002b:00007efc3a7fbe90 EFLAGS: 00000293 ORIG_RAX: 0000000000000007
Nov 23 19:48:11 GUEST-01.FQDN kernel: RAX: ffffffffffffffda RBX: 00007efc3c000b60 RCX: 00007efce67760a9
Nov 23 19:48:11 GUEST-01.FQDN kernel: RDX: 00000000ffffffff RSI: 0000000000000001 RDI: 00007efc3c000b60
Nov 23 19:48:11 GUEST-01.FQDN kernel: RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000000000000
Nov 23 19:48:11 GUEST-01.FQDN kernel: R10: 0000000000b5c7ec R11: 0000000000000293 R12: 00000000ffffffff
Nov 23 19:48:11 GUEST-01.FQDN kernel: R13: 0000000001a71020 R14: 00007efc44000bb8 R15: 0000000000000000
 
Hi,
can you please share the configuration of an affected VM (qm config <ID>, replacing <ID> with the actual ID)? Is there anything in the system log on the source or target host of the migration?

Since you have identical CPUs on your servers, you can try using the host CPU type for the VMs if you aren't already. See the CPU Type section here for more information: https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_cpu
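If the CPU models really do match across the cluster, switching a VM to the host type can also be done from the shell; a sketch, with VMID 100 as a placeholder:

```shell
# Use the host CPU model for VM 100 (placeholder ID).
# A full stop/start of the VM is needed for the change to take effect.
qm set 100 --cpu host

# Verify the resulting setting
qm config 100 | grep ^cpu
```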
 
We have the same problem, with E5-2690 v4 and E5-2690 v1 CPUs.
The kernel in the VM doesn't matter; CentOS 7 and Debian 10/11/12 with their kernels all had the problem.
All VMs have CPU type kvm64.
An example config of a VM:

Code:
bootdisk: virtio0
cores: 2
ide2: none,media=cdrom
memory: 8192
name: varnish-t
net0: virtio=3A:41:52:A2:1C:60,bridge=vmbr30
numa: 0
ostype: l26
smbios1: uuid=a344c88b-b049-41dc-b586-d3acc93e85a5
sockets: 2
virtio0: CephSSD01:vm-114-disk-1,size=16G

Here too, reverting to a 6.2.16 kernel is the temporary solution.

We just started on one of our four clusters, which now has three hosts on Proxmox 8.1 and the remaining two still on 8.0.
Even if using the host CPU type would help, it wouldn't be a solution for us, as two of our four clusters have hosts with different CPU generations.

The kern.log on the host doesn't show any unusual lines.
 
Hi,
can you please share the configuration of an affected VM (qm config <ID>, replacing <ID> with the actual ID)? Is there anything in the system log on the source or target host of the migration?

Since you have identical CPUs on your servers, you can try using the host CPU type for the VMs if you aren't already. See the CPU Type section here for more information: https://pve.proxmox.com/pve-docs/chapter-qm.html#qm_cpu

This is one with aio=io_uring and iothreads:

[root@HOST-01]: ~ $ qm config 123
agent: 1
boot: order=scsi0
cores: 8
cpu: x86-64-v2-AES
memory: 49152
name: VM-01
net0: vmxnet3=2A:4E:52:10:18:5E,bridge=vmbr0,tag=404
numa: 0
scsi0: store01:vm-123-disk-0,iothread=1,size=100G
scsi1: store01:vm-123-disk-2,iothread=1,size=1500G
scsihw: virtio-scsi-single
smbios1: uuid=74a87e33-0e3c-4d51-8c67-13d5c26775c6
sockets: 1
tags: ansible_controlled;linux
vmgenid: 506116fd-931f-4633-a8dc-61ae0eb4cd91

And here is one with aio=threads and iothread enabled:

[root@HOST-03]: ~ $ qm config 117
agent: 1
bios: ovmf
boot: order=scsi0
cores: 8
cpu: x86-64-v2-AES
efidisk0: store01:vm-117-disk-2,efitype=4m,pre-enrolled-keys=1,size=528K
memory: 49152
name: VM-05
net0: vmxnet3=00:50:56:b5:c4:23,bridge=vmbr0,tag=404
numa: 0
scsi0: store01:vm-117-disk-0,aio=threads,cache=none,iothread=1,size=32G
scsi1: store01:vm-117-disk-1,aio=threads,cache=none,iothread=1,size=1500G
scsihw: virtio-scsi-single
smbios1: uuid=2c43bd44-4bd2-4f3f-a110-afdfa1ad6f75
sockets: 1
tags: ansible_controlled;linux
vmgenid: 4b6343bd-57fb-4472-9129-0569ebb0f7ed


Switching to "host" is not an option, since we are planning to add hardware that is slightly older. However, I already tried x86-64-v4; that does not make a difference.


------


I can see this:

Nov 23 23:57:47 HOST-03 kernel: x86/split lock detection: #AC: CPU 1/KVM/175802 took a split_lock trap at address: 0xbfe5d050
Nov 23 23:57:47 HOST-03 kernel: x86/split lock detection: #AC: CPU 5/KVM/175806 took a split_lock trap at address: 0xbfe5d050
Nov 23 23:57:47 HOST-03 kernel: x86/split lock detection: #AC: CPU 4/KVM/175805 took a split_lock trap at address: 0xbfe5d050
Nov 23 23:57:47 HOST-03 kernel: x86/split lock detection: #AC: CPU 2/KVM/175803 took a split_lock trap at address: 0xbfe5d050
Nov 23 23:57:47 HOST-03 kernel: x86/split lock detection: #AC: CPU 3/KVM/175804 took a split_lock trap at address: 0xbfe5d050
Nov 23 23:57:47 HOST-03 kernel: x86/split lock detection: #AC: CPU 6/KVM/175807 took a split_lock trap at address: 0xbfe5d050
Nov 23 23:57:47 HOST-03 kernel: x86/split lock detection: #AC: CPU 7/KVM/175808 took a split_lock trap at address: 0xbfe5d050

but I am not 100% certain whether it was during a crash. I can find this on all systems; anything specific I should be looking for?
 
Hello all,
I did the upgrade on our cluster (Lab and not PROD, for heaven's sake) today and ran into the same problem. I needed to reset all VMs (Debian and Alma).
We run the VMs with the default CPU type, and our cluster has three nodes (HPE DL 360 Gen 9 with Intel(R) Xeon(R) CPU E5-2690 v3).
 
Unfortunately, the issue is not too easy to reproduce on our end; we are still investigating.

While it's not clear that it's the same issue, a user on the German forum also mentioned running into RCU stalls and found that turning the pcid CPU flag off was a workaround: https://forum.proxmox.com/threads/136948/post-609358 Somebody might want to try that too as an alternative to downgrading the kernel.
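For reference, the pcid flag can also be toggled per VM from the CLI, not only via the advanced CPU options in the UI. A sketch, using VMID 123 and the x86-64-v2-AES type from the configs earlier in this thread as placeholders:

```shell
# Disable the pcid flag for VM 123 while keeping its CPU type.
# In the cpu property, flags are separated by ';' and prefixed
# with '+' (enable) or '-' (disable).
qm set 123 --cpu 'x86-64-v2-AES,flags=-pcid'

# Verify the resulting setting
qm config 123 | grep ^cpu

# The VM needs a full stop/start (not just a reboot from inside
# the guest) for the new CPU flags to take effect.
```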

Nov 23 23:57:47 HOST-03 kernel: x86/split lock detection: #AC: CPU 1/KVM/175802 took a split_lock trap at address: 0xbfe5d050
Nov 23 23:57:47 HOST-03 kernel: x86/split lock detection: #AC: CPU 5/KVM/175806 took a split_lock trap at address: 0xbfe5d050
Nov 23 23:57:47 HOST-03 kernel: x86/split lock detection: #AC: CPU 4/KVM/175805 took a split_lock trap at address: 0xbfe5d050
Nov 23 23:57:47 HOST-03 kernel: x86/split lock detection: #AC: CPU 2/KVM/175803 took a split_lock trap at address: 0xbfe5d050
Nov 23 23:57:47 HOST-03 kernel: x86/split lock detection: #AC: CPU 3/KVM/175804 took a split_lock trap at address: 0xbfe5d050
Nov 23 23:57:47 HOST-03 kernel: x86/split lock detection: #AC: CPU 6/KVM/175807 took a split_lock trap at address: 0xbfe5d050
Nov 23 23:57:47 HOST-03 kernel: x86/split lock detection: #AC: CPU 7/KVM/175808 took a split_lock trap at address: 0xbfe5d050
These should just be warnings and should not cause actual issues AFAIK (except for a bit of performance loss). See here for more information: https://forum.proxmox.com/threads/x86-split-lock-detection.111544/post-486595
 
Hello all,
I did the upgrade on our cluster (Lab and not PROD, for heaven's sake) today and ran into the same problem. I needed to reset all VMs (Debian and Alma).
We run the VMs with the default CPU type, and our cluster has three nodes (HPE DL 360 Gen 9 with Intel(R) Xeon(R) CPU E5-2690 v3).
Can you please share the VM configuration?
 
I think I'm able to reproduce the issue now on some nodes with Intel(R) Xeon(R) CPU E5-2620 v4. Will try to narrow down/bisect to find the kernel change introducing the issue.
 
I think I'm able to reproduce the issue now on some nodes with Intel(R) Xeon(R) CPU E5-2620 v4. Will try to narrow down/bisect to find the kernel change introducing the issue.
Our Lab is running: 48 x Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz (2 Sockets)
(HPE DL 360 Gen 9)
We had those RCU stalls too.
Our production is running on AMD EPYC, so are there no problems to expect? Or is it better to wait a week with the upgrade?
 
Can you please share the VM configuration?
One of the VMs that crashed:
Code:
agent: 1
boot: order=scsi0;ide2;net0
cores: 2
ide2: none,media=cdrom
memory: 4096
meta: creation-qemu=8.0.2,ctime=1693310244
name: grafana-01.lab.ch
net0: virtio=AA:34:95:60:78:F8,bridge=vmbr0,firewall=1,tag=303
numa: 0
ostype: l26
scsi0: performance:vm-106-disk-0,discard=on,iothread=1,size=22G
scsihw: virtio-scsi-single
smbios1: uuid=4e35441a-5c31-4869-9dbb-f573d1ce8f6e
sockets: 1
vmgenid: dc4ccfc0-bd7b-4345-98a7-218b93a27c84
 
Here is the qm config for a VM that had the issue:
Code:
# qm config  902
bootdisk: scsi0
cores: 1
ide2: none,media=cdrom
memory: 1024
name: ldap-master2
net0: virtio=92:EC:4F:23:5C:37,bridge=vmbr3,tag=3
numa: 0
onboot: 1
ostype: l26
protection: 1
scsi0: nvme-4tb:vm-902-disk-0,discard=on,size=30G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=1213762c-1580-4314-89a8-5a57837d0bd2
sockets: 4
vmgenid: ac029fbe-3d63-4a11-9888-bf1b05ea1312

The vast majority of our KVM guests have no CPU type set.


We use Ceph storage.
 
Yes, I also ran across this issue. I did find another post about it, but it looks like this one has more activity. Basically the same issue on Proxmox 8.1.3 (on an i5-12450H cluster) with a Debian guest. Variations of migrations using a virtio NIC and the kvm64, qemu64, x86-64-v2, x86-64-v2-AES, x86-64-v3, and host CPU types would cause the RCU issue above. QEMU agent stats would go blank and the guest VM would freeze up. Disabling PCID under the advanced section of the CPU options definitely helped a lot; with it disabled, 99% of migrations with any CPU type and virtual NIC were fine.
 
FYI, a preliminary fix for the issue was applied in git and will be included in the next kernel build. There is no package available yet, but there should be one soon if no issues pop up during internal testing.

EDIT: build is currently available on the pvetest repository. You can temporarily enable it (e.g. via the Repositories window in the UI, select your node, it's a sub-entry of Updates), run apt update, pull in the updated kernel with apt install proxmox-kernel-6.5, disable the repository and run apt update again.

EDIT2: To be specific, the first version with the fix is 6.5.11-5. The package has also been available on the no-subscription repository for a while now, and it likely won't be too long until it's available on the enterprise repository too.
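For anyone preferring the CLI over the Repositories window, the steps from the edit above roughly translate to the following; a sketch, assuming a Debian bookworm based PVE 8 host:

```shell
# Temporarily enable the pvetest repository
echo 'deb http://download.proxmox.com/debian/pve bookworm pvetest' \
    > /etc/apt/sources.list.d/pvetest.list

# Pull in the fixed kernel (6.5.11-5 or newer)
apt update
apt install proxmox-kernel-6.5

# Disable the test repository again and refresh the package index
rm /etc/apt/sources.list.d/pvetest.list
apt update
```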

Our Production is running on AMD EPYC, so there are no problems to expect? Or better to wait one week with upgrade?
If I didn't miss anything, nobody with AMD CPUs has reported the issue yet, and the fix from upstream also talks about Intel CPUs. So you should be fine, but it also shouldn't be too long until a fixed kernel is available.
 
Hello,
we have the RCU issue. Five nodes, all systems using CPU type "80 x Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz (2 Sockets)".

How do I set -pcid?
I did it via the interface.
 

Same here with hosts running Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz.

Problem resolved with 6.5.11-4-pve.

Thank you!
 
