VM CPU issues: watchdog: BUG: soft lockup - CPU#7 stuck for 22s!

I am getting "watchdog: BUG: soft lockup - CPU#x stuck..." message on a host with local-only storage, no VMs, no ZFS, running Proxmox 9.0.1. It's just Proxmox running alone off local NVMe SSD.
 
>running Proxmox 9.0.1

So why are you running Proxmox 9.0.1 instead of 9.0.11?
Sorry, typo:
CPU(s) 16 x 13th Gen Intel(R) Core(TM) i5-13500H (1 Socket)
Kernel Version Linux 6.14.11-4-pve (2025-10-10T08:04Z)
Boot Mode EFI (Secure Boot)
Manager Version pve-manager/9.0.11/3bf5476b8a4699e2
 
> I am getting "watchdog: BUG: soft lockup - CPU#x stuck..." message on a host with local-only storage, no VMs, no ZFS, running Proxmox 9.0.1. It's just Proxmox running alone off local NVMe SSD.
Same here, it happened yesterday. But I do have a CIFS share to another machine mounted on the host side and mapped into one LXC (PBS). Latest version too.

System Information Manufacturer: MINIX | Product Name: NEO Z150-0dB
Kernel Version Linux 6.14.11-4-pve (2025-10-10T08:04Z)
Boot Mode EFI (Secure Boot)
Manager Version pve-manager/9.0.11/3bf5476b8a4699e2

Code:
Nov 03 22:01:55 pve pvestatd[979]: proxmox-backup-client failed: Error: http request timed out
Nov 03 22:01:55 pve pvestatd[979]: status update time (120.242 seconds)
Nov 03 22:02:56 pve kernel: CIFS: VFS: \\192.168.0.100 has not responded in 180 seconds. Reconnecting...
Nov 03 22:02:56 pve kernel: CIFS: VFS: close cached dir rc -11
Nov 03 22:03:06 pve pvestatd[979]: proxmox-backup-client failed: Error: EHOSTDOWN: Host is down
Nov 03 22:03:07 pve pvestatd[979]: status update time (71.617 seconds)
Nov 03 22:03:43 pve kernel: watchdog: BUG: soft lockup - CPU#2 stuck for 26s! [tokio-runtime-w:622612]
Nov 03 22:03:43 pve kernel: Modules linked in: dm_snapshot bluetooth nf_conntrack_netlink xt_nat xt_tcpudp macvlan xt_conntrack xt_MASQUERADE xt_set nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_addrtype nft_compat xfrm_user xfrm_algo overlay cmac nls_utf8 cifs cifs_arc4 nls_ucs2_utils rdma_cm >
Nov 03 22:03:43 pve kernel:  x86_pkg_temp_thermal soundwire_bus intel_powerclamp snd_soc_sdca coretemp snd_soc_avs kvm_intel snd_soc_hda_codec snd_hda_ext_core i915 snd_soc_core kvm snd_compress ac97_bus snd_pcm_dmaengine snd_hda_intel sch_fq_codel snd_intel_dspcfg irqbypass snd_intel_sdw_acpi polyval_clmulni polyv>
Nov 03 22:03:43 pve kernel: CPU: 2 UID: 34 PID: 622612 Comm: tokio-runtime-w Tainted: P        W  O       6.14.11-4-pve #1
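As a side note on the CIFS stall in the log above ("has not responded in 180 seconds"): a default (hard) CIFS mount blocks I/O indefinitely while the server is away, which can wedge pvestatd and anything else touching the mount. A hedged workaround sketch, assuming the share path and credentials file are placeholders for your own setup:

```shell
# Hypothetical share/paths -- adjust server, share and credentials file.
# 'soft'            -> fail I/O with an error instead of blocking forever
#                      when the server stops responding (default is hard).
# 'echo_interval=15'-> probe the server every 15s, so dead sessions are
#                      detected and reconnected sooner than the 180s above.
mount -t cifs //192.168.0.100/backup /mnt/pbs-share \
      -o credentials=/etc/samba/cred,vers=3.1.1,soft,echo_interval=15

# Equivalent /etc/fstab line:
# //192.168.0.100/backup /mnt/pbs-share cifs credentials=/etc/samba/cred,vers=3.1.1,soft,echo_interval=15,_netdev 0 0
```

Note that soft mounts trade hangs for I/O errors, so the client (PBS in this case) must tolerate failed reads/writes.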
 
The io_thread method cannot help with OS installation, since the installer ISO was in use and the system became unresponsive during the process. I'm on version 9.

The only way to bypass this soft freeze is to stop all other VMs, which is very annoying. Otherwise, no new VM can be installed.

 
I'm on Proxmox 9.0.11 with a similar problem. For storage I'm using local LVM-Thin. It should be pretty fast storage; it's just LVM-Thin running on some NVMe disks.

I already had IO thread checked (aka iothread=1)

And my two VMs were using 'VirtIO SCSI single' for the SCSI controller. But the CPU type was set to something other than "host", so I changed the CPU to 'host', and while I was at it I set the Async IO (aka aio) to 'threads' instead of the default 'io_uring'.

I don't know if reducing the cores, changing the CPU to 'host', and setting Async IO to 'threads' (aka aio=threads) will fix the issue, but it's all I could come up with after going over this forum post and its comments.
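For reference, the same settings can be applied from the CLI with `qm set`. A sketch with a hypothetical VMID (100) and volume name; note that `qm set` replaces the whole drive string, so re-specify it completely:

```shell
# Check the current drive string first; qm set replaces it wholesale.
qm config 100 | grep scsi0

# Hypothetical VMID/volume: re-apply the drive with an I/O thread and
# thread-based async I/O instead of the io_uring default.
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1,aio=threads

# Switch the virtual CPU type to 'host'.
qm set 100 --cpu host
```

The VM needs a full stop/start (not just a reboot from inside the guest) for the new drive options to take effect.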

----

EDIT: Nope, that did not do the trick. I don't see the error in the kernel logs inside the VMs anymore, but there's still clearly something going on with resources: any kind of activity in one VM causes the other VM to become unresponsive until I stop the activity in the first VM. This is very odd.
 
I have had exactly the same problems for a few days now...
Not all VMs are affected, but some are (seemingly at random), and resetting them is the only way to get them back.


Proxmox 9.1.2
Linux proxmox 6.17.2-2-pve #1 SMP PREEMPT_DYNAMIC PMX 6.17.2-2 (2025-11-26T12:33Z) x86_64
Boot Mode EFI (Secure Boot)
pve-manager/9.1.2/9d436f37a0ac4172 (running kernel: 6.17.2-2-pve)


Any ideas?
 
Same here, in a VM. It happens at backup time:

Code:
déc. 06 21:04:20 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
déc. 06 21:04:32 nextcloud kernel: watchdog: BUG: soft lockup - CPU#1 stuck for 149s! [kcompactd0:67]
déc. 06 21:04:32 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
déc. 06 21:04:44 nextcloud kernel: watchdog: BUG: soft lockup - CPU#7 stuck for 101s! [kworker/7:1:963237]
déc. 06 21:04:44 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
déc. 06 21:04:48 nextcloud kernel: watchdog: BUG: soft lockup - CPU#2 stuck for 179s! [kworker/2:3:981254]
déc. 06 21:04:48 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
déc. 06 21:04:51 nextcloud kernel: watchdog: BUG: soft lockup - CPU#4 stuck for 174s! [php-fpm8.3:981202]
déc. 06 21:04:51 nextcloud kernel: watchdog: BUG: soft lockup - CPU#6 stuck for 148s! [php-fpm8.3:982878]
déc. 06 21:04:51 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
déc. 06 21:04:51 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
déc. 11 21:01:00 nextcloud kernel: watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [kworker/0:0:3719795]
déc. 11 21:01:00 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
déc. 12 21:02:40 nextcloud kernel: watchdog: BUG: soft lockup - CPU#6 stuck for 22s! [kcompactd0:67]
déc. 12 21:02:40 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
déc. 12 21:03:08 nextcloud kernel: watchdog: BUG: soft lockup - CPU#6 stuck for 48s! [kcompactd0:67]
déc. 12 21:03:09 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
déc. 12 21:03:22 nextcloud kernel: watchdog: BUG: soft lockup - CPU#4 stuck for 72s! [swapper/4:0]
déc. 12 21:03:22 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
déc. 12 21:03:22 nextcloud kernel: watchdog: BUG: soft lockup - CPU#1 stuck for 69s! [kworker/u16:0:42652]
déc. 12 21:03:22 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
déc. 12 21:03:56 nextcloud kernel: watchdog: BUG: soft lockup - CPU#7 stuck for 26s! [kworker/7:1:30049]
déc. 12 21:03:56 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
déc. 18 21:00:56 nextcloud kernel: watchdog: BUG: soft lockup - CPU#3 stuck for 26s! [kworker/3:3:2805252]
déc. 18 21:00:56 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
déc. 18 21:01:16 nextcloud kernel: watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [kcompactd0:67]
déc. 18 21:01:16 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
déc. 18 21:01:24 nextcloud kernel: watchdog: BUG: soft lockup - CPU#3 stuck for 52s! [kworker/3:3:2805252]
déc. 18 21:01:24 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
déc. 18 21:01:39 nextcloud kernel: watchdog: BUG: soft lockup - CPU#7 stuck for 43s! [postgres:2823746]
déc. 18 21:01:39 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
janv. 31 21:01:32 nextcloud kernel: watchdog: BUG: soft lockup - CPU#5 stuck for 22s! [kworker/5:1:3754200]
janv. 31 21:01:32 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
févr. 07 21:01:48 nextcloud kernel: watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [kworker/3:1:2696771]
févr. 07 21:01:48 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
févr. 07 21:01:52 nextcloud kernel: watchdog: BUG: soft lockup - CPU#1 stuck for 22s! [kcompactd0:67]
févr. 07 21:01:52 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
févr. 07 21:02:16 nextcloud kernel: watchdog: BUG: soft lockup - CPU#3 stuck for 48s! [kworker/3:1:2696771]
févr. 07 21:02:16 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
févr. 07 21:02:20 nextcloud kernel: watchdog: BUG: soft lockup - CPU#1 stuck for 48s! [kcompactd0:67]
févr. 07 21:02:20 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
févr. 07 21:02:24 nextcloud kernel: watchdog: BUG: soft lockup - CPU#7 stuck for 51s! [php-fpm8.3:2719309]
févr. 07 21:02:24 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
févr. 07 21:02:24 nextcloud kernel: watchdog: BUG: soft lockup - CPU#6 stuck for 51s! [php-fpm8.3:2719226]
févr. 07 21:02:24 nextcloud kernel:  ? lockup_detector_update_enable+0x60/0x60
 
I have the same problem, I think.
There are no updates available, and at that time the backups had already finished.


Code:
Apr 02 08:32:13 pve2 kernel: watchdog: BUG: soft lockup - CPU#1 stuck for 48s! [kcompactd0:50]
Apr 02 08:32:13 pve2 kernel: Modules linked in: nf_conntrack_netlink xt_conntrack xfrm_user xfrm_algo xt_set xt_addrtype overlay nft_chain_nat xt_MASQUERADE xt_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_tcpudp nft_compat veth cmac nls_utf8 cifs cifs_arc4 nls_ucs2_utils rdma_cm iw_cm ib_cm ib_core cifs_md4 netfs ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables iptable_filter sctp ip6_udp_tunnel udp_tunnel nf_tables bonding tls softdog binfmt_misc nfnetlink_log snd_hda_codec_intelhdmi snd_hda_intel snd_sof_pci_intel_tgl snd_sof_pci_intel_cnl snd_sof_intel_hda_generic soundwire_intel snd_sof_intel_hda_sdw_bpt snd_sof_intel_hda_common snd_soc_hdac_hda snd_sof_intel_hda_mlink snd_sof_intel_hda snd_hda_codec_hdmi xe soundwire_cadence snd_sof_pci snd_sof_xtensa_dsp snd_sof snd_sof_utils snd_soc_acpi_intel_match snd_soc_acpi_intel_sdca_quirks soundwire_generic_allocation snd_soc_acpi soundwire_bus gpu_sched snd_soc_sdca drm_gpuvm crc8 intel_rapl_msr drm_gpusvm_helper intel_rapl_common
Apr 02 08:32:13 pve2 kernel:  drm_ttm_helper snd_soc_avs nfsd drm_exec rtw88_8822ce drm_suballoc_helper snd_soc_hda_codec x86_pkg_temp_thermal snd_hda_ext_core rtw88_8822c intel_powerclamp auth_rpcgss coretemp snd_hda_codec rtw88_pci nfs_acl snd_usb_audio snd_hda_core rtw88_core lockd snd_intel_dspcfg snd_usbmidi_lib kvm_intel snd_intel_sdw_acpi snd_ump grace snd_hwdep mei_hdcp mei_pxp kvm snd_rawmidi sch_fq_codel sunrpc snd_soc_core mac80211 snd_compress irqbypass ac97_bus snd_pcm_dmaengine snd_seq_device polyval_clmulni ghash_clmulni_intel snd_pcm i915 cmdlinepart drm_buddy aesni_intel btusb snd_timer spi_nor btrtl rapl cfg80211 btintel intel_cstate mei_me ttm snd btbcm wmi_bmof btmtk pcspkr mc libarc4 mtd ee1004 soundcore bluetooth drm_display_helper mei intel_pmc_core pmt_telemetry cec pmt_discovery pmt_class rc_core intel_hid intel_pmc_ssram_telemetry input_leds joydev i2c_algo_bit sparse_keymap intel_vsec acpi_pad acpi_tad mac_hid zfs(PO) nvme_fabrics nvme_core spl(O) nvme_keyring vhost_net nvme_auth vhost efi_pstore vhost_iotlb
Apr 02 08:32:13 pve2 kernel:  tap nfnetlink dmi_sysfs ip_tables x_tables autofs4 btrfs blake2b_generic xor raid6_pq hid_generic rndis_host cdc_ether usbnet mii usbhid hid uas usb_storage dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio sdhci_pci r8169 sdhci_uhs2 realtek ahci intel_lpss_pci sdhci i2c_i801 xhci_pci cqhci spi_intel_pci i2c_mux libahci intel_lpss i2c_smbus xhci_hcd spi_intel idma64 video wmi pinctrl_alderlake
Apr 02 08:32:13 pve2 kernel: CPU: 1 UID: 0 PID: 50 Comm: kcompactd0 Tainted: P      D W  O L      6.17.13-2-pve #1 PREEMPT(voluntary)
Apr 02 08:32:13 pve2 kernel: Tainted: [P]=PROPRIETARY_MODULE, [D]=DIE, [W]=WARN, [O]=OOT_MODULE, [L]=SOFTLOCKUP
Apr 02 08:32:13 pve2 kernel: Hardware name: SZ ReachingTech Limited DreamQuest ADN2L/, BIOS 5.27 09/24/2024
Apr 02 08:32:13 pve2 kernel: RIP: 0010:native_queued_spin_lock_slowpath+0x81/0x2d0
Apr 02 08:32:13 pve2 kernel: Code: 00 00 f0 0f ba 2b 08 0f 92 c2 8b 03 0f b6 d2 c1 e2 08 30 e4 09 d0 3d ff 00 00 00 77 5f 85 c0 74 10 0f b6 03 84 c0 74 09 f3 90 <0f> b6 03 84 c0 75 f7 b8 01 00 00 00 66 89 03 5b 41 5c 41 5d 41 5e
Apr 02 08:32:13 pve2 kernel: RSP: 0018:ffffd0bb4029b7d8 EFLAGS: 00000202
Apr 02 08:32:13 pve2 kernel: RAX: 0000000000000001 RBX: fffffcc48cfed4a8 RCX: 000fffffffe00000
Apr 02 08:32:13 pve2 kernel: RDX: 0000000000000000 RSI: 0000000000000001 RDI: fffffcc48cfed4a8
Apr 02 08:32:13 pve2 kernel: RBP: ffffd0bb4029b7f8 R08: 0000000000000000 R09: 0000000000000000
Apr 02 08:32:13 pve2 kernel: R10: ffffff8000000000 R11: 0000000000000000 R12: ffff8950c9b70860
Apr 02 08:32:13 pve2 kernel: R13: ffff8951bfb52ea8 R14: ffffd0bb4029b928 R15: ffffd0bb4029b820
Apr 02 08:32:13 pve2 kernel: FS:  0000000000000000(0000) GS:ffff895740e06000(0000) knlGS:0000000000000000
Apr 02 08:32:13 pve2 kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Apr 02 08:32:13 pve2 kernel: CR2: 00007b69fa356330 CR3: 000000011143c004 CR4: 0000000000f72ef0
Apr 02 08:32:13 pve2 kernel: PKRU: 55555554
Apr 02 08:32:13 pve2 kernel: Call Trace:
Apr 02 08:32:13 pve2 kernel:  <TASK>
Apr 02 08:32:13 pve2 kernel:  _raw_spin_lock+0x3f/0x60
Apr 02 08:32:13 pve2 kernel:  __pte_offset_map_lock+0xa5/0x130
Apr 02 08:32:13 pve2 kernel:  page_vma_mapped_walk+0x826/0x990
Apr 02 08:32:13 pve2 kernel:  ? __lruvec_stat_mod_folio+0x8b/0xf0
Apr 02 08:32:13 pve2 kernel:  remove_migration_pte+0x86/0x7a0
Apr 02 08:32:13 pve2 kernel:  __rmap_walk_file+0xc3/0x1f0
Apr 02 08:32:13 pve2 kernel:  rmap_walk+0x43/0xa0
Apr 02 08:32:13 pve2 kernel:  migrate_pages_batch+0xc4c/0xec0
Apr 02 08:32:13 pve2 kernel:  ? __pfx_compaction_free+0x10/0x10
Apr 02 08:32:13 pve2 kernel:  ? __pfx_remove_migration_pte+0x10/0x10
Apr 02 08:32:13 pve2 kernel:  migrate_pages+0xb13/0xda0
Apr 02 08:32:13 pve2 kernel:  ? __pfx_compaction_free+0x10/0x10
Apr 02 08:32:13 pve2 kernel:  ? __pfx_compaction_alloc+0x10/0x10
Apr 02 08:32:13 pve2 kernel:  compact_zone+0xb8e/0x1150
Apr 02 08:32:13 pve2 kernel:  kcompactd_do_work+0x167/0x240
Apr 02 08:32:13 pve2 kernel:  kcompactd.cold+0x5b/0x9e
Apr 02 08:32:13 pve2 kernel:  ? __pfx_autoremove_wake_function+0x10/0x10
Apr 02 08:32:13 pve2 kernel:  ? __pfx_kcompactd+0x10/0x10
Apr 02 08:32:13 pve2 kernel:  kthread+0x108/0x220
Apr 02 08:32:13 pve2 kernel:  ? __pfx_kthread+0x10/0x10
Apr 02 08:32:13 pve2 kernel:  ret_from_fork+0x205/0x240
Apr 02 08:32:13 pve2 kernel:  ? __pfx_kthread+0x10/0x10
Apr 02 08:32:13 pve2 kernel:  ret_from_fork_asm+0x1a/0x30
Apr 02 08:32:13 pve2 kernel:  </TASK>
Apr 02 08:32:16 pve2 watchdog-mux[1307]: client watchdog expired - disable watchdog updates
Apr 02 08:32:17 pve2 watchdog-mux[1307]: exit watchdog-mux with active connections
Apr 02 08:32:17 pve2 systemd-journald[338]: Received client request to sync journal.
Apr 02 08:32:17 pve2 kernel: watchdog: watchdog0: watchdog did not stop!

The machine rebooted after that.
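The reboot follows from the watchdog-mux expiry above: once the client watchdog expires, the hardware watchdog is no longer fed and the node fences itself, so the tail of the logs can be lost. As a diagnostic sketch (not a fix), the soft-lockup detector can be tuned to capture more state before that happens:

```shell
# Raise the detection threshold from the default 10s; the soft-lockup
# warning fires at roughly 2x this value (hence the "stuck for 22s" lines).
sysctl -w kernel.watchdog_thresh=30

# Dump backtraces of all CPUs to the kernel log when a soft lockup is
# detected, to see what the other cores were doing at that moment.
sysctl -w kernel.softlockup_all_cpu_backtrace=1
```

Persist these in /etc/sysctl.d/ if they help; raising the threshold only hides shorter stalls, it does not address their cause.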
 
This could be related to memory fragmentation. Do you use ZFS?
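Memory fragmentation fits the trace above (kcompactd spinning in compact_zone/migrate_pages). A quick way to check, sketched below; the 4 GiB ARC cap is only an example value, pick your own:

```shell
# Free pages per allocation order, per zone: many entries on the left
# (order 0/1) but near-zero on the right (high orders) = fragmented.
cat /proc/buddyinfo

# External fragmentation index per zone/order (needs debugfs mounted).
cat /sys/kernel/debug/extfrag/extfrag_index

# With ZFS, check the current ARC size limit (0 = auto, ~half of RAM).
cat /sys/module/zfs/parameters/zfs_arc_max

# Example: cap the ARC at 4 GiB (bytes) so more memory stays available
# for compaction and the VMs, then rebuild the initramfs.
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u
```

An unbounded ARC competing with VM memory is a common source of compaction pressure on small ZFS hosts, so this is worth ruling out first.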