Opt-in Linux 6.8 Kernel for Proxmox VE 8 available on test & no-subscription

I have two different systems running Proxmox VE 8.2.2, a Dell PowerEdge 13th gen server and a Dell PowerEdge 14th gen server.
One is able to pass through the card, the other is not.

Both have a Dell HBA330 and a Mellanox ConnectX-3 card configured for PCI passthrough.

This is the relevant error on the Dell 13th gen server:

dmesg | grep -e DMAR -e IOMMU

[ 0.010869] ACPI: DMAR 0x000000007BAFE000 0000A0 (v01 DELL PE_SC3 00000001 DELL 00000001)
[ 0.010913] ACPI: Reserving DMAR table memory at [mem 0x7bafe000-0x7bafe09f]
[ 0.076946] DMAR: IOMMU enabled
[ 0.215487] DMAR: Host address width 46
[ 0.215489] DMAR: DRHD base: 0x000000fbffc000 flags: 0x1
[ 0.215501] DMAR: dmar0: reg_base_addr fbffc000 ver 1:0 cap 8d2078c106f0466 ecap f020df
[ 0.215504] DMAR: RMRR base: 0x00000069f0e000 end: 0x00000071f15fff
[ 0.215509] DMAR: ATSR flags: 0x0
[ 0.215512] DMAR-IR: IOAPIC id 8 under DRHD base 0xfbffc000 IOMMU 0
[ 0.215514] DMAR-IR: IOAPIC id 9 under DRHD base 0xfbffc000 IOMMU 0
[ 0.215516] DMAR-IR: HPET id 0 under DRHD base 0xfbffc000
[ 0.215517] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[ 0.215788] DMAR-IR: IRQ remapping was enabled on dmar0 but we are not in kdump mode
[ 0.215883] DMAR-IR: Enabled IRQ remapping in x2apic mode

[ 0.696318] DMAR: [Firmware Bug]: RMRR entry for device 02:00.0 is broken - applying workaround
[ 0.696344] DMAR: No SATC found
[ 0.696346] DMAR: dmar0: Using Queued invalidation
[ 0.700717] DMAR: Intel(R) Virtualization Technology for Directed I/O
[ 52.218990] vfio-pci 0000:02:00.0: Firmware has requested this device have a 1:1 IOMMU mapping, rejecting configuring the device without a 1:1 mapping. Contact your platform vendor.
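If you want to see which devices the firmware pins with an RMRR, and how the IOMMU groups are laid out, something like this should work (standard sysfs paths; adjust the PCI address 02:00.0 to your HBA):

Code:
# List the IOMMU groups and the devices inside them
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=$(echo "$d" | cut -d/ -f5)
    echo "group $g: $(basename "$d")"
done

# Check whether the HBA falls inside a firmware-reserved RMRR range
dmesg | grep -i rmrr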

Interesting, that is the first Intel system I have seen displaying this error. I initially thought it would be limited to AMD systems like my problematic one. I have 2 Intel servers that pass through just fine with this kernel.

On my AMD system it is not limited to the X710 or E810 cards; it also happens with the on-board Intel NICs.
 
So the issue was what I already suspected: the card in the 13th gen system wasn't flashed properly to HBA330 and was still flashed as an H330 Mini. Identical hardware, but different firmware. It now works correctly!
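For anyone hitting the same thing, a quick way to check what the controller currently reports itself as (the HBA330 and H330 Mini use the same SAS3008 chip but different firmware and subsystem IDs; the sas3flash step assumes the Broadcom utility is installed):

Code:
# Show vendor/device and subsystem IDs of all Broadcom/LSI controllers
lspci -nn -d 1000: -vv | grep -iE 'sas|subsystem'

# If sas3flash is available, list the flashed firmware and board name
sas3flash -listall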
 
Hi, I have an IBM BladeCenter system with HS22 blades. Kernel 6.8.4-3 fails to detect disks with the mptsas module, while it works fine on 6.5.13-5.

0b:00.0 SCSI storage controller: Broadcom / LSI SAS1064ET PCI-Express Fusion-MPT SAS (rev 10)
DeviceName: LSI SAS 1064E
Subsystem: IBM SAS1064ET PCI-Express Fusion-MPT SAS
Flags: bus master, fast devsel, latency 0, IRQ 24
I/O ports at 1000
Memory at 97910000 (64-bit, non-prefetchable) [size=16K]
Memory at 97900000 (64-bit, non-prefetchable) [size=64K]
Expansion ROM at <ignored> [disabled]
Capabilities: [50] Power Management version 2
Capabilities: [68] Express Endpoint, MSI 00
Capabilities: [98] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [b0] MSI-X: Enable- Count=1 Masked-
Capabilities: [100] Advanced Error Reporting
Kernel driver in use: mptsas
Kernel modules: mptsas

The thread at https://lore.kernel.org/all/d45631a.../T/#m95bc455cffb4a57ceafdf47f349b0de293ce179c
describes the problem with the module and, at the end, mentions a solution in kernel 6.9.0-rc3.
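Until that fix makes it into a pve kernel, one workaround is to keep booting the known-good 6.5 kernel. A sketch using proxmox-boot-tool (adjust the version string to whatever "proxmox-boot-tool kernel list" shows on your node):

Code:
# Show the installed kernels
proxmox-boot-tool kernel list

# Pin the last working kernel and re-sync the boot entries
proxmox-boot-tool kernel pin 6.5.13-5-pve
proxmox-boot-tool refresh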
 
Now testing 6.8.4-3 on a Dell R420 with 2x E5-24XX (10C/20T per CPU), 64 GB RAM, and two sets of RAID SAS drives on an H310 Mini SAS controller... at least it boots without a problem so far.
 
Updated to kernel 6.8 a while back and ZFS performance has dropped. The odd thing is that in some ways it feels faster (less io-wait, maybe), but backups take roughly 10% longer to complete and I'd wager throughput is worse. Starting the box and all of the virtual machines also took a lot longer. I have NOT gone through any benchmarking yet.

Using an 8-disk rust-heap draid1:6d:8c:1s with an NVMe SLOG, and a 2x SATA SSD mirror (also with an NVMe SLOG).

Does anyone have any similar observations?

Then another thing: the motherboard is an ASUS Prime X670-P WiFi, and for some reason the power LED is now pulsing like a heartbeat. When was this introduced? I thought the box had gone into hibernation or something.

Also noticed that the only available CPU governors on an AMD Ryzen 7950X are now powersave and performance. I used "cpufreq.default_governor=conservative", which is obviously not available anymore. Googling turned up that there were some kernel changes regarding power governing on AMD (apparently the switch to the amd_pstate driver), but there are no notes about this in the changelogs either. Seems like a major change.
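For reference, you can check which scaling driver and governors the running kernel actually exposes; a sketch (standard sysfs paths, run as root to switch):

Code:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors

# Switch all cores to a governor that is still available, e.g. powersave
echo powersave | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor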

Changelogs seem to work if they work.
 
6.8.4-3-pve, node freezing randomly. Is this enough of the log, or should I get the full log?

Code:
May 18 20:39:26 cpve01 pvestatd[1806]: status update time (5.111 seconds)
May 18 20:39:36 cpve01 kernel: INFO: task fn_monstore:1712 blocked for more than 491 seconds.
May 18 20:39:36 cpve01 kernel:       Tainted: P        W  O       6.8.4-3-pve #1
May 18 20:39:36 cpve01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
May 18 20:39:36 cpve01 kernel: task:fn_monstore     state:D stack:0     pid:1712  tgid:1651  ppid:1     >
May 18 20:39:36 cpve01 kernel: Call Trace:
May 18 20:39:36 cpve01 kernel:  <TASK>
May 18 20:39:36 cpve01 kernel:  __schedule+0x401/0x15e0
May 18 20:39:36 cpve01 kernel:  ? ttwu_queue_wakelist+0x101/0x110
May 18 20:39:36 cpve01 kernel:  schedule+0x33/0x110
May 18 20:39:36 cpve01 kernel:  io_schedule+0x46/0x80
May 18 20:39:36 cpve01 kernel:  cv_wait_common+0xac/0x140 [spl]
May 18 20:39:36 cpve01 kernel:  ? __pfx_autoremove_wake_function+0x10/0x10
May 18 20:39:36 cpve01 kernel:  __cv_wait_io+0x18/0x30 [spl]
May 18 20:39:36 cpve01 kernel:  txg_wait_open+0xa1/0x100 [zfs]
May 18 20:39:36 cpve01 kernel:  dmu_free_long_range+0x450/0x500 [zfs]
May 18 20:39:36 cpve01 kernel:  zfs_rmnode+0x34a/0x460 [zfs]
May 18 20:39:36 cpve01 kernel:  zfs_zinactive+0xf2/0x100 [zfs]
May 18 20:39:36 cpve01 kernel:  zfs_inactive+0x9c/0x250 [zfs]
May 18 20:39:36 cpve01 kernel:  zpl_evict_inode+0x43/0x60 [zfs]
May 18 20:39:36 cpve01 kernel:  evict+0xc5/0x1d0
May 18 20:39:36 cpve01 kernel:  iput+0x14b/0x260
May 18 20:39:36 cpve01 kernel:  dentry_unlink_inode+0xd4/0x150
May 18 20:39:36 cpve01 kernel:  __dentry_kill+0x73/0x180
May 18 20:39:36 cpve01 kernel:  dput+0xf2/0x1b0
May 18 20:39:36 cpve01 kernel:  do_renameat2+0x3fd/0x680
May 18 20:39:36 cpve01 kernel:  __x64_sys_rename+0x44/0x60
May 18 20:39:36 cpve01 kernel:  x64_sys_call+0x1f0c/0x24b0
May 18 20:39:36 cpve01 kernel:  do_syscall_64+0x81/0x170
May 18 20:39:36 cpve01 kernel:  ? do_syscall_64+0x8d/0x170
May 18 20:39:36 cpve01 kernel:  ? do_syscall_64+0x8d/0x170
May 18 20:39:36 cpve01 kernel:  ? do_syscall_64+0x8d/0x170
May 18 20:39:36 cpve01 kernel:  ? irqentry_exit_to_user_mode+0x7b/0x260
May 18 20:39:36 cpve01 kernel:  ? irqentry_exit+0x43/0x50
May 18 20:39:36 cpve01 kernel:  entry_SYSCALL_64_after_hwframe+0x78/0x80
May 18 20:39:36 cpve01 kernel: RIP: 0033:0x781b60c7fa87
May 18 20:39:36 cpve01 kernel: RSP: 002b:0000781b539fa118 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
May 18 20:39:36 cpve01 kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 0000781b60c7fa87
May 18 20:39:36 cpve01 kernel: RDX: 0000000000000000 RSI: 0000781b539fa180 RDI: 0000781b539fb180
May 18 20:39:36 cpve01 kernel: RBP: 00005cd233615a00 R08: 0000000000000000 R09: 0000000000000073
May 18 20:39:36 cpve01 kernel: R10: 0000000000000180 R11: 0000000000000246 R12: 0000000000000024
May 18 20:39:36 cpve01 kernel: R13: 0000000000000006 R14: 0000781b539fc400 R15: 0000781b539fb180
May 18 20:39:36 cpve01 kernel:  </TASK>
May 18 20:39:36 cpve01 kernel: INFO: task pvescheduler:224122 blocked for more than 491 seconds.
May 18 20:39:36 cpve01 kernel:       Tainted: P        W  O       6.8.4-3-pve #1
May 18 20:39:36 cpve01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
May 18 20:39:36 cpve01 kernel: task:pvescheduler    state:D stack:0     pid:224122 tgid:224122 ppid:2347>
May 18 20:39:36 cpve01 kernel: Call Trace:
May 18 20:39:36 cpve01 kernel:  <TASK>
May 18 20:39:36 cpve01 kernel:  __schedule+0x401/0x15e0
May 18 20:39:36 cpve01 kernel:  ? ttwu_queue_wakelist+0x101/0x110
May 18 20:39:36 cpve01 kernel:  schedule+0x33/0x110
May 18 20:39:36 cpve01 kernel:  io_schedule+0x46/0x80
May 18 20:39:36 cpve01 kernel:  cv_wait_common+0xac/0x140 [spl]
May 18 20:39:36 cpve01 kernel:  ? __pfx_autoremove_wake_function+0x10/0x10
May 18 20:39:36 cpve01 kernel:  __cv_wait_io+0x18/0x30 [spl]
May 18 20:39:36 cpve01 kernel:  txg_wait_open+0xa1/0x100 [zfs]
May 18 20:39:36 cpve01 kernel:  dmu_free_long_range+0x450/0x500 [zfs]
May 18 20:39:36 cpve01 kernel:  zfs_rmnode+0x34a/0x460 [zfs]
May 18 20:39:36 cpve01 kernel:  zfs_zinactive+0xf2/0x100 [zfs]
May 18 20:39:36 cpve01 kernel:  zfs_inactive+0x9c/0x250 [zfs]
May 18 20:39:36 cpve01 kernel:  zpl_evict_inode+0x43/0x60 [zfs]
May 18 20:39:36 cpve01 kernel:  evict+0xc5/0x1d0
May 18 20:39:36 cpve01 kernel:  iput+0x14b/0x260
May 18 20:39:36 cpve01 kernel:  dentry_unlink_inode+0xd4/0x150
May 18 20:39:36 cpve01 kernel:  __dentry_kill+0x73/0x180
May 18 20:39:36 cpve01 kernel:  dput+0xf2/0x1b0
May 18 20:39:36 cpve01 kernel:  do_renameat2+0x3fd/0x680
May 18 20:39:36 cpve01 kernel:  __x64_sys_rename+0x44/0x60
May 18 20:39:36 cpve01 kernel:  x64_sys_call+0x1f0c/0x24b0
May 18 20:39:36 cpve01 kernel:  do_syscall_64+0x81/0x170
May 18 20:39:36 cpve01 kernel:  ? irqentry_exit+0x43/0x50
May 18 20:39:36 cpve01 kernel:  ? exc_page_fault+0x94/0x1b0
May 18 20:39:36 cpve01 kernel:  entry_SYSCALL_64_after_hwframe+0x78/0x80
May 18 20:39:36 cpve01 kernel: RIP: 0033:0x75173dffca87
May 18 20:39:36 cpve01 kernel: RSP: 002b:00007ffd70e9e198 EFLAGS: 00000202 ORIG_RAX: 0000000000000052
May 18 20:39:36 cpve01 kernel: RAX: ffffffffffffffda RBX: 00005769894d22a0 RCX: 000075173dffca87
May 18 20:39:36 cpve01 kernel: RDX: 0000000000000400 RSI: 000057698d82ac90 RDI: 000057698fd892e0
May 18 20:39:36 cpve01 kernel: RBP: 00005769894d7c88 R08: 0000000000000003 R09: 0000000000000000
May 18 20:39:36 cpve01 kernel: R10: 000075173dfbb508 R11: 0000000000000202 R12: 00005769894d7c90
May 18 20:39:36 cpve01 kernel: R13: 000057698a053ae0 R14: 000057698fd892e0 R15: 000057698d82ac90
May 18 20:39:36 cpve01 kernel:  </TASK>
May 18 20:39:36 cpve01 pvestatd[1806]: got timeout
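A fuller picture usually helps. A sketch of what to collect around the next freeze (standard journalctl/zpool/sysrq commands; assumes persistent journald storage so the previous boot's log survives):

Code:
# Kernel log of the previous (frozen) boot
journalctl -k -b -1 > hung-boot.log

# State of the pool: errors, stuck scrub/resilver, suspended I/O
zpool status -v

# While the hang is happening: dump all blocked (D-state) tasks into the kernel log
echo w > /proc/sysrq-trigger
dmesg | tail -n 200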
 
When updating the kernel from 6.8.4-2-pve to 6.8.4-3-pve the network stops working.

With 6.8.4-3-pve there is no eno1 interface anymore; with 6.8.4-2-pve it works just fine.
NIC is the following:

Code:
00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (Lewisville) (rev 05)
 
When updating the kernel from 6.8.4-2-pve to 6.8.4-3-pve the network stops working.

With 6.8.4-3-pve there is no eno1 interface anymore; with 6.8.4-2-pve it works just fine.
NIC is the following:

Read this: https://pve.proxmox.com/wiki/Network_Configuration, chapter "Overriding network device names".

We never know how future kernels will handle names, but we can pin them.

You can have your own symbolic names, e.g. wan0, lan0, ...
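A minimal sketch of such an override via a systemd .link file, along the lines of the wiki (the file name, the MAC address and the name lan0 are placeholders; adjust /etc/network/interfaces to the new name afterwards and refresh the initramfs with "update-initramfs -u -k all" so the override also applies early in boot):

Code:
# /etc/systemd/network/10-lan0.link   (hypothetical file name)
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=lan0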
 
Check with "ip a" what name it currently has and adjust the config accordingly.
Unfortunately there is no other device.

With 6.8.4-2-pve I see 6 devices using "ip a", including eno1
With 6.8.4-3-pve I see only 5 devices and no replacement for eno1
 
Unfortunately there is no other device.

With 6.8.4-2-pve I see 6 devices using "ip a", including eno1
With 6.8.4-3-pve I see only 5 devices and no replacement for eno1
lspci -> look for something with Ethernet
dmesg -> look for something with the MAC of the card
 
lsmod output would be interesting - you might be able to fix it with an insmod.

Edit: Probably best to sidebar that from >> this << thread.
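For the 82579LM specifically (normally driven by e1000e), a quick check along those lines could look like:

Code:
lspci -nn | grep -i ethernet     # is the NIC still visible on the PCI bus?
dmesg | grep -i e1000e           # did the driver probe, and did it fail?
lsmod | grep e1000e              # is the module loaded at all?
modprobe e1000e                  # try loading it manually if it is missing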
 
Updated to kernel 6.8 a while back and ZFS performance has dropped. The odd thing is that in some ways it feels faster (less io-wait, maybe), but backups take roughly 10% longer to complete and I'd wager throughput is worse. Starting the box and all of the virtual machines also took a lot longer. I have NOT gone through any benchmarking yet.

Using an 8-disk rust-heap draid1:6d:8c:1s with an NVMe SLOG, and a 2x SATA SSD mirror (also with an NVMe SLOG).

Does anyone have any similar observations?

Then another thing: the motherboard is an ASUS Prime X670-P WiFi, and for some reason the power LED is now pulsing like a heartbeat. When was this introduced? I thought the box had gone into hibernation or something.

Also noticed that the only available CPU governors on an AMD Ryzen 7950X are now powersave and performance. I used "cpufreq.default_governor=conservative", which is obviously not available anymore. Googling turned up that there were some kernel changes regarding power governing on AMD (apparently the switch to the amd_pstate driver), but there are no notes about this in the changelogs either. Seems like a major change.

Changelogs seem to work if they work.
Replying to myself:

Did a simple "echo 0 >/sys/module/zfs/parameters/zfs_prefetch_disable".
Seems like something has changed in the behaviour of ZFS.
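If tweaking that parameter turns out to help, it can be made persistent across reboots via modprobe.d; a sketch (the file name is just a convention):

Code:
# /etc/modprobe.d/zfs.conf
options zfs zfs_prefetch_disable=0

# then rebuild the initramfs so the option also applies at early module load:
# update-initramfs -u -k all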
 
Still running into kernel issues occasionally, in this case while stopping a Debian LXC container (fresh install, nothing but Tailscale).
ASRock Z790 Pro RS/Z790 Pro RS
Intel Core i7-14700
ZFS storage is only mounted for some containers; it is used neither on the host nor by the container from this log.

Code:
May 19 17:41:33 pve pct[563259]: <root@pam> starting task UPID:pve:0008983C:007C8826:664A1DAD:vzstop:108:root@pam:
May 19 17:41:33 pve pct[563260]: stopping CT 108: UPID:pve:0008983C:007C8826:664A1DAD:vzstop:108:root@pam:
May 19 17:41:33 pve kernel: general protection fault, probably for non-canonical address 0xfffb9ef747d32180: 0000 [#3] PREEMPT SMP NOPTI
May 19 17:41:33 pve kernel: CPU: 0 PID: 3119 Comm: dbus-daemon Tainted: P      D    O       6.8.4-3-pve #1
May 19 17:41:33 pve kernel: Hardware name: ASRock Z790 Pro RS/Z790 Pro RS, BIOS 11.11 04/09/2024
May 19 17:41:33 pve kernel: RIP: 0010:refill_obj_stock+0x56/0x1c0
May 19 17:41:33 pve kernel: Code: 40 0d 03 00 65 4c 03 25 38 ec 3a 63 49 8b 44 24 10 4c 39 f8 0f 84 ae 00 00 00 4c 89 e7 e8 52 f1 ff ff 49 89 c6 e8 ea 39 d4 ff <49> 8b 07 a8 03 0f 85 fd 00 00 00 65 48 ff 00 e8 56 76 d4 ff 4d 89
May 19 17:41:33 pve kernel: RSP: 0018:ffffab9ae0c27a88 EFLAGS: 00010046
May 19 17:41:33 pve kernel: RAX: 0000000000000000 RBX: 0000000000000030 RCX: 0000000000000000
May 19 17:41:33 pve kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 19 17:41:33 pve kernel: RBP: ffffab9ae0c27ab0 R08: 0000000000000000 R09: 0000000000000000
May 19 17:41:33 pve kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffff9f044ea30d40
May 19 17:41:33 pve kernel: R13: 0000000000000216 R14: ffff9ef747d32180 R15: fffb9ef747d32180
May 19 17:41:33 pve kernel: FS:  0000000000000000(0000) GS:ffff9f044ea00000(0000) knlGS:0000000000000000
May 19 17:41:33 pve kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 19 17:41:33 pve kernel: CR2: 000058a415628000 CR3: 0000000c6c436000 CR4: 0000000000f50ef0
May 19 17:41:33 pve kernel: PKRU: 55555554
May 19 17:41:33 pve kernel: Call Trace:
May 19 17:41:33 pve kernel:  <TASK>
May 19 17:41:33 pve kernel:  ? show_regs+0x6d/0x80
May 19 17:41:33 pve kernel:  ? die_addr+0x37/0xa0
May 19 17:41:33 pve kernel:  ? exc_general_protection+0x1db/0x480
May 19 17:41:33 pve kernel:  ? asm_exc_general_protection+0x27/0x30
May 19 17:41:33 pve kernel:  ? refill_obj_stock+0x56/0x1c0
May 19 17:41:33 pve kernel:  ? refill_obj_stock+0x56/0x1c0
May 19 17:41:33 pve kernel:  obj_cgroup_uncharge+0x13/0x20
May 19 17:41:33 pve kernel:  __memcg_slab_free_hook+0xd2/0x180
May 19 17:41:33 pve kernel:  ? __vm_area_free+0x47/0x80
May 19 17:41:33 pve kernel:  kmem_cache_free+0x36c/0x3f0
May 19 17:41:33 pve kernel:  __vm_area_free+0x47/0x80
May 19 17:41:33 pve kernel:  remove_vma+0x60/0x90
May 19 17:41:33 pve kernel:  exit_mmap+0x1f3/0x3f0
May 19 17:41:33 pve kernel:  __mmput+0x41/0x140
May 19 17:41:33 pve kernel:  mmput+0x31/0x40
May 19 17:41:33 pve kernel:  do_exit+0x324/0xae0
May 19 17:41:33 pve kernel:  ? schedule_hrtimeout_range_clock+0x124/0x130
May 19 17:41:33 pve kernel:  do_group_exit+0x35/0x90
May 19 17:41:33 pve kernel:  get_signal+0xa8d/0xa90
May 19 17:41:33 pve kernel:  arch_do_signal_or_restart+0x42/0x280
May 19 17:41:33 pve kernel:  syscall_exit_to_user_mode+0x206/0x260
May 19 17:41:33 pve kernel:  do_syscall_64+0x8d/0x170
May 19 17:41:33 pve kernel:  ? do_syscall_64+0x8d/0x170
May 19 17:41:33 pve kernel:  ? syscall_exit_to_user_mode+0x86/0x260
May 19 17:41:33 pve kernel:  ? do_syscall_64+0x8d/0x170
May 19 17:41:33 pve kernel:  ? do_syscall_64+0x8d/0x170
May 19 17:41:33 pve kernel:  ? do_syscall_64+0x8d/0x170
May 19 17:41:33 pve kernel:  ? irqentry_exit+0x43/0x50
May 19 17:41:33 pve kernel:  entry_SYSCALL_64_after_hwframe+0x78/0x80
May 19 17:41:33 pve kernel: RIP: 0033:0x760113090de3
May 19 17:41:33 pve kernel: Code: Unable to access opcode bytes at 0x760113090db9.
May 19 17:41:33 pve kernel: RSP: 002b:00007ffd9fe57fb8 EFLAGS: 00000202 ORIG_RAX: 00000000000000e8
May 19 17:41:33 pve kernel: RAX: fffffffffffffffc RBX: 00007ffd9fe58350 RCX: 0000760113090de3
May 19 17:41:33 pve kernel: RDX: 0000000000000040 RSI: 00007ffd9fe57fc0 RDI: 0000000000000004
May 19 17:41:33 pve kernel: RBP: 000062699e9a5b28 R08: 0000000000000000 R09: 0000000000000000
May 19 17:41:33 pve kernel: R10: 00000000ffffffff R11: 0000000000000202 R12: ffffffffffffffff
May 19 17:41:33 pve kernel: R13: 000062699e9c0a60 R14: 0000000000000000 R15: 0000000000000001
May 19 17:41:33 pve kernel:  </TASK>
May 19 17:41:33 pve kernel: Modules linked in: tcp_diag inet_diag xt_conntrack xt_tcpudp xt_mark nft_compat nft_chain_nat cfg80211 veth ebtable_filter ebtables ip_set ip6table_raw iptable_raw xt_MASQUERADE nf_tables ip6table_nat bo>
May 19 17:41:33 pve kernel:  snd_pcm_dmaengine polyval_generic ghash_clmulni_intel sha256_ssse3 sha1_ssse3 snd_hda_intel aesni_intel snd_intel_dspcfg crypto_simd snd_intel_sdw_acpi cryptd snd_hda_codec snd_hda_core drm_buddy ttm sn>
May 19 17:41:33 pve kernel: ---[ end trace 0000000000000000 ]---
May 19 17:41:33 pve kernel: RIP: 0010:zio_done+0x342/0x10b0 [zfs]
May 19 17:41:33 pve kernel: Code: a0 e9 f1 00 00 00 49 8b b7 30 01 00 00 48 8b 45 c0 4c 8b 24 32 49 39 c4 0f 84 3c 02 00 00 49 29 f4 4d 85 e4 0f 84 16 0c 00 00 <4d> 8b 34 24 4c 89 fe 4c 89 ef e8 3f 68 ff ff 49 8d 85 b0 03 00 00
May 19 17:41:33 pve kernel: RSP: 0018:ffffab9acf3a3d28 EFLAGS: 00010286
May 19 17:41:33 pve kernel: RAX: ffff9ef7b1acae38 RBX: ffff9ef7b1acb0d0 RCX: ffff9ef7b1acae38
May 19 17:41:33 pve kernel: RDX: ffff9ef7b8b1b450 RSI: 0000000000000010 RDI: 0000000000000000
May 19 17:41:33 pve kernel: RBP: ffffab9acf3a3d88 R08: 0000000000000000 R09: 0000000000000000
May 19 17:41:33 pve kernel: R10: 0000000000000000 R11: 0000000000000000 R12: fffb9ef7b1acae28
May 19 17:41:33 pve kernel: R13: ffff9ef7aababc00 R14: ffff9ef7b1acb0b0 R15: ffff9ef7b1acad00
May 19 17:41:33 pve kernel: FS:  0000000000000000(0000) GS:ffff9f044ea00000(0000) knlGS:0000000000000000
May 19 17:41:33 pve kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 19 17:41:33 pve kernel: CR2: 000058a415628000 CR3: 0000000345e3e000 CR4: 0000000000f50ef0
May 19 17:41:33 pve kernel: PKRU: 55555554
May 19 17:41:33 pve kernel: note: dbus-daemon[3119] exited with irqs disabled
May 19 17:41:33 pve kernel: Fixing recursive fault but reboot is needed!
May 19 17:41:33 pve kernel: BUG: scheduling while atomic: dbus-daemon/3119/0x00000000
May 19 17:41:33 pve kernel: Modules linked in: tcp_diag inet_diag xt_conntrack xt_tcpudp xt_mark nft_compat nft_chain_nat cfg80211 veth ebtable_filter ebtables ip_set ip6table_raw iptable_raw xt_MASQUERADE nf_tables ip6table_nat bo>
May 19 17:41:33 pve kernel:  snd_pcm_dmaengine polyval_generic ghash_clmulni_intel sha256_ssse3 sha1_ssse3 snd_hda_intel aesni_intel snd_intel_dspcfg crypto_simd snd_intel_sdw_acpi cryptd snd_hda_codec snd_hda_core drm_buddy ttm sn>
May 19 17:41:33 pve kernel: CPU: 0 PID: 3119 Comm: dbus-daemon Tainted: P      D    O       6.8.4-3-pve #1
May 19 17:41:33 pve kernel: Hardware name: ASRock Z790 Pro RS/Z790 Pro RS, BIOS 11.11 04/09/2024
May 19 17:41:33 pve kernel: Call Trace:
May 19 17:41:33 pve kernel:  <TASK>
May 19 17:41:33 pve kernel:  dump_stack_lvl+0x48/0x70
May 19 17:41:33 pve kernel:  dump_stack+0x10/0x20
May 19 17:41:33 pve kernel:  __schedule_bug+0x64/0x80
May 19 17:41:33 pve kernel:  __schedule+0x10f1/0x15e0
May 19 17:41:33 pve kernel:  ? vprintk+0x42/0x80
May 19 17:41:33 pve kernel:  ? _printk+0x60/0x90
May 19 17:41:33 pve kernel:  do_task_dead+0x44/0x50
May 19 17:41:33 pve kernel:  make_task_dead+0x14c/0x170
May 19 17:41:33 pve kernel:  rewind_stack_and_make_dead+0x17/0x20
May 19 17:41:33 pve kernel: RIP: 0033:0x760113090de3
May 19 17:41:33 pve kernel: Code: Unable to access opcode bytes at 0x760113090db9.
May 19 17:41:33 pve kernel: RSP: 002b:00007ffd9fe57fb8 EFLAGS: 00000202 ORIG_RAX: 00000000000000e8
May 19 17:41:33 pve kernel: RAX: fffffffffffffffc RBX: 00007ffd9fe58350 RCX: 0000760113090de3
May 19 17:41:33 pve kernel: RDX: 0000000000000040 RSI: 00007ffd9fe57fc0 RDI: 0000000000000004
May 19 17:41:33 pve kernel: RBP: 000062699e9a5b28 R08: 0000000000000000 R09: 0000000000000000
May 19 17:41:33 pve kernel: R10: 00000000ffffffff R11: 0000000000000202 R12: ffffffffffffffff
May 19 17:41:33 pve kernel: R13: 000062699e9c0a60 R14: 0000000000000000 R15: 0000000000000001
May 19 17:41:33 pve kernel:  </TASK>
May 19 17:41:33 pve kernel: ------------[ cut here ]------------
May 19 17:41:33 pve kernel: Voluntary context switch within RCU read-side critical section!
May 19 17:41:33 pve kernel: WARNING: CPU: 0 PID: 3119 at kernel/rcu/tree_plugin.h:320 rcu_note_context_switch+0x46f/0x590
May 19 17:41:33 pve kernel: Modules linked in: tcp_diag inet_diag xt_conntrack xt_tcpudp xt_mark nft_compat nft_chain_nat cfg80211 veth ebtable_filter ebtables ip_set ip6table_raw iptable_raw xt_MASQUERADE nf_tables ip6table_nat bo>
May 19 17:41:33 pve kernel:  snd_pcm_dmaengine polyval_generic ghash_clmulni_intel sha256_ssse3 sha1_ssse3 snd_hda_intel aesni_intel snd_intel_dspcfg crypto_simd snd_intel_sdw_acpi cryptd snd_hda_codec snd_hda_core drm_buddy ttm sn>
May 19 17:41:33 pve kernel: CPU: 0 PID: 3119 Comm: dbus-daemon Tainted: P      D W  O       6.8.4-3-pve #1
May 19 17:41:33 pve kernel: Hardware name: ASRock Z790 Pro RS/Z790 Pro RS, BIOS 11.11 04/09/2024
May 19 17:41:33 pve kernel: RIP: 0010:rcu_note_context_switch+0x46f/0x590
May 19 17:41:33 pve kernel: Code: 00 00 49 8b 45 08 49 39 45 18 0f 85 3f fe ff ff 0f 0b e9 38 fe ff ff 48 c7 c7 a8 ea 18 9e c6 05 9c d2 37 02 01 e8 d1 24 f3 ff <0f> 0b e9 f5 fb ff ff a9 ff ff ff 7f 0f 84 9d fc ff ff 65 48 8b 3c
May 19 17:41:33 pve kernel: RSP: 0018:ffffab9ae0c27e28 EFLAGS: 00010046
May 19 17:41:33 pve kernel: RAX: 0000000000000000 RBX: ffff9f044ea35a40 RCX: 0000000000000000
May 19 17:41:33 pve kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
May 19 17:41:33 pve kernel: RBP: ffffab9ae0c27e48 R08: 0000000000000000 R09: 0000000000000000
May 19 17:41:33 pve kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
May 19 17:41:33 pve kernel: R13: 0000000000000000 R14: ffff9ef7405da900 R15: 0000000000000000
May 19 17:41:33 pve kernel: FS:  0000000000000000(0000) GS:ffff9f044ea00000(0000) knlGS:0000000000000000
May 19 17:41:33 pve kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
May 19 17:41:33 pve kernel: CR2: 000058a415628000 CR3: 0000000345e3e000 CR4: 0000000000f50ef0
May 19 17:41:33 pve kernel: PKRU: 55555554
May 19 17:41:33 pve kernel: Call Trace:
May 19 17:41:33 pve kernel:  <TASK>
May 19 17:41:33 pve kernel:  ? show_regs+0x6d/0x80
May 19 17:41:33 pve kernel:  ? __warn+0x89/0x160
May 19 17:41:33 pve kernel:  ? rcu_note_context_switch+0x46f/0x590
May 19 17:41:33 pve kernel:  ? report_bug+0x17e/0x1b0
May 19 17:41:33 pve kernel:  ? handle_bug+0x46/0x90
May 19 17:41:33 pve kernel:  ? exc_invalid_op+0x18/0x80
May 19 17:41:33 pve kernel:  ? asm_exc_invalid_op+0x1b/0x20
May 19 17:41:33 pve kernel:  ? rcu_note_context_switch+0x46f/0x590
May 19 17:41:33 pve kernel:  ? rcu_note_context_switch+0x46f/0x590
May 19 17:41:33 pve kernel:  __schedule+0xbe/0x15e0
May 19 17:41:33 pve kernel:  ? vprintk+0x42/0x80
May 19 17:41:33 pve kernel:  ? _printk+0x60/0x90
May 19 17:41:33 pve kernel:  do_task_dead+0x44/0x50
May 19 17:41:33 pve kernel:  make_task_dead+0x14c/0x170
May 19 17:41:33 pve kernel:  rewind_stack_and_make_dead+0x17/0x20
May 19 17:41:33 pve kernel: RIP: 0033:0x760113090de3
May 19 17:41:33 pve kernel: Code: Unable to access opcode bytes at 0x760113090db9.
May 19 17:41:33 pve kernel: RSP: 002b:00007ffd9fe57fb8 EFLAGS: 00000202 ORIG_RAX: 00000000000000e8
May 19 17:41:33 pve kernel: RAX: fffffffffffffffc RBX: 00007ffd9fe58350 RCX: 0000760113090de3
May 19 17:41:33 pve kernel: RDX: 0000000000000040 RSI: 00007ffd9fe57fc0 RDI: 0000000000000004
May 19 17:41:33 pve kernel: RBP: 000062699e9a5b28 R08: 0000000000000000 R09: 0000000000000000
May 19 17:41:33 pve kernel: R10: 00000000ffffffff R11: 0000000000000202 R12: ffffffffffffffff
May 19 17:41:33 pve kernel: R13: 000062699e9c0a60 R14: 0000000000000000 R15: 0000000000000001
May 19 17:41:33 pve kernel:  </TASK>
May 19 17:41:33 pve kernel: ---[ end trace 0000000000000000 ]---
 
Still running into kernel issues occasionally, in this case while stopping a Debian LXC container (fresh install, nothing but Tailscale).

I know what the problem is. I can fix it :) Nobody wants to help.

So that's where we are - three weeks after 8.2.2.
 