Opt-in Linux Kernel 5.15 for Proxmox VE 7.x available

As the Proxmox 5.13 kernel is also affected by this Mellanox issue, it would be great if the fix could be backported to the Proxmox 5.13 kernel as well.
Sure, I'll try to keep that backport request in mind for the next 5.13 bump. The next 5.15-based kernel bump should catch up with upstream releases again and will thus get it automatically anyway.
 
I'll leave my warning here in case anyone tries to do this configuration.

OCFS2 with more than one node does not work on Proxmox 7.1.2 with the 5.13 and 5.15 kernels. OCFS2 is a shared cluster filesystem, so running it on only one node makes no sense; I consider it effectively unusable at the moment.

This is because the ocfs2-tools package shipped with Proxmox is version 1.8.6, and something changed in kernels 5.13 (stable) and 5.15 that stopped OCFS2 from working with it. When I installed ocfs2-tools 1.8.7 (from Debian testing), the filesystem worked correctly again.

How to reproduce the problem:
1 - configure OCFS2 on the volume;
2 - mount the volume with the "mount" command on the first node;
3 - mount the volume with the "mount" command on the second node (at this point the mount hangs: there is no response, no failure messages appear in the syslog, and the client is stuck waiting for a mount that never completes).
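As a sketch in command form (the device, mount point and slot count are placeholders, and this assumes an already configured O2CB cluster):

```shell
# Node 1 only: format the shared volume (placeholders: /dev/sdb, 4 node slots)
mkfs.ocfs2 -N 4 -L shared01 /dev/sdb

# Both nodes: make sure the cluster stack is up
systemctl start o2cb

# Node 1: this mount succeeds
mount -t ocfs2 /dev/sdb /mnt/shared

# Node 2: on the affected kernel/tools combinations this hangs forever
mount -t ocfs2 /dev/sdb /mnt/shared
```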

Tested kernels to help developers:

4.9.0-13-amd64 (stretch stable) + ocfs2-tools 1.8.6 (stable) = works
4.9.0-13-amd64 (stretch stable) + ocfs2-tools 1.8.7 (sid) = works

5.10.0-11-amd64 (bullseye stable) + ocfs2-tools 1.8.6 (stable) = DOES NOT WORK
5.10.0-11-amd64 (bullseye stable) + ocfs2-tools 1.8.7 (sid) = works
pve-kernel-5.10.6-1-pve (proxmox stable) + ocfs2-tools 1.8.6 (stable) = works
pve-kernel-5.10.6-1-pve (proxmox stable) + ocfs2-tools 1.8.7 (sid) = works

pve-kernel-5.13.19-4-pve (proxmox stable) + ocfs2-tools 1.8.6 (stable) = DOES NOT WORK
pve-kernel-5.13.19-4-pve (proxmox stable) + ocfs2-tools 1.8.7 (sid) = works

5.14.0-0.bpo.2-amd64 (bullseye backports) + ocfs2-tools 1.8.7 (sid) = DOES NOT WORK

5.15.0-0.bpo.3-amd64 (bullseye backports) + ocfs2-tools 1.8.6 (stable) = DOES NOT WORK
5.15.0-0.bpo.3-amd64 (bullseye backports) + ocfs2-tools 1.8.7 (sid) = DOES NOT WORK
pve-kernel-5.15.19-2-pve (proxmox testing) + ocfs2-tools 1.8.6 (stable) = DOES NOT WORK
pve-kernel-5.15.19-2-pve (proxmox testing) + ocfs2-tools 1.8.7 (sid) = DOES NOT WORK

5.17-rc5-amd64 (debian experimental) + ocfs2-tools 1.8.6 (stable) = DOES NOT WORK
5.17-rc5-amd64 (debian experimental) + ocfs2-tools 1.8.7 (sid) = DOES NOT WORK

There are three possible ways to work around the problem on Proxmox:
1 - make ocfs2-tools 1.8.7 available in the distribution by default; or
2 - fix the ocfs2-tools bug so it supports the new kernels; or
3 - have the user install ocfs2-tools from the Debian "testing" release.

Hope this helps anyone who hits the same problem. It would be good to report this to the OCFS2 developers or to Debian, but I've never done that and don't know how; if anyone knows the process, help with that would be appreciated.
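For anyone who wants to go route 3 without moving the whole host to Debian testing, a minimal apt-pinning sketch (the file names and the priority value are my own choices, adjust as needed):

```shell
# Add the testing repo alongside bullseye
echo 'deb http://deb.debian.org/debian testing main' \
    > /etc/apt/sources.list.d/testing.list

# Keep testing at low priority so only explicitly requested packages come from it
cat > /etc/apt/preferences.d/testing <<'EOF'
Package: *
Pin: release a=testing
Pin-Priority: 100
EOF

apt update
apt install -t testing ocfs2-tools
```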
 
I have been trying kernel 5.15.19-2 and I still get the "BAR 0: can't reserve memory" error. When I cat /proc/iomem I can see that "BOOTFB" has taken the memory region the driver is trying to reserve. I am using kernel 5.11.22-7-pve, which works fine except that certain things fail in the Windows guest. The command line I am using is:
Code:
BOOT_IMAGE=/boot/vmlinuz-5.11.22-7-pve root=/dev/mapper/pve-root ro quiet quiet amd_iommu=on iommu=pt video=simplefb:off

I can gather more information if needed.
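For reference, the read-only checks I use to see whether BOOTFB is still holding the region:

```shell
# Show the effective kernel command line (is video=simplefb:off actually applied?)
cat /proc/cmdline

# Look for a BOOTFB reservation in the physical memory map
grep -i bootfb /proc/iomem || echo "no BOOTFB entry found"
```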
https://github.com/furkanmustafa/forcefully-remove-bootfb

See this.

Up to 5.15.7-1 (or a few versions later), where the kernel uses efifb, the memory can be released with video=efifb:off. But newer kernels replaced efifb with simplefb, and even with video=simplefb:off and the simplefb driver unloaded, BOOTFB still occupies the memory region.

So I'm still using 5.15.7-1 until the force-remove-fb patch is in place.
 
This is because the ocfs2-tools package shipped with Proxmox is version 1.8.6, and something changed in kernels 5.13 (stable) and 5.15 that stopped OCFS2 from working with it. When I installed ocfs2-tools 1.8.7 (from Debian testing), the filesystem worked correctly again.
Thanks for the hint! We backported and clean-rebuilt ocfs2-tools as 1.8.7-1~bpo11+1; it is currently available in the pvetest repository.
 
On a host running the no-subscription repo with kernel 5.15, I see the following message:
Code:
kernel: Unknown kernel command line parameters "BOOT_IMAGE=/vmlinuz-5.15.19-2-pve boot=zfs", will be passed to user space.
This host uses GRUB with proxmox-boot-tool.
I found this commit [1] and a related post [2].
So this seems to be just a warning. The commit appears to have been introduced with kernel 5.14; on a host running the pve-enterprise repo with kernel 5.13 this message doesn't appear in the log.
Could someone please verify that this is just a warning message and there's nothing to worry about?

@t.lamprecht do you have any thoughts about this?

[1] https://git.kernel.org/pub/scm/linu.../?id=86d1919a4fb0d9c115dd1d3b969f5d1650e45408
[2] https://bbs.archlinux.org/viewtopic.php?id=269637
 
https://github.com/furkanmustafa/forcefully-remove-bootfb

See this.

Up to 5.15.7-1 (or a few versions later), where the kernel uses efifb, the memory can be released with video=efifb:off. But newer kernels replaced efifb with simplefb, and even with video=simplefb:off and the simplefb driver unloaded, BOOTFB still occupies the memory region.

So I'm still using 5.15.7-1 until the force-remove-fb patch is in place.
I was never able to get the forcefully-remove-bootfb tool to compile. Not 100% sure what I did wrong. But what you describe is exactly what I am facing.

Thanks.
 
Do I have to have a subscription to install the 5.15 kernel?

I'm trying to install it from shell:
Code:
apt update && apt install pve-kernel-5.15

But it gives me these errors
Code:
E: Failed to fetch https://enterprise.proxmox.com/debian/pve/dists/bullseye/InRelease  401  Unauthorized [IP: 51.91.38.34 443]
E: The repository 'https://enterprise.proxmox.com/debian/pve bullseye InRelease' is no longer signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
 
These are my files.

/etc/apt/sources.list
Code:
deb http://ftp.it.debian.org/debian bullseye main contrib

deb http://ftp.it.debian.org/debian bullseye-updates main contrib

# security updates
deb http://security.debian.org bullseye-security main contrib

/etc/apt/sources.list.d/pve-enterprise.list

Code:
deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise

They look like the defaults to me, but I still get the errors shown above.
 
These are my files.

/etc/apt/sources.list
Code:
deb http://ftp.it.debian.org/debian bullseye main contrib

deb http://ftp.it.debian.org/debian bullseye-updates main contrib

# security updates
deb http://security.debian.org bullseye-security main contrib

/etc/apt/sources.list.d/pve-enterprise.list

Code:
deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise

They look like the defaults to me, but I still get the errors shown above.
Read the link @t.lamprecht provided.

Without a subscription, you need to disable the default enterprise repository and enable the no-subscription repository. This can be done in the web GUI or via the CLI.

https://pve.proxmox.com/wiki/Package_Repositories
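Via the CLI, the switch boils down to something like the following sketch (default file paths assumed; double-check against the wiki link above):

```shell
# Disable the enterprise repo (it requires a subscription key)
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list

# Enable the no-subscription repo
echo 'deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription' \
    > /etc/apt/sources.list.d/pve-no-subscription.list

apt update
apt install pve-kernel-5.15
```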
 
I installed and tried out the v5.15 kernel recently (pve-kernel-5.15.19-2-pve and pve-headers-5.15.19-2-pve). After installing it, PCI passthrough stopped working, even though on 5.13 it works perfectly.

My config: PVE v7.1-10, Ryzen 3600, Asus B450-F (BIOS version 4602), 32GB DDR4 3200MHz, Micron 5300 Pro (Boot), Samsung PM983 (VM & LXC)
 
With 5.15, nested virtualization is not stable: VMs randomly freeze and their reported CPU usage rises to 100%.

I'm on AMD EPYC 7402P.
 
With 5.15, nested virtualization is not stable: VMs randomly freeze and their reported CPU usage rises to 100%.
Do you have the latest microcode, BIOS and firmware updates installed? What's the level 1 hypervisor OS?
The VM config would be interesting too. Did you check both the level 0 and level 1 hypervisors' logs for errors?

FWIW, nested PVE instances work well here under the 5.15 kernel on a broad range of hardware (Intel and AMD).
 
This kernel does not work with a Dell PowerEdge 2950 III with dual Xeon E5420.

The same type of modification as on kernel 5.13 seems to be required:
https://forum.proxmox.com/threads/updated-to-7-1-and-having-boot-issues.99908/

To be able to boot, I had to disable ACPI.
Yeah, it seems that did not get magically fixed, and support for hardware that's more than a decade old (>= 15 years in this case) tends to have a higher chance of breaking over time. FWIW, the 5.13 kernel worked here on a platform with an Intel Q6600 (roughly the same era); I haven't checked 5.15 on it yet, as that machine only gets powered on for specific tests (it eats far too much power otherwise). So this still seems Dell-specific. I'd recommend contacting their support (if it still exists for that product) or keeping ACPI disabled.
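For reference, making the ACPI workaround persistent looks roughly like this on a GRUB-booted host (a sketch; only apply it if the machine really needs it):

```shell
# /etc/default/grub: add acpi=off to the default command line, e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet acpi=off"

# Then regenerate the boot configuration:
update-grub
# or, on hosts that boot via proxmox-boot-tool:
proxmox-boot-tool refresh
```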
 
Do you have the latest microcode, BIOS and firmware updates installed?
I can try to update the firmware.
It hasn't given me any problems until now, so I've never updated it.

What's the level 1 hypervisor OS?
Hyper-V on Windows 11 Pro.
I tried Docker, BlueStacks, VirtualBox and Hyper-V itself in four different virtual machines.

All of them work properly the first day, then they start to freeze.

The VM config would be interesting too.
I deleted everything, I can try to recreate it next week.

Did you check both the level 0 and level 1 hypervisors' logs for errors?
I didn't find anything in the Windows logs.
Where can I find the Proxmox logs? The syslog showed nothing except that the machines were not responding to internal pings.
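On the Proxmox side, the usual places to look are (a quick sketch; paths as on a standard PVE 7 install):

```shell
# Kernel messages for the current boot (hypervisor side)
journalctl -k -b --no-pager | tail -n 50

# Full journal for the current boot
journalctl -b --no-pager | tail -n 50

# Classic syslog file and PVE per-task logs
tail -n 50 /var/log/syslog
ls /var/log/pve/tasks/
```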
 
Please update the microcode. I also had PSODs on ESXi 7U3 with an old BIOS on EPYC Rome and Milan; after the updates everything runs fine.
 
Kernel version 5.15.27-1-pve introduced a bug for network cards based on the Aquantia Atlantic chipset.

Code:
Mar 19 16:31:12 pve-1 kernel: ================================================================================
Mar 19 16:31:12 pve-1 kernel: UBSAN: array-index-out-of-bounds in drivers/net/ethernet/aquantia/atlantic/aq_nic.c:484:48
Mar 19 16:31:12 pve-1 kernel: index 8 is out of range for type 'aq_vec_s *[8]'
Mar 19 16:31:12 pve-1 kernel: CPU: 3 PID: 1524 Comm: ip Tainted: P          IO      5.15.27-1-pve #1
Mar 19 16:31:12 pve-1 kernel: Hardware name: Intel(R) Client Systems NUC6i7KYK/NUC6i7KYB, BIOS KYSKLi70.86A.0074.2021.1029.0102 10/29/2021
Mar 19 16:31:12 pve-1 kernel: Call Trace:
Mar 19 16:31:12 pve-1 kernel:  <TASK>
Mar 19 16:31:12 pve-1 kernel:  dump_stack_lvl+0x4a/0x5f
Mar 19 16:31:12 pve-1 kernel:  dump_stack+0x10/0x12
Mar 19 16:31:12 pve-1 kernel:  ubsan_epilogue+0x9/0x45
Mar 19 16:31:12 pve-1 kernel:  __ubsan_handle_out_of_bounds.cold+0x44/0x49
Mar 19 16:31:12 pve-1 kernel:  ? aq_vec_start+0x94/0xb0 [atlantic]
Mar 19 16:31:12 pve-1 kernel:  aq_nic_start+0x3af/0x3d0 [atlantic]
Mar 19 16:31:12 pve-1 kernel:  aq_ndev_open+0x49/0x70 [atlantic]
Mar 19 16:31:12 pve-1 kernel:  __dev_open+0xf3/0x1c0
Mar 19 16:31:12 pve-1 kernel:  __dev_change_flags+0x1a3/0x220
Mar 19 16:31:12 pve-1 kernel:  dev_change_flags+0x26/0x60
Mar 19 16:31:12 pve-1 kernel:  do_setlink+0x2a3/0x1090
Mar 19 16:31:12 pve-1 kernel:  ? __nla_validate_parse+0x5b/0xca0
Mar 19 16:31:12 pve-1 kernel:  ? nla_put_ifalias+0x38/0xa0
Mar 19 16:31:12 pve-1 kernel:  ? kernel_init_free_pages.part.0+0x4a/0x60
Mar 19 16:31:12 pve-1 kernel:  ? get_page_from_freelist+0xc1a/0x1130
Mar 19 16:31:12 pve-1 kernel:  ? __nla_reserve+0x45/0x60
Mar 19 16:31:12 pve-1 kernel:  __rtnl_newlink+0x618/0xa20
Mar 19 16:31:12 pve-1 kernel:  ? netlink_deliver_tap+0x3d/0x220
Mar 19 16:31:12 pve-1 kernel:  ? skb_queue_tail+0x48/0x50
Mar 19 16:31:12 pve-1 kernel:  ? sock_def_readable+0x4b/0x80
Mar 19 16:31:12 pve-1 kernel:  ? netlink_unicast+0x2f8/0x330
Mar 19 16:31:12 pve-1 kernel:  ? rtnl_getlink+0x3a6/0x430
Mar 19 16:31:12 pve-1 kernel:  ? kmem_cache_alloc_trace+0x19e/0x2e0
Mar 19 16:31:12 pve-1 kernel:  rtnl_newlink+0x49/0x70
Mar 19 16:31:12 pve-1 kernel:  rtnetlink_rcv_msg+0x160/0x410
Mar 19 16:31:12 pve-1 kernel:  ? skb_free_head+0x67/0x80
Mar 19 16:31:12 pve-1 kernel:  ? rtnl_calcit.isra.0+0x130/0x130
Mar 19 16:31:12 pve-1 kernel:  netlink_rcv_skb+0x55/0x100
Mar 19 16:31:12 pve-1 kernel:  rtnetlink_rcv+0x15/0x20
Mar 19 16:31:12 pve-1 kernel:  netlink_unicast+0x221/0x330
Mar 19 16:31:12 pve-1 kernel:  netlink_sendmsg+0x23f/0x4a0
Mar 19 16:31:12 pve-1 kernel:  sock_sendmsg+0x65/0x70
Mar 19 16:31:12 pve-1 kernel:  ____sys_sendmsg+0x257/0x2a0
Mar 19 16:31:12 pve-1 kernel:  ? import_iovec+0x31/0x40
Mar 19 16:31:12 pve-1 kernel:  ? sendmsg_copy_msghdr+0x7e/0xa0
Mar 19 16:31:12 pve-1 kernel:  ___sys_sendmsg+0x82/0xc0
Mar 19 16:31:12 pve-1 kernel:  ? kmem_cache_free+0x24a/0x290
Mar 19 16:31:12 pve-1 kernel:  ? dentry_free+0x37/0x70
Mar 19 16:31:12 pve-1 kernel:  ? kmem_cache_free+0x24a/0x290
Mar 19 16:31:12 pve-1 kernel:  ? call_rcu+0xa8/0x280
Mar 19 16:31:12 pve-1 kernel:  ? __fput+0x123/0x260
Mar 19 16:31:12 pve-1 kernel:  __sys_sendmsg+0x62/0xb0
Mar 19 16:31:12 pve-1 kernel:  __x64_sys_sendmsg+0x1f/0x30
Mar 19 16:31:12 pve-1 kernel:  do_syscall_64+0x5c/0xc0
Mar 19 16:31:12 pve-1 kernel:  ? irqentry_exit+0x19/0x30
Mar 19 16:31:12 pve-1 kernel:  ? exc_page_fault+0x89/0x160
Mar 19 16:31:12 pve-1 kernel:  ? asm_exc_page_fault+0x8/0x30
Mar 19 16:31:12 pve-1 kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xae
Mar 19 16:31:12 pve-1 kernel: RIP: 0033:0x7f14046772c3
Mar 19 16:31:12 pve-1 kernel: Code: 64 89 02 48 c7 c0 ff ff ff ff eb b7 66 2e 0f 1f 84 00 00 00 00 00 90 64 8b 04 25 18 00 00 00 85 c0 75 14 b8 2e 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 55 c3 0f 1f 40 00 48 83 ec 28 89 54 24 1c 48
Mar 19 16:31:12 pve-1 kernel: RSP: 002b:00007ffed8836428 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
Mar 19 16:31:12 pve-1 kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f14046772c3
Mar 19 16:31:12 pve-1 kernel: RDX: 0000000000000000 RSI: 00007ffed8836490 RDI: 0000000000000003
Mar 19 16:31:12 pve-1 kernel: RBP: 000000006235f740 R08: 0000000000000001 R09: 00007f1404736be0
Mar 19 16:31:12 pve-1 kernel: R10: 0000000000000076 R11: 0000000000000246 R12: 0000000000000001
Mar 19 16:31:12 pve-1 kernel: R13: 00007ffed8836560 R14: 0000000000000000 R15: 000055b3e7483020
Mar 19 16:31:12 pve-1 kernel:  </TASK>
Mar 19 16:31:12 pve-1 kernel: ================================================================================
Mar 19 16:31:12 pve-1 kernel: ================================================================================
Mar 19 16:31:12 pve-1 kernel: UBSAN: array-index-out-of-bounds in drivers/net/ethernet/aquantia/atlantic/aq_nic.c:515:49
Mar 19 16:31:12 pve-1 kernel: index 8 is out of range for type 'aq_vec_s *[8]'
Mar 19 16:31:12 pve-1 kernel: CPU: 3 PID: 1524 Comm: ip Tainted: P          IO      5.15.27-1-pve #1
Mar 19 16:31:12 pve-1 kernel: Hardware name: Intel(R) Client Systems NUC6i7KYK/NUC6i7KYB, BIOS KYSKLi70.86A.0074.2021.1029.0102 10/29/2021
Mar 19 16:31:12 pve-1 kernel: Call Trace:
Mar 19 16:31:12 pve-1 kernel:  <TASK>
Mar 19 16:31:12 pve-1 kernel:  dump_stack_lvl+0x4a/0x5f
Mar 19 16:31:12 pve-1 kernel:  dump_stack+0x10/0x12
Mar 19 16:31:12 pve-1 kernel:  ubsan_epilogue+0x9/0x45
Mar 19 16:31:12 pve-1 kernel:  __ubsan_handle_out_of_bounds.cold+0x44/0x49
Mar 19 16:31:12 pve-1 kernel:  ? aq_vec_ring_free+0x80/0x80 [atlantic]
Mar 19 16:31:12 pve-1 kernel:  aq_nic_start+0x3c3/0x3d0 [atlantic]
Mar 19 16:31:12 pve-1 kernel:  aq_ndev_open+0x49/0x70 [atlantic]
Mar 19 16:31:12 pve-1 kernel:  __dev_open+0xf3/0x1c0
Mar 19 16:31:12 pve-1 kernel:  __dev_change_flags+0x1a3/0x220
Mar 19 16:31:12 pve-1 kernel:  dev_change_flags+0x26/0x60
Mar 19 16:31:12 pve-1 kernel:  do_setlink+0x2a3/0x1090
Mar 19 16:31:12 pve-1 kernel:  ? __nla_validate_parse+0x5b/0xca0
Mar 19 16:31:12 pve-1 kernel:  ? nla_put_ifalias+0x38/0xa0
Mar 19 16:31:12 pve-1 kernel:  ? kernel_init_free_pages.part.0+0x4a/0x60
Mar 19 16:31:12 pve-1 kernel:  ? get_page_from_freelist+0xc1a/0x1130
Mar 19 16:31:12 pve-1 kernel:  ? __nla_reserve+0x45/0x60
Mar 19 16:31:12 pve-1 kernel:  __rtnl_newlink+0x618/0xa20
Mar 19 16:31:12 pve-1 kernel:  ? netlink_deliver_tap+0x3d/0x220
Mar 19 16:31:12 pve-1 kernel:  ? skb_queue_tail+0x48/0x50
Mar 19 16:31:12 pve-1 kernel:  ? sock_def_readable+0x4b/0x80
Mar 19 16:31:12 pve-1 kernel:  ? netlink_unicast+0x2f8/0x330
Mar 19 16:31:12 pve-1 kernel:  ? rtnl_getlink+0x3a6/0x430
Mar 19 16:31:12 pve-1 kernel:  ? kmem_cache_alloc_trace+0x19e/0x2e0
Mar 19 16:31:12 pve-1 kernel:  rtnl_newlink+0x49/0x70
Mar 19 16:31:12 pve-1 kernel:  rtnetlink_rcv_msg+0x160/0x410
Mar 19 16:31:12 pve-1 kernel:  ? skb_free_head+0x67/0x80
Mar 19 16:31:12 pve-1 kernel:  ? rtnl_calcit.isra.0+0x130/0x130
Mar 19 16:31:12 pve-1 kernel:  netlink_rcv_skb+0x55/0x100
Mar 19 16:31:12 pve-1 kernel:  rtnetlink_rcv+0x15/0x20
Mar 19 16:31:12 pve-1 kernel:  netlink_unicast+0x221/0x330
Mar 19 16:31:12 pve-1 kernel:  netlink_sendmsg+0x23f/0x4a0
Mar 19 16:31:12 pve-1 kernel:  sock_sendmsg+0x65/0x70
Mar 19 16:31:12 pve-1 kernel:  ____sys_sendmsg+0x257/0x2a0
Mar 19 16:31:12 pve-1 kernel:  ? import_iovec+0x31/0x40
Mar 19 16:31:12 pve-1 kernel:  ? sendmsg_copy_msghdr+0x7e/0xa0
Mar 19 16:31:12 pve-1 kernel:  ___sys_sendmsg+0x82/0xc0
Mar 19 16:31:12 pve-1 kernel:  ? kmem_cache_free+0x24a/0x290
Mar 19 16:31:12 pve-1 kernel:  ? dentry_free+0x37/0x70
Mar 19 16:31:12 pve-1 kernel:  ? kmem_cache_free+0x24a/0x290
Mar 19 16:31:12 pve-1 kernel:  ? call_rcu+0xa8/0x280
Mar 19 16:31:12 pve-1 kernel:  ? __fput+0x123/0x260
Mar 19 16:31:12 pve-1 kernel:  __sys_sendmsg+0x62/0xb0
Mar 19 16:31:12 pve-1 kernel:  __x64_sys_sendmsg+0x1f/0x30
Mar 19 16:31:12 pve-1 kernel:  do_syscall_64+0x5c/0xc0
Mar 19 16:31:12 pve-1 kernel:  ? irqentry_exit+0x19/0x30
Mar 19 16:31:12 pve-1 kernel:  ? exc_page_fault+0x89/0x160
Mar 19 16:31:12 pve-1 kernel:  ? asm_exc_page_fault+0x8/0x30
Mar 19 16:31:12 pve-1 kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xae
Mar 19 16:31:12 pve-1 kernel: RIP: 0033:0x7f14046772c3
Mar 19 16:31:12 pve-1 kernel: Code: 64 89 02 48 c7 c0 ff ff ff ff eb b7 66 2e 0f 1f 84 00 00 00 00 00 90 64 8b 04 25 18 00 00 00 85 c0 75 14 b8 2e 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 55 c3 0f 1f 40 00 48 83 ec 28 89 54 24 1c 48
Mar 19 16:31:12 pve-1 kernel: RSP: 002b:00007ffed8836428 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
Mar 19 16:31:12 pve-1 kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f14046772c3
Mar 19 16:31:12 pve-1 kernel: RDX: 0000000000000000 RSI: 00007ffed8836490 RDI: 0000000000000003
Mar 19 16:31:12 pve-1 kernel: RBP: 000000006235f740 R08: 0000000000000001 R09: 00007f1404736be0
Mar 19 16:31:12 pve-1 kernel: R10: 0000000000000076 R11: 0000000000000246 R12: 0000000000000001
Mar 19 16:31:12 pve-1 kernel: R13: 00007ffed8836560 R14: 0000000000000000 R15: 000055b3e7483020
Mar 19 16:31:12 pve-1 kernel:  </TASK>
Mar 19 16:31:12 pve-1 kernel: ================================================================================

I found the following thread about this error: https://lkml.org/lkml/2022/3/4/6
Version 5.13 of the kernel did not have this problem.
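For anyone unsure whether a box is affected, two read-only checks (they just pattern-match on the UBSAN line quoted above; nothing here changes state):

```shell
# Is an Aquantia NIC present and bound to the atlantic driver?
lspci -nnk 2>/dev/null | grep -iA3 aquantia || echo "no Aquantia device found"

# Has the running kernel already logged the out-of-bounds warning?
dmesg 2>/dev/null | grep -i "array-index-out-of-bounds" || echo "no UBSAN warning logged"
```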
 
