Opt-in Linux 6.14 Kernel for Proxmox VE 8 available on test & no-subscription

Whatever this kernel release is about, it is busted beyond belief.
CPU usage is up, and IO delay and server load are through the roof.
I cannot start Linux VMs (TASK ERROR: timeout waiting on systemd) and cannot shut down either VMs or CTs through the PVE interface.
After pinning the previous kernel, I couldn't soft reboot the server. Had to cold reset.


HP DL380p Gen9
This sounds very HW- and/or setup-specific; we saw no such thing on the HW in our test labs, if anything a tiny decrease of base-level IO load on one server, FWIW.

Anything in the system log (journal)? More details like file system and so on would be good to have too.
That server series is also from 2014 IIRC; at more than ten years old, problems with running the latest kernels might start to increase again. Not saying that has to be the case for you, but such old systems are rarer to find in test labs. If it is a specific regression it might still be fixable, but you should in any case ensure you run the latest BIOS/firmware available for your system to rule out such generic issues.
 
I just checked, and dkms comes back as "command not found", so that's not going to help. Do you think I can remove whatever is causing the system to think I need the headers now, and if so, what do I have to do?
That's a tiny bit odd. Does /usr/sbin/dkms work? Just to ensure that it's not simply because sbin isn't in your PATH.
Does ls /var/lib/dkms turn anything up?

But it might indeed be that there is just some config left over from dkms, while full support was already removed from your system.
As mentioned, you can always reboot the system into the previous kernel (i.e., the one you run now), as we keep a few of the latest kernels plus the currently booted kernel packages installed by default. So if you have a small maintenance window to handle the ordinary reboot, an extra reboot to get back to the previous kernel, if really needed, should not take that much longer.
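If you'd rather not pick the older kernel interactively in the boot menu, a rough sketch of pinning it instead would be the following (the version string is only an example, use whatever the list command shows on your system):
Code:
proxmox-boot-tool kernel list
# pin one of the still-installed, previously working kernels, e.g.:
proxmox-boot-tool kernel pin 6.8.12-10-pve
# and later, to return to the default boot order:
proxmox-boot-tool kernel unpin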
 
It's a standard kernel update; there are thousands of changes from the previous stable release up to 6.14.5, far too many to list them all and still be useful to anybody.
What are you interested in specifically? The bpo12 part just refers to a backport for the twelfth Debian release (bookworm), on which PVE 8 is based. The reason it appears now is that we have been preparing PVE 9 for a while and now want to start ensuring there is a correct upgrade path, so the 6.14 kernel is built with a versioning scheme that guarantees that.
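For illustration, that scheme is visible directly in the kernel release string of a host running this build (version string taken from elsewhere in this thread):
Code:
uname -r
# -> 6.14.5-1-bpo12-pve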
You answered it for me, thanks! I was just wondering about the bpo12 tag.
 
This sounds very HW- and/or setup-specific; we saw no such thing on the HW in our test labs, if anything a tiny decrease of base-level IO load on one server, FWIW.

Anything in the system log (journal)? More details like file system and so on would be good to have too.
That server series is also from 2014 IIRC; at more than ten years old, problems with running the latest kernels might start to increase again. Not saying that has to be the case for you, but such old systems are rarer to find in test labs. If it is a specific regression it might still be fixable, but you should in any case ensure you run the latest BIOS/firmware available for your system to rule out such generic issues.
Will the attached logs be useful?
May 27 17:17:00 is when I logged in to PVE to investigate the problems with one of the VMs that wasn't running correctly; that is also when I wasn't able to shut it down properly and had to stop it. I also couldn't start it back up until I reverted the PVE kernel.

Code:
May 27 17:17:13 SVBM-001 pvedaemon[3383]: <root@pam> starting task UPID:SVBM-001:00018339:001646E4:6835C969:qmreboot:500:root@pam:
May 27 17:17:13 SVBM-001 pvedaemon[99129]: requesting reboot of VM 500: UPID:SVBM-001:00018339:001646E4:6835C969:qmreboot:500:root@pam:
May 27 17:18:40 SVBM-001 pvedaemon[3383]: <root@pam> starting task UPID:SVBM-001:00018578:001668BE:6835C9C0:qmstop:500:root@pam:
May 27 17:18:40 SVBM-001 pvedaemon[99704]: stop VM 500: UPID:SVBM-001:00018578:001668BE:6835C9C0:qmstop:500:root@pam:
May 27 17:18:50 SVBM-001 pvedaemon[99704]: can't lock file '/var/lock/qemu-server/lock-500.conf' - got timeout
May 27 17:18:50 SVBM-001 pvedaemon[3383]: <root@pam> end task UPID:SVBM-001:00018578:001668BE:6835C9C0:qmstop:500:root@pam: can't lock file '/var/lock/qemu-server/lock-500.conf' - got timeout
May 27 17:18:51 SVBM-001 pvedaemon[99129]: closing with read buffer at /usr/share/perl5/IO/Multiplex.pm line 927.
May 27 17:18:51 SVBM-001 pvedaemon[99129]: VM 500 qmp command failed - received interrupt
May 27 17:18:51 SVBM-001 pvedaemon[99129]: VM quit/powerdown failed
May 27 17:18:51 SVBM-001 pvedaemon[3383]: <root@pam> end task UPID:SVBM-001:00018339:001646E4:6835C969:qmreboot:500:root@pam: VM quit/powerdown failed
May 27 17:19:02 SVBM-001 pvedaemon[99839]: stop VM 500: UPID:SVBM-001:000185FF:00167138:6835C9D6:qmstop:500:root@pam:
May 27 17:19:02 SVBM-001 pvedaemon[3382]: <root@pam> starting task UPID:SVBM-001:000185FF:00167138:6835C9D6:qmstop:500:root@pam:
May 27 17:19:07 SVBM-001 pvedaemon[99839]: VM 500 qmp command failed - VM 500 qmp command 'quit' failed - got timeout
May 27 17:19:07 SVBM-001 pvedaemon[99839]: VM quit/powerdown failed - terminating now with SIGTERM
May 27 17:19:11 SVBM-001 pvedaemon[3384]: VM 500 qmp command failed - VM 500 qmp command 'query-proxmox-support' failed - unable to connect to VM 500 qmp socket - timeout after 51 retries
May 27 17:19:13 SVBM-001 pvestatd[3367]: VM 500 qmp command failed - VM 500 qmp command 'query-proxmox-support' failed - unable to connect to VM 500 qmp socket - timeout after 51 retries
May 27 17:19:13 SVBM-001 pvestatd[3367]: status update time (8.154 seconds)
May 27 17:19:13 SVBM-001 sshd[99025]: Received disconnect from 172.24.95.60 port 48592:11: disconnected by user
May 27 17:19:13 SVBM-001 sshd[99025]: Disconnected from user root 172.24.95.60 port 48592
May 27 17:19:13 SVBM-001 sshd[99025]: pam_unix(sshd:session): session closed for user root
May 27 17:19:13 SVBM-001 systemd[1]: session-9.scope: Deactivated successfully.
May 27 17:19:13 SVBM-001 systemd-logind[2887]: Session 9 logged out. Waiting for processes to exit.
May 27 17:19:13 SVBM-001 systemd-logind[2887]: Removed session 9.
May 27 17:19:13 SVBM-001 pmxcfs[3221]: [status] notice: received log
May 27 17:19:17 SVBM-001 pvedaemon[3382]: VM 500 qmp command failed - VM 500 qmp command 'guest-ping' failed - got timeout
May 27 17:19:17 SVBM-001 pvedaemon[99839]: VM still running - terminating now with SIGKILL
May 27 17:19:18 SVBM-001 pvedaemon[3384]: VM 500 qmp command failed - VM 500 not running
May 27 17:19:18 SVBM-001 pmxcfs[3221]: [status] notice: received log
May 27 17:19:18 SVBM-001 pvedaemon[3382]: <root@pam> end task UPID:SVBM-001:000185FF:00167138:6835C9D6:qmstop:500:root@pam: OK
May 27 17:19:18 SVBM-001 pvestatd[3367]: VM 500 qmp command failed - VM 500 not running
May 27 17:19:18 SVBM-001 sshd[99947]: Accepted publickey for root from 172.24.95.60 port 50072 ssh2: RSA SHA256:R0KDNUTJvc4GUNjcv0xKymLaKrrd5G20X7Ia91CwR0M
May 27 17:19:18 SVBM-001 sshd[99947]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0)
May 27 17:19:18 SVBM-001 systemd-logind[2887]: New session 12 of user root.
May 27 17:19:18 SVBM-001 systemd[1]: Started session-12.scope - Session 12 of User root.
May 27 17:19:18 SVBM-001 sshd[99947]: pam_env(sshd:session): deprecated reading of user environment enabled
May 27 17:19:19 SVBM-001 qm[99960]: VM 500 qmp command failed - VM 500 not running
May 27 17:19:19 SVBM-001 sshd[99947]: Received disconnect from 172.24.95.60 port 50072:11: disconnected by user
May 27 17:19:19 SVBM-001 sshd[99947]: Disconnected from user root 172.24.95.60 port 50072
May 27 17:19:19 SVBM-001 sshd[99947]: pam_unix(sshd:session): session closed for user root
May 27 17:19:19 SVBM-001 systemd-logind[2887]: Session 12 logged out. Waiting for processes to exit.
May 27 17:19:19 SVBM-001 systemd[1]: session-12.scope: Deactivated successfully.
May 27 17:19:19 SVBM-001 systemd-logind[2887]: Removed session 12.
May 27 17:19:19 SVBM-001 pmxcfs[3221]: [status] notice: received log
May 27 17:19:27 SVBM-001 pmxcfs[3221]: [status] notice: received log
May 27 17:19:29 SVBM-001 pvedaemon[100026]: start VM 500: UPID:SVBM-001:000186BA:00167BCE:6835C9F1:qmstart:500:root@pam:
May 27 17:19:29 SVBM-001 pvedaemon[3384]: <root@pam> starting task UPID:SVBM-001:000186BA:00167BCE:6835C9F1:qmstart:500:root@pam:
May 27 17:19:29 SVBM-001 systemd[1]: 500.scope: Deactivated successfully.
May 27 17:19:29 SVBM-001 systemd[1]: Stopped 500.scope.
May 27 17:19:29 SVBM-001 systemd[1]: 500.scope: Consumed 4min 52.378s CPU time.
May 27 17:19:29 SVBM-001 systemd[1]: Stopping user@0.service - User Manager for UID 0...
May 27 17:19:29 SVBM-001 systemd[99028]: Activating special unit exit.target...
May 27 17:19:29 SVBM-001 systemd[99028]: Stopped target default.target - Main User Target.
May 27 17:19:29 SVBM-001 systemd[99028]: Stopped target basic.target - Basic System.
May 27 17:19:29 SVBM-001 systemd[99028]: Stopped target paths.target - Paths.
May 27 17:19:29 SVBM-001 systemd[99028]: Stopped target sockets.target - Sockets.
May 27 17:19:29 SVBM-001 systemd[99028]: Stopped target timers.target - Timers.
May 27 17:19:29 SVBM-001 systemd[99028]: Closed dirmngr.socket - GnuPG network certificate management daemon.
May 27 17:19:29 SVBM-001 systemd[99028]: Closed gpg-agent-browser.socket - GnuPG cryptographic agent and passphrase cache (access for web browsers).
May 27 17:19:29 SVBM-001 systemd[99028]: Closed gpg-agent-extra.socket - GnuPG cryptographic agent and passphrase cache (restricted).
May 27 17:19:29 SVBM-001 systemd[99028]: Closed gpg-agent-ssh.socket - GnuPG cryptographic agent (ssh-agent emulation).
May 27 17:19:29 SVBM-001 systemd[99028]: Closed gpg-agent.socket - GnuPG cryptographic agent and passphrase cache.
May 27 17:19:29 SVBM-001 systemd[99028]: Removed slice app.slice - User Application Slice.
May 27 17:19:29 SVBM-001 systemd[99028]: Reached target shutdown.target - Shutdown.
May 27 17:19:29 SVBM-001 systemd[99028]: Finished systemd-exit.service - Exit the Session.
May 27 17:19:29 SVBM-001 systemd[99028]: Reached target exit.target - Exit the Session.
May 27 17:19:29 SVBM-001 systemd[1]: user@0.service: Deactivated successfully.
May 27 17:19:29 SVBM-001 systemd[1]: Stopped user@0.service - User Manager for UID 0.
May 27 17:19:29 SVBM-001 systemd[1]: Stopping user-runtime-dir@0.service - User Runtime Directory /run/user/0...
May 27 17:19:29 SVBM-001 systemd[1]: run-user-0.mount: Deactivated successfully.
May 27 17:19:29 SVBM-001 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
May 27 17:19:29 SVBM-001 systemd[1]: Stopped user-runtime-dir@0.service - User Runtime Directory /run/user/0.
May 27 17:19:29 SVBM-001 systemd[1]: Removed slice user-0.slice - User Slice of UID 0.
May 27 17:19:29 SVBM-001 systemd[1]: user-0.slice: Consumed 1.858s CPU time.
May 27 17:19:45 SVBM-001 pvedaemon[100145]: start VM 500: UPID:SVBM-001:00018731:001681F9:6835CA01:qmstart:500:root@pam:
May 27 17:19:45 SVBM-001 pvedaemon[3383]: <root@pam> starting task UPID:SVBM-001:00018731:001681F9:6835CA01:qmstart:500:root@pam:
May 27 17:19:49 SVBM-001 pvedaemon[100026]: timeout waiting on systemd
May 27 17:19:49 SVBM-001 pvedaemon[3384]: <root@pam> end task UPID:SVBM-001:000186BA:00167BCE:6835C9F1:qmstart:500:root@pam: timeout waiting on systemd
May 27 17:20:09 SVBM-001 pvedaemon[100145]: timeout waiting on systemd
May 27 17:20:09 SVBM-001 pvedaemon[3383]: <root@pam> end task UPID:SVBM-001:00018731:001681F9:6835CA01:qmstart:500:root@pam: timeout waiting on systemd
May 27 17:20:14 SVBM-001 pvedaemon[3384]: <root@pam> starting task UPID:SVBM-001:000187E6:00168D58:6835CA1E:qmstart:500:root@pam:
May 27 17:20:14 SVBM-001 pvedaemon[100326]: start VM 500: UPID:SVBM-001:000187E6:00168D58:6835CA1E:qmstart:500:root@pam:
May 27 17:20:34 SVBM-001 pvedaemon[100326]: timeout waiting on systemd
May 27 17:20:34 SVBM-001 pvedaemon[3384]: <root@pam> end task UPID:SVBM-001:000187E6:00168D58:6835CA1E:qmstart:500:root@pam: timeout waiting on systemd
 

Will the attached logs be useful?
It's mostly showing the effects of an overloaded setup, but not really any pointer to the cause. Can you please attach the full journal as a compressed text file? E.g. see my post here for what I mean:

 
How to install:
  1. Ensure that either the pve-no-subscription or pvetest repository is set up correctly.
    You can do so via a CLI text editor or using the web UI under Node -> Repositories.
  2. Open a shell as root, e.g. through SSH or using the integrated shell on the web UI.
  3. apt update
  4. apt install proxmox-kernel-6.14
  5. reboot
Future updates to the 6.14 kernel will now be installed automatically when upgrading a node.
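For convenience, steps 3 to 5 as a single shell session (this assumes the repository from step 1 is already configured):
Code:
apt update
apt install proxmox-kernel-6.14
reboot
# after the reboot, verify that the new kernel is running:
uname -r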

I read about all the enhancements and improvements made to virtualization technologies in Linux kernel 6.12 and higher, and I couldn't wait...

Just updated my 6.8 node to 6.14 using these instructions and everything's running great so far!

My Proxmox node:

AMD Ryzen 7 5700X on ASRock X570 Phantom Gaming 4 motherboard, BIOS 5.63, with 32GB RAM at 3600MT/s
  • 3x LXCs w/ GPU passthrough: Plex, Jellyfin, and OpenWebUI running side-by-side in containers, on a single RTX 3050 6GB
  • VM: OpenMediaVault NAS
  • VM: HomeAssistant OS
  • LXC: Technitium DNS server
  • LXC: qBittorrent with VPN
  • LXC: Caddy
  • LXC: ntfy
  • LXC: RustDesk
  • LXC: Wireguard with WGDashboard
  • LXC: OpenWebUI with Ollama for local AI processing
Something that has nagged me (unrelated to this update) is shutdown time; I know it has to do with the NAS (OMV). Anyone else have this problem? I have NFS configured on the Proxmox node and pass my media shares to Plex and Jellyfin via their respective container config files. Is there a better way?
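For context, the kind of container config entries meant here are host-path mount points, roughly like the following sketch (paths are placeholders, not the actual layout):
Code:
# /etc/pve/lxc/<vmid>.conf -- NFS mount on the node, bind-mounted into the container
mp0: /mnt/pve/media,mp=/mnt/media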
 
It's mostly showing the effects of an overloaded setup, but not really any pointer to the cause. Can you please attach the full journal as a compressed text file? E.g. see my post here for what I mean:

Here is the attached log, generated using journalctl --no-hostname -o short-precise --since=2025-05-27 | zstd > journal.log.zst

I see no difference between the previously attached logs and this log, though.
 

That's a tiny bit odd. Does /usr/sbin/dkms work? Just to ensure that it's not simply because sbin isn't in your PATH.
Does ls /var/lib/dkms turn anything up?

But it might indeed be that there is just some config left over from dkms, while full support was already removed from your system.
As mentioned, you can always reboot the system into the previous kernel (i.e., the one you run now), as we keep a few of the latest kernels plus the currently booted kernel packages installed by default. So if you have a small maintenance window to handle the ordinary reboot, an extra reboot to get back to the previous kernel, if really needed, should not take that much longer.

Code:
root@abe:~# /usr/sbin/dkms
-bash: /usr/sbin/dkms: No such file or directory
root@abe:~#
root@abe:~#
root@abe:~#
root@abe:~# ls /var/lib/dkms
mok.key  mok.pub
root@abe:~#
root@abe:~# apt list dkms
Listing... Done
dkms/stable,now 3.0.10-8+deb12u1 all [residual-config]

Should I try apt purge dkms? Then I will see what happens at the next update.
 
I see no difference between the previously attached logs and this log, though.

Ah sorry, I only noticed the log parts you posted directly inline, and because of that I missed that you had already attached a zip file in the previous post.

Anyhow, the traces below indeed point to an actual problem with the combination of that kernel version and your hardware:

Code:
May 27 13:18:01.283514 kernel: INFO: task kworker/u226:0:358 blocked for more than 122 seconds.
May 27 13:18:01.283672 kernel:       Tainted: P           O       6.14.5-1-bpo12-pve #1
May 27 13:18:01.283690 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
May 27 13:18:01.283702 kernel: task:kworker/u226:0  state:D stack:0     pid:358   tgid:358   ppid:2      task_flags:0x4248060 flags:0x00004000
May 27 13:18:01.284686 kernel: Workqueue: writeback wb_workfn (flush-7:1)
May 27 13:18:01.284737 kernel: Call Trace:
May 27 13:18:01.284753 kernel:  <TASK>
May 27 13:18:01.284897 kernel:  __schedule+0x495/0x13f0
May 27 13:18:01.285027 kernel:  ? __pfx_wbt_inflight_cb+0x10/0x10
May 27 13:18:01.285578 kernel:  ? __pfx_wbt_inflight_cb+0x10/0x10
May 27 13:18:01.285616 kernel:  schedule+0x29/0x130
May 27 13:18:01.286431 kernel:  io_schedule+0x4c/0x80
May 27 13:18:01.286469 kernel:  rq_qos_wait+0xbb/0x160
May 27 13:18:01.286484 kernel:  ? __pfx_wbt_cleanup_cb+0x10/0x10
May 27 13:18:01.287530 kernel:  ? __pfx_rq_qos_wake_function+0x10/0x10
May 27 13:18:01.287569 kernel:  ? __pfx_wbt_inflight_cb+0x10/0x10
May 27 13:18:01.287582 kernel:  wbt_wait+0xb5/0x130
May 27 13:18:01.287705 kernel:  __rq_qos_throttle+0x25/0x40
May 27 13:18:01.288557 kernel:  blk_mq_submit_bio+0x4d9/0x820
May 27 13:18:01.288594 kernel:  __submit_bio+0x75/0x290
May 27 13:18:01.288609 kernel:  submit_bio_noacct_nocheck+0x2ea/0x3b0
May 27 13:18:01.288619 kernel:  submit_bio_noacct+0x1a0/0x5b0
May 27 13:18:01.289533 kernel:  submit_bio+0xb1/0x110
May 27 13:18:01.289571 kernel:  ext4_io_submit+0x24/0x50
May 27 13:18:01.289586 kernel:  ext4_do_writepages+0x376/0xe10
May 27 13:18:01.289597 kernel:  ext4_writepages+0xbb/0x190
May 27 13:18:01.290583 kernel:  ? ext4_writepages+0xbb/0x190
May 27 13:18:01.290620 kernel:  do_writepages+0x83/0x290
May 27 13:18:01.290636 kernel:  ? sched_clock_noinstr+0x9/0x10
May 27 13:18:01.290645 kernel:  ? sched_clock+0x10/0x30
May 27 13:18:01.291499 kernel:  __writeback_single_inode+0x44/0x350
May 27 13:18:01.291537 kernel:  writeback_sb_inodes+0x252/0x540
May 27 13:18:01.291552 kernel:  __writeback_inodes_wb+0x54/0x100
May 27 13:18:01.291562 kernel:  ? queue_io+0x113/0x120
May 27 13:18:01.292542 kernel:  wb_writeback+0x1ad/0x320
May 27 13:18:01.292580 kernel:  ? get_nr_inodes+0x41/0x70
May 27 13:18:01.292594 kernel:  wb_workfn+0x351/0x400
May 27 13:18:01.292606 kernel:  process_one_work+0x178/0x3b0
May 27 13:18:01.292617 kernel:  worker_thread+0x2b8/0x3e0
May 27 13:18:01.293456 kernel:  ? __pfx_worker_thread+0x10/0x10
May 27 13:18:01.293494 kernel:  kthread+0xfb/0x230
May 27 13:18:01.293507 kernel:  ? __pfx_kthread+0x10/0x10
May 27 13:18:01.293519 kernel:  ret_from_fork+0x44/0x70
May 27 13:18:01.293529 kernel:  ? __pfx_kthread+0x10/0x10
May 27 13:18:01.294556 kernel:  ret_from_fork_asm+0x1a/0x30
May 27 13:18:01.294594 kernel:  </TASK>
May 27 13:18:01.294608 kernel: INFO: task kworker/u225:8:611 blocked for more than 122 seconds.
May 27 13:18:01.294634 kernel:       Tainted: P           O       6.14.5-1-bpo12-pve #1
May 27 13:18:01.294648 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
May 27 13:18:01.294802 kernel: task:kworker/u225:8  state:D stack:0     pid:611   tgid:611   ppid:2      task_flags:0x4388060 flags:0x00004000
May 27 13:18:01.294821 kernel: Workqueue: loop0 loop_rootcg_workfn
May 27 13:18:01.295549 kernel: Call Trace:
May 27 13:18:01.295585 kernel:  <TASK>
May 27 13:18:01.295597 kernel:  __schedule+0x495/0x13f0
May 27 13:18:01.296501 kernel:  ? zio_root+0x33/0x50 [zfs]
May 27 13:18:01.296541 kernel:  ? __pfx_zil_lwb_flush_vdevs_done+0x10/0x10 [zfs]
May 27 13:18:01.296556 kernel:  schedule+0x29/0x130
May 27 13:18:01.296709 kernel:  cv_wait_common+0x107/0x140 [spl]
May 27 13:18:01.296723 kernel:  ? __pfx_autoremove_wake_function+0x10/0x10
May 27 13:18:01.297434 kernel:  __cv_wait+0x15/0x30 [spl]
May 27 13:18:01.297485 kernel:  zil_commit_impl+0x324/0x14c0 [zfs]
May 27 13:18:01.298442 kernel:  zil_commit+0x3d/0x80 [zfs]
May 27 13:18:01.298480 kernel:  zfs_fsync+0xa5/0x140 [zfs]
May 27 13:18:01.298494 kernel:  zpl_fsync+0x10e/0x170 [zfs]
May 27 13:18:01.298504 kernel:  vfs_fsync+0x48/0x90
May 27 13:18:01.299532 kernel:  loop_process_work+0x2a5/0x3e0
May 27 13:18:01.299805 kernel:  loop_rootcg_workfn+0x1b/0x30
May 27 13:18:01.299820 kernel:  process_one_work+0x178/0x3b0
May 27 13:18:01.299830 kernel:  worker_thread+0x2b8/0x3e0
May 27 13:18:01.299839 kernel:  ? __pfx_worker_thread+0x10/0x10
May 27 13:18:01.299849 kernel:  kthread+0xfb/0x230
May 27 13:18:01.299858 kernel:  ? __pfx_kthread+0x10/0x10
May 27 13:18:01.300461 kernel:  ret_from_fork+0x44/0x70
May 27 13:18:01.300497 kernel:  ? __pfx_kthread+0x10/0x10
May 27 13:18:01.300510 kernel:  ret_from_fork_asm+0x1a/0x30
May 27 13:18:01.300535 kernel:  </TASK>

I will see if I can look into this a bit more closely soon; it's always harder without being able to reproduce the issue.
Could you already ensure that the server runs the latest available BIOS/Firmware version?
 
Code:
dkms/stable,now 3.0.10-8+deb12u1 all [residual-config]

Should I try apt purge dkms? Then I will see what happens at the next update.

The "residual-config" part is a really good proof that dkms was indeed setup there once in the past.

And yes, you can purge the package, which should also remove the hooks into kernel updates and thus the message about missing headers you see. Since your system does not rely on dkms anymore, you can safely remove those residual dkms configs and there's no need to install the headers.
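I.e., something like:
Code:
apt purge dkms
# optionally clean up anything that was only pulled in by it:
apt autoremove --purge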
 
Could you already ensure that the server runs the latest available BIOS/Firmware version?
Yeah, the server runs the latest System ROM P89 v3.40 (08/29/2024).

Still technically a server supported by HP.
EOL is just two months away, though.
 
Something quite bizarre about the "6.14.5-1-bpo12-pve" kernel: since installing it, I'm unable to reboot the machine. It just gets stuck while rebooting. I have had to press the power button to turn off the machine and turn it back on again. It starts up fine; it only seems to get stuck on reboot.

I have had to amend the GRUB config, changing GRUB_DEFAULT from 0 to:

Code:
GRUB_DEFAULT="Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 6.14.0-2-pve"

Then I updated GRUB and ran proxmox-boot-tool refresh.
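As a rough sketch, that sequence was (editor choice is arbitrary):
Code:
nano /etc/default/grub        # set GRUB_DEFAULT as shown above
update-grub
proxmox-boot-tool refresh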

Then I attempted to reboot, and now I can reboot fine again. I don't know what the issue is with the "6.14.5-1-bpo12-pve" kernel. If anyone can guide me on how to find the relevant logs and processes, or report if you've bumped into a similar situation, please do; perhaps it's a bug, I don't know.
 
I'm having similar issues with this kernel version too. I have an LXC that runs Cockpit to share a couple of mount points over SMB, and other LXCs that mount the same raw images.
Sort of like this, for example in 153.conf (where 151 is the LXC running Cockpit):
Code:
mp0: data:151/vm-151-disk-0.raw,mp=/data,size=2000G
mp1: data:151/vm-151-disk-1.raw,mp=/docker,backup=1,size=30G

After a while I notice a spike in CPU and IO usage by a process called kworker/u16:4+loop2. At that point, whatever LXC is using loop2 cannot be shut down (I just get something like can't lock file '/run/lock/lxc/pve-config-153.lock' - got timeout (500)).

And then the whole node can't be restarted; I need to physically unplug it. I've gone back to an earlier kernel version, but am waiting until I have some more free time before I start the containers.
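For anyone else hitting this, which raw image (and therefore which container) sits behind a given loop device can be checked with losetup, e.g.:
Code:
# the BACK-FILE column shows the raw image backing each loop device
losetup -l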

Update: Rolled back to 6.14.0-2-pve and the issue so far is gone.
 
6.14.5 is working fine here on 8th-gen Intels, 13th-gen Raptor Lake, and an AMD 5950X.
On the 13th gen and the 5950X, both are working fine with x2APIC plus APICv for Intel and AVIC for AMD. SR-IOV is working great, RDMA is fine, and Intel iGPU, NVIDIA GPU, and Broadcom RAID card passthrough are all working sweet.

Looking forward to 6.16 :D

Oh, I did have to roll back the latest update to the net-tools package, because ifconfig shows zero stats for all NICs with the new Debian security update for that package... I didn't bother reporting it anywhere else, but yeah, I had to roll back and then hold the package to prevent updates.
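The rollback-and-hold part boils down to roughly this (assuming net-tools is the package providing ifconfig here):
Code:
apt-mark hold net-tools        # keep the rolled-back version from being upgraded again
apt-mark showhold              # verify the hold is in place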
 
Hello, I think I've run into an issue with kernel 6.14 when using a Realtek RTL8125 (2.5GbE) with low-power ASPM C-states.

System:
Processor: Intel 14600k
Motherboard: ASUS Prime B760M-A AX
Storage: 4TB Crucial P3 NVME
RAM: 128GB DDR5

My system is configured to utilize low idle power (10W) by enabling ASPM on all devices. Since R8169 doesn't have this enabled by default, I enabled it via
sh -c "echo 1 > /sys/bus/pci/devices/0000:04:00.0/link/l1_aspm"


When using the 6.8 kernel with the in-tree r8169 driver: everything works and I reach about C8 and full bandwidth (see attached screenshot).

When using the 6.14 kernel with the in-tree r8169 driver: I can achieve the same C8 power states, BUT the NIC performance tanks (see attached screenshot).



Interestingly enough, when I switched to the out-of-tree R8125 driver I experienced the same degraded NIC performance on both 6.8 and 6.14.

I'm really not sure if this is a motherboard, kernel, or driver issue, but I figured I'd document it in case anyone else runs into this in the future. I'll be sticking with kernel 6.8 on Proxmox 8.4 and the in-tree r8169 for now, since I can achieve 10W idle states.
 
I have now spent 3 days trying to get VLANs working on a Minisforum BD790i motherboard with a Realtek 8125 NIC. If ONLY I had been smart enough to try "apt update && apt install proxmox-kernel-6.14 && reboot", I would have gotten it working.

The 8125 works fine with the older kernel UNLESS you need VLANs working.
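For reference, a typical Proxmox VLAN setup is just a VLAN-aware bridge in /etc/network/interfaces; a sketch (whether it matches the exact setup here is an assumption, and NIC name, VLAN range, and addresses are placeholders):
Code:
auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
# guests then just get a VLAN tag on their bridge port (e.g. tag=20 on the VM's net0)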
 