Opt-in Linux 6.5 Kernel with ZFS 2.2 for Proxmox VE 8 available on test & no-subscription

I had similar issues. Eventually it boots past the EFI stub, but then scrolls endless mpt3sas errors, presumably related to the LSI SAS HBA. It was rock solid on ZFS 2.1.13 and kernel 6.2; I rolled everything back and was fine.
The mini-SAS LSI cards in these R340s of ours are H330s in JBOD mode. With the PVE 6.2 kernels the module they use is megaraid_sas.
 
I'm starting a new Proxmox server. Could I just add the test repo, install this latest version, and change back to the "normal" repo? Then as soon as there's a newer version available it will update, but it won't downgrade?
Yes, that can work (the apt package manager never downgrades automatically by default).
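Roughly like this, for example (a sketch assuming the standard PVE 8 / Debian Bookworm repository names and the proxmox-kernel-6.5 meta-package; adjust to your setup):

Code:
# temporarily enable the test repository
echo "deb http://download.proxmox.com/debian/pve bookworm pvetest" > /etc/apt/sources.list.d/pvetest.list
apt update
# install the opt-in 6.5 kernel
apt install proxmox-kernel-6.5
# switch back to only the no-subscription repository and refresh
rm /etc/apt/sources.list.d/pvetest.list
apt update
# the installed 6.5 kernel stays in place: apt never downgrades on its own, and a
# newer 6.5 build will be installed again once it reaches the no-subscription repo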
 
Is there a "Proxmox VE Installer ISO" with this 6.5 kernel available as well?

I want to start a new server and would like to use Proxmox with this kernel, since it has optimisations for my hardware.


Or should I, as you already answered before, install "the old, stable" version first, upgrade via the test repo, and then change the repo back?
 
Something weird happened while updating to kernel 6.5...

When upgrading to kernel version 6.5.11-2 (from kernel 6.2.16-15) everything went smoothly. I rebooted the server and everything was looking fine, no relevant errors in the logs.

Then I noticed a new kernel version, 6.5.11-3. I upgraded and rebooted... nothing. The server didn't come online, so I had to plug in a monitor and keyboard just to find that the network interfaces had been renamed and were down. Looking at the logs of previous boots I noticed this:

Nov 17 12:37:45 leserver2 kernel: r8169 0000:02:00.0 enp2s0: renamed from eth0
Nov 17 12:37:45 leserver2 kernel: r8169 0000:03:00.0 enp3s0: renamed from eth1

The same happened on the previous reboot (kernel 6.2.16-15):
Oct 11 11:49:38 leserver2 kernel: r8169 0000:02:00.0 enp2s0: renamed from eth0
Oct 11 11:49:38 leserver2 kernel: r8169 0000:03:00.0 enp3s0: renamed from eth1

But that didn't happen after upgrading to 6.5.11-3: the interfaces remained as eth0 and eth1.

So I had to manually rename the interfaces to eth0 and eth1 in "/etc/network/interfaces", and that's it, the system has been running fine since then. However, I'm concerned that in a future kernel version the interfaces are going to be renamed again.
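For reference, the change in "/etc/network/interfaces" boiled down to something like this (vmbr0 and the addresses are just placeholders; enp2s0/eth0 are the names from the logs above):

Code:
auto lo
iface lo inet loopback

# was: iface enp2s0 inet manual
iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        # was: bridge-ports enp2s0
        bridge-ports eth0
        bridge-stp off
        bridge-fd 0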

thanks
 
something weird happened while updating to the kernel 6.5... [quoted above]
Same exact issue for me, but on 6.5.11-3.
 
Thanks for your feedback, but it seems very odd to me that 6.5.11-3 would be the cause of this, because that version includes only a few targeted ZFS fixes compared to the previous 6.5.11-2, as you can see from the git log: https://git.proxmox.com/?p=pve-kernel.git;a=summary

The 6.5.11-2 kernel includes a bit more, though not _that_ much, but we will re-check the changes a bit more closely.

Can you also post some details about your server (motherboard/system model, network interface details, ...) and re-check that just booting into the new kernel vs. the slightly older one, without a single other change to the system, causes this?
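For example, the output of something like the following would already help (standard tools; adjust to whatever is installed on your system):

Code:
# system and motherboard model
dmidecode -s system-product-name
dmidecode -s baseboard-product-name
# NIC models and the kernel drivers bound to them
lspci -nnk | grep -iA3 ethernet
# current interface names and link state
ip -br link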
 
Same here... the network does not start properly.
2x Intel I210
2x Intel E810-XXV
ASRock X570D4U
Both were unavailable after booting into 6.5.11-3.
 
something weird happened while updating to the kernel 6.5... [quoted above]
Thanks, me too.
I spoke about this problem on Twitter last night with Alexandre.
Moula.
 
Can you also post some details about your server (motherboard/system model, network interface details, ...)? [quoted above]
I have a Dell R630 and a Dell R730. Both have a Broadcom 10 GbE NIC and a ConnectX-3 40 GbE NIC. I can literally boot into 6.5.11-2 and everything is perfect: all of my interfaces keep the naming scheme they were set up with. When I boot into 6.5.11-3, the naming no longer matches the config; instead of eno1-eno4, enps0f0, etc., the ip a command shows interfaces eth0 - eth7.
 
Ack, I can now reproduce it on one server here too. Interestingly, I see this on one node of an identical three-node cluster, the same cluster where it worked on another of those three nodes when testing the kernel yesterday.
I will look into it, especially what the differences are.
 
Similar issues here on an AMD GX-420GI with an RTL8111/8168/8411 PCI Express Gigabit Ethernet NIC. However, unlike other posters, I was only successful in rebooting into 6.5.11-2 once with a functional network; repeated reboots into either kernel now result in no network.

Unfortunately I'm unable to troubleshoot further at this point, since my shell is somehow unresponsive to keyboard input (though editing the boot kernel parameters works just fine).
 
I think I've got it.
The kernel is a red herring; the cause is the new systemd default link policy shipped by the pve-manager package that was bumped yesterday.

E.g., if you add the following two lines below the [Link] section in /usr/lib/systemd/network/98-proxmox-ve-default.link and reboot, the kernel should not matter anymore:

Code:
NamePolicy=keep kernel database onboard slot path
AlternativeNamesPolicy=database onboard slot path

This also explains why my kernel testing yesterday didn't show this already: I only installed the new kernel manually, without pulling in the new default .link file yet (that was bumped only later).

We'll look into handling the default better, i.e., go back to the 99-default.link.d snippet approach where all configs are merged, or take the above properties into ours; I will need to recheck the discussion I had with a colleague (who favored the separate file a bit more).

Rebooting into the old kernel seems to make udev re-use the name that was previously assigned to that interface, so it seemingly fixed the issue and made it look like the kernel was at fault, while it really wasn't (that's my working theory; I will focus on the fix before checking that more closely).
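To double-check which .link file and naming policy udev actually applies to an interface, something like the following should work (eth0 here is just the name from the logs above):

Code:
udevadm test-builtin net_setup_link /sys/class/net/eth0
# the output should show which .link file got applied to the device, so one can
# verify that the edited 98-proxmox-ve-default.link is the one being picked up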
 
I was only successful in rebooting into 6.5.11-2 once with a functional network; repeated reboots into either kernel now result in no network.
Yes, that makes some sense, as the kernel really isn't at fault here; see my post above.
 
I think I've got it. The kernel is a red herring; the cause is the new systemd default link policy shipped by the pve-manager package that was bumped yesterday. [quoted above]
Do I need to add this manually, or should I stay on 6.5.11-2 and wait for an update?
 
