4.15 based test kernel for PVE 5.x available

I am running identical hardware and, funnily enough, an identical VM config with Hyper-V at the moment, with the host partition being my gaming VM. I have been waiting for the same PCIe java fix before making the move to this hypervisor/VM config; has that fix dropped by chance? If so, what has your experience been?

Also a question for the dev team: once the 4.15 kernel is labeled stable, how easy will it be to switch an install to the new branch?

You can already install it now by running "apt install pve-kernel-4.15". Once it is made the default, you won't need to switch manually, as it will automatically be pulled in by the 'proxmox-ve' meta package.
 
> You can already install it now by running "apt install pve-kernel-4.15". Once it is made the default, you won't need to switch manually, as it will automatically be pulled in by the 'proxmox-ve' meta package.

Fantastic! Glad to hear it will be a simple ordeal.
 
Sometimes when I run "reboot" inside an LXC container, the whole server stops responding. On the cluster node, every VM, container, and storage shows a question mark. There is nothing in the journal log. The pct command does nothing, just a never-ending timeout. Only a reboot of the host helped.
 
> Sometimes when I run "reboot" inside an LXC container, the whole server stops responding. On the cluster node, every VM, container, and storage shows a question mark. There is nothing in the journal log. The pct command does nothing, just a never-ending timeout. Only a reboot of the host helped.

if you can reproduce this, can you obtain more details such as
  • pveversion -v
  • container and storage configs
  • SysRq traces of all tasks
  • output of an LXC debug log file obtained by starting the container in foreground mode with lxc-start
and open a bug report with all of the above?
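If it helps, a rough sketch of collecting that information (container ID 100 and the output paths are just placeholders, adjust them to your setup):

```shell
# package versions
pveversion -v > /tmp/pveversion.txt

# container and storage configuration (adjust the container ID)
cat /etc/pve/lxc/100.conf /etc/pve/storage.cfg > /tmp/configs.txt

# dump stack traces of all tasks via SysRq; the traces land in the kernel log
echo t > /proc/sysrq-trigger
dmesg > /tmp/sysrq-traces.txt

# start the container in the foreground with a debug log file
lxc-start -n 100 -F -l DEBUG -o /tmp/lxc-100.log
```

These commands need to be run as root on the PVE host itself.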
 
Sorry @fabian, no time for it these weeks. I rolled the little home cluster back to kernel 4.10 ;)
What I can remember:
- it crashes mostly on start/shutdown/reboot
- after a crash, every icon on the cluster node shows a question mark
- no errors in the journal
- it happened on all the containers I currently use for development (all had a mount point on a local ZFS dataset)

What can I do to fix it? A reboot does not really help; the machine hangs with no timeout while stopping VMs/containers. What works is searching for the process with "ps" and killing it.

Hope this helps a little bit.
 
> I am running identical hardware and, funnily enough, an identical VM config with Hyper-V at the moment, with the host partition being my gaming VM. I have been waiting for the same PCIe java fix before making the move to this hypervisor/VM config; has that fix dropped by chance? If so, what has your experience been?
>
> Also a question for the dev team: once the 4.15 kernel is labeled stable, how easy will it be to switch an install to the new branch?

You still need to run the java fix manually or via cron; the 4.15 kernel won't change anything in that regard.
 
Seems like I can't boot with pve-kernel-4.15.15-1-pve on an HP DL120 G7 with a P410 RAID array:
Code:
Giving up waiting for root file system

pve-kernel-4.15.3-1-pve and pve-kernel-4.15.10-1-pve are working fine.
 
> Seems like I can't boot with pve-kernel-4.15.15-1-pve on an HP DL120 G7 with a P410 RAID array:
> Code:
> Giving up waiting for root file system
>
> pve-kernel-4.15.3-1-pve and pve-kernel-4.15.10-1-pve are working fine.

Can you re-try with an increased rootdelay boot parameter?
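For reference, a sketch of adding that parameter (rootdelay=10 is just an example value, and the sed pattern assumes the stock GRUB_CMDLINE_LINUX_DEFAULT="quiet" line; edit the file by hand if yours differs):

```shell
# append rootdelay=10 to the default kernel command line in /etc/default/grub
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="quiet"$/GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"/' /etc/default/grub

# regenerate the grub config so the new parameter is used on the next boot
update-grub
```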
 
In other news, pve-kernel-4.15 is now the default kernel in pve-no-subscription!
 
pve-kernel-4.15.15-1-pve does NOT work on the Dell R740 and R420 either.
Same issue as UDO and PFOO reported.
> In other news, pve-kernel-4.15 is now the default kernel in pve-no-subscription!

Why did you make a broken kernel the new default?
 
> pve-kernel-4.15.15-1-pve does NOT work on the Dell R740 and R420 either.
> Same issue as UDO and PFOO reported.
>> In other news, pve-kernel-4.15 is now the default kernel in pve-no-subscription!
>
> Why did you make a broken kernel the new default?

Because at some point we need to switch: the new kernel fixes a lot of issues that the 4.13 kernels still have, and a kernel that never shows any issue on all possible hardware is an illusion. The known issue above will hopefully be fixed with the next round of kernel updates; for the time being, simply boot the old, working kernel.
 
Thanks Fabian.
I booted the older kernel 4.13.16-2, so no problem at the moment.

How do I blacklist this specific kernel version (in case there is a reboot in the middle of the night)?
 
> How do I blacklist this specific kernel version (in case there is a reboot in the middle of the night)?
If you have two kernels installed, e.g. one 4.13 and one 4.15, edit /etc/default/grub and update GRUB. "1>1" selects the second entry of the top-level boot menu (usually the 'Advanced options' submenu) and, inside it, the second entry; counting starts at 0, so adjust the numbers until they point at the 4.13 kernel.
Code:
cat /etc/default/grub
...
GRUB_DEFAULT="1>1"
#GRUB_DEFAULT=0
...
Code:
update-grub
 
Hello,

I can't import my rpool on boot with the new kernel 4.15.15-1:
Code:
zpool import -N -f rpool

But it works perfectly with kernel 4.13.13-6.

Best,
Alexis
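For anyone who lands at the initramfs prompt because of this, the usual manual recovery (the same command as above; the pool name rpool is assumed) is:

```shell
# at the initramfs (busybox) prompt, force-import the root pool without mounting datasets
zpool import -N -f rpool

# leave the shell; the boot process then continues with the imported pool
exit
```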
 
