Kernel 5.4.44-2-pve problem

sebcbien

Member
Jun 7, 2020
For your information,
I just updated/upgraded my system on an HP Microserver G8, with my boot/pve/local-lvm on an NVMe disk.
If I boot the latest kernel installed (5.4.44-2-pve), I get a long list of errors scrolling for about a minute:
cannot process volume group pve
Volume group "pve" not found
[screenshots of the boot error messages]

Then this last screen:
[screenshot of the final boot screen]

Listing of /dev/mapper/:
[screenshot of the /dev/mapper/ listing]
If I reboot and select kernel 5.4.44-1-pve, everything is fine.
I installed Proxmox from the boot ISO, almost all defaults, ext4, 3 partitions, etc.
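(For reference, from that initramfs prompt the volume group can usually be activated by hand with the standard LVM commands before continuing the boot; just a sketch, nothing Proxmox-specific:)
Code:
# from the (initramfs) prompt - scan for and activate the "pve" volume group manually
lvm vgscan
lvm vgchange -ay pve
# then leave the shell so the boot continues
exit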
 
Note:
Since it seems I'm the only one with this problem, I dug a bit further this morning. Before reading this thread today (https://forum.proxmox.com/threads/clean-old-kernels.42040/) I didn't know that
apt update && apt upgrade is not safe on Proxmox and that I should have used apt dist-upgrade instead.
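So the sequence I should have been using (as described in that thread) is presumably just:
Code:
apt update
apt dist-upgrade    # equivalent to 'apt full-upgrade' on current apt versions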
Here are the kernels installed:
Code:
dpkg --list|grep pve-kernel
ii  pve-firmware                         3.1-1                        all          Binary firmware code for the pve-kernel
ii  pve-kernel-5.4                       6.2-4                        all          Latest Proxmox VE Kernel Image
ii  pve-kernel-5.4.34-1-pve              5.4.34-2                     amd64        The Proxmox PVE Kernel Image
ii  pve-kernel-5.4.41-1-pve              5.4.41-1                     amd64        The Proxmox PVE Kernel Image
ii  pve-kernel-5.4.44-1-pve              5.4.44-1                     amd64        The Proxmox PVE Kernel Image
ii  pve-kernel-5.4.44-2-pve              5.4.44-2                     amd64        The Proxmox PVE Kernel Image
ii  pve-kernel-helper                    6.2-4                        all          Function for various kernel maintenance tasks.

I'm now booted into 5.4.44-1-pve and everything seems OK.
Can I just do an apt update && apt dist-upgrade to clean up this mess?
Or should I first remove pve-kernel-5.4.44-2-pve?
If I do:
Code:
apt dist-upgrade
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Calculating upgrade... Done
The following package was automatically installed and is no longer required:
  pve-kernel-5.4.41-1-pve
Use 'apt autoremove' to remove it.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
It seems it would do almost nothing, only remove an old kernel.
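(Before letting autoremove touch anything, I'll double-check which kernel I'm actually booted into - just the standard check:)
Code:
uname -r    # currently shows 5.4.44-1-pve, the kernel I selected at boot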
Thanks
 
I tried to uninstall the latest kernel so I could reinstall it properly, but I get this:
Code:
root@pve:~# apt purge pve-kernel-5.4.44-2-pve
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following package was automatically installed and is no longer required:
  pve-kernel-5.4.41-1-pve
Use 'apt autoremove' to remove it.
The following packages will be REMOVED:
  proxmox-ve* pve-kernel-5.4* pve-kernel-5.4.44-2-pve*
0 upgraded, 0 newly installed, 3 to remove and 0 not upgraded.
After this operation, 287 MB disk space will be freed.
Do you want to continue? [Y/n] y
W: (pve-apt-hook) !! WARNING !!
W: (pve-apt-hook) You are attempting to remove the meta-package 'proxmox-ve'!
W: (pve-apt-hook)
W: (pve-apt-hook) If you really want to permanently remove 'proxmox-ve' from your system, run the following command
W: (pve-apt-hook)       touch '/please-remove-proxmox-ve'
W: (pve-apt-hook) run apt purge proxmox-ve to remove the meta-package
W: (pve-apt-hook) and repeat your apt invocation.
W: (pve-apt-hook)
W: (pve-apt-hook) If you are unsure why 'proxmox-ve' would be removed, please verify
W: (pve-apt-hook)       - your APT repository settings
W: (pve-apt-hook)       - that you are using 'apt full-upgrade' to upgrade your system
E: Sub-process /usr/share/proxmox-ve/pve-apt-hook returned an error code (1)
E: Failure running script /usr/share/proxmox-ve/pve-apt-hook
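As a fallback I'm considering simply making the working kernel the default boot entry instead of removing anything. A rough sketch (assuming GRUB; the menu entry names below are only illustrative and need to be checked against the grep output first):
Code:
# check the exact menu entry names first
grep -E 'submenu|menuentry ' /boot/grub/grub.cfg
# then in /etc/default/grub, something like (entry names must match the output above):
GRUB_DEFAULT="Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 5.4.44-1-pve"
# and regenerate the config
update-grub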

Any advice would be greatly appreciated ;)
 
I have the same problem with kernel versions 5.4.44-2-pve and 5.4.44-1-pve. Kernel 5.4.41-1-pve is starting (and running) flawlessly.
I did however configure PCIe passthrough and blacklisted the module "mpt3sas". I suspect that I did something wrong there.

@sebcbien Do you have PCIe passthrough configured? Have you blacklisted any modules?
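(If you want to check quickly, something like this should list any blacklist entries - standard modprobe.d locations, nothing Proxmox-specific:)
Code:
grep -r blacklist /etc/modprobe.d/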
 
No, the only 'passthrough' is a disk and a USB device to a VM.
Those are 'soft' passthroughs, not the entire controllers.
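(i.e. they are just plain entries in the VM config, roughly like the following - the VMID and device IDs here are made up:)
Code:
# disk passed by id, USB device passed by vendor:product id (example values only)
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLEDISK_SERIAL
qm set 100 -usb0 host=1234:5678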
 
No, I haven't found a solution yet. I'm currently running 5.4.41-1. I can't reboot my machine right now, but I have an old PC lying around with which I will try to replicate the problem if I have time next week.
 
I have the same problem. Since kernel 5.4.44-1-pve my system doesn't boot. Kernel 5.4.41-1-pve works without problems.
In my configuration I use passthrough of the iGPU on an Intel J4105 (there is no discrete GPU), so I have no output on screen until the VM starts.
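(For reference, my passthrough setup is the usual GRUB command line plus vfio modules, roughly like this - abbreviated, just to show the shape of it; followed by update-grub and update-initramfs -u:)
Code:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
# /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd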

I hope for a solution.
 
Same problem here with the latest Proxmox kernel versions, also on an ASRock J4105-ITX. As seen in the screenshots from @sebcbien, the LVM volume group cannot be found. A higher rootdelay does not help. I always have to cut power at the power supply; after that Proxmox also starts with kernel 5.4.44-2-pve.
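(For reference, I had set the delay on the kernel command line and ran update-grub afterwards - value picked arbitrarily:)
Code:
# /etc/default/grub, then: update-grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=30"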
 
I also get a long list during boot saying that devices /dev/sd[a-y] can't be found. It takes around 2 minutes and then the boot completes.
 
I have exactly the same problem.
After updating Proxmox from the deb http://download.proxmox.com/debian/pve buster pve-no-subscription repo, which installed the new Proxmox kernel,
I hang at boot with lvm/pve not found, exactly the problem from #1.

Booting the older kernel helped - that works fine.
Is there any solution for this at the moment? :/
 
Nope, no RAID, neither software nor hardware.

Code:
root@proxmox:~# lsblk
NAME                       MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                          8:0    0 931.5G  0 disk
└─sda1                       8:1    0 931.5G  0 part /mnt/pve/Backups
sdb                          8:16   0 232.9G  0 disk
├─sdb1                       8:17   0  1007K  0 part
├─sdb2                       8:18   0   512M  0 part /boot/efi
└─sdb3                       8:19   0 229.5G  0 part
  ├─pve-swap               253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root               253:1    0  57.3G  0 lvm  /
  ├─pve-data_tmeta         253:2    0   1.5G  0 lvm 
  │ └─pve-data             253:4    0 145.3G  0 lvm 
  └─pve-data_tdata         253:3    0 145.3G  0 lvm 
    └─pve-data             253:4    0 145.3G  0 lvm 
nvme0n1                    259:0    0 465.8G  0 disk
├─VMs-VMs_tmeta            253:5    0   4.7G  0 lvm 
│ └─VMs-VMs-tpool          253:7    0 456.3G  0 lvm 
│   ├─VMs-VMs              253:8    0 456.3G  0 lvm 
│   ├─VMs-vm--101--disk--0 253:9    0     2G  0 lvm 
│   ├─VMs-vm--100--disk--0 253:10   0     3G  0 lvm 
│   ├─VMs-vm--104--disk--0 253:11   0     2G  0 lvm 
│   ├─VMs-vm--105--disk--0 253:12   0     8G  0 lvm 
│   ├─VMs-vm--108--disk--0 253:13   0    50G  0 lvm 
│   ├─VMs-vm--103--disk--0 253:14   0   160G  0 lvm 
│   ├─VMs-vm--106--disk--0 253:15   0    32G  0 lvm 
│   ├─VMs-vm--107--disk--0 253:16   0     8G  0 lvm 
│   ├─VMs-vm--109--disk--0 253:17   0     3G  0 lvm 
│   ├─VMs-vm--102--disk--0 253:18   0    10G  0 lvm 
│   ├─VMs-vm--102--disk--1 253:19   0   200G  0 lvm 
│   └─VMs-vm--110--disk--0 253:20   0    50G  0 lvm 
└─VMs-VMs_tdata            253:6    0 456.3G  0 lvm 
  └─VMs-VMs-tpool          253:7    0 456.3G  0 lvm 
    ├─VMs-VMs              253:8    0 456.3G  0 lvm 
    ├─VMs-vm--101--disk--0 253:9    0     2G  0 lvm 
    ├─VMs-vm--100--disk--0 253:10   0     3G  0 lvm 
    ├─VMs-vm--104--disk--0 253:11   0     2G  0 lvm 
    ├─VMs-vm--105--disk--0 253:12   0     8G  0 lvm 
    ├─VMs-vm--108--disk--0 253:13   0    50G  0 lvm 
    ├─VMs-vm--103--disk--0 253:14   0   160G  0 lvm 
    ├─VMs-vm--106--disk--0 253:15   0    32G  0 lvm 
    ├─VMs-vm--107--disk--0 253:16   0     8G  0 lvm 
    ├─VMs-vm--109--disk--0 253:17   0     3G  0 lvm 
    ├─VMs-vm--102--disk--0 253:18   0    10G  0 lvm 
    ├─VMs-vm--102--disk--1 253:19   0   200G  0 lvm 
    └─VMs-vm--110--disk--0 253:20   0    50G  0 lvm
 
Yep, seen.

But I don't use software/hardware RAID.

960 Evo for the VMs
850 Evo for the host and templates
1 TB HDD for backups

No RAID.
If I boot the previous kernel, everything works out of the box.

Edit:
I can also write in German, if that helps :)
 
Do you use UEFI secure boot?

There were only two changes between 5.4.44-1-pve and 5.4.44-2-pve:
* a fix for cloning network sockets and cgroups, which is hardly something hit during boot at all - IMO very unlikely to be the culprit
* disabling the CONFIG_SECURITY_LOCKDOWN_LSM and CONFIG_SECURITY_LOCKDOWN_LSM_EARLY config options to fix ZFS on secure boot - if anything, that would be the more likely culprit...
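For a quick check whether secure boot is actually active (mokutil may need to be installed first):
Code:
mokutil --sb-state
# or, without extra packages:
dmesg | grep -i 'secure boot'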
 
Hmmm, I don't know, I will check it soon and report back :D

Funny thing is, I don't use ZFS, so secure boot shouldn't be the issue, but I'll give it a try :D
 
