Kernel 5.4.44-2-pve problem

I have to say that the initial post of this probably contains a typo.
They state that 5.4.44-2 is broken and 5.4.44-1 works, but I think they meant to write 5.4.41-1...


So what storage do you all use, mdraid? All normal LVM setups boot just fine here, on SSDs, NVMe and spinners.
 
I use LVM for the host on an SSD,
LVM-thin for VMs on an NVMe,
and a directory for backups on an HDD.

No RAID.

Kernel 5.4.34-1-pve works.
Kernel 5.4.44-2-pve doesn't.

It hangs and drops into BusyBox.

After a lot of "cannot process volume group pve /
Volume group pve not found" messages,

it ends up in the BusyBox initramfs shell.

The last words are:
"Gave up waiting for root file system device
...

ALERT! /dev/mapper/pve-root does not exist. Dropping to a shell."

But it definitely exists:

Code:
root@proxmox:~# ls /dev/mapper
control         pve-swap              VMs-vm--103--disk--0  VMs-vm--108--disk--0  VMs-vm--113--disk--0  VMs-VMs-tpool
pve-data        VMs-vm--100--disk--0  VMs-vm--104--disk--0  VMs-vm--109--disk--0  VMs-vm--114--disk--0
pve-data_tdata  VMs-vm--101--disk--0  VMs-vm--105--disk--0  VMs-vm--110--disk--0  VMs-VMs
pve-data_tmeta  VMs-vm--102--disk--0  VMs-vm--106--disk--0  VMs-vm--111--disk--0  VMs-VMs_tdata
pve-root        VMs-vm--102--disk--1  VMs-vm--107--disk--0  VMs-vm--112--disk--0  VMs-VMs_tmeta

Maybe I need to edit the global_filter in /etc/lvm/lvm.conf?

Code:
global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|" "r|/dev/mapper/.*-(vm|base)--[0-9]+--disk--[0-9]+|", "r|/dev/s$
and comment out the "r|/dev/mapper/pve-.*|" entry?

I just installed Proxmox from the installer image with basic settings :D nothing special was done when setting up the host.

Code:
root@proxmox:~# lsblk
NAME                       MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                          8:0    0 931.5G  0 disk
└─sda1                       8:1    0 931.5G  0 part /mnt/pve/Backups
sdb                          8:16   0 232.9G  0 disk
├─sdb1                       8:17   0  1007K  0 part
├─sdb2                       8:18   0   512M  0 part /boot/efi
└─sdb3                       8:19   0 229.5G  0 part
  ├─pve-swap               253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root               253:1    0  57.3G  0 lvm  /
  ├─pve-data_tmeta         253:2    0   1.5G  0 lvm 
  │ └─pve-data             253:4    0 145.3G  0 lvm 
  └─pve-data_tdata         253:3    0 145.3G  0 lvm 
    └─pve-data             253:4    0 145.3G  0 lvm 
nvme0n1                    259:0    0 465.8G  0 disk
├─VMs-VMs_tmeta            253:5    0   4.7G  0 lvm 
│ └─VMs-VMs-tpool          253:7    0 456.3G  0 lvm 
│   ├─VMs-VMs              253:8    0 456.3G  0 lvm 
│   ├─VMs-vm--101--disk--0 253:9    0     2G  0 lvm 
│   ├─VMs-vm--100--disk--0 253:10   0     3G  0 lvm 
│   ├─VMs-vm--104--disk--0 253:11   0     2G  0 lvm 
│   ├─VMs-vm--105--disk--0 253:12   0     8G  0 lvm 
│   ├─VMs-vm--108--disk--0 253:13   0    50G  0 lvm 
│   ├─VMs-vm--103--disk--0 253:14   0   160G  0 lvm 
│   ├─VMs-vm--106--disk--0 253:15   0    32G  0 lvm 
│   ├─VMs-vm--107--disk--0 253:16   0     8G  0 lvm 
│   ├─VMs-vm--109--disk--0 253:17   0     3G  0 lvm 
│   ├─VMs-vm--102--disk--0 253:18   0    10G  0 lvm 
│   ├─VMs-vm--102--disk--1 253:19   0   200G  0 lvm 
│   ├─VMs-vm--110--disk--0 253:20   0    50G  0 lvm 
│   ├─VMs-vm--111--disk--0 253:21   0    32G  0 lvm 
│   ├─VMs-vm--112--disk--0 253:22   0    32G  0 lvm 
│   ├─VMs-vm--113--disk--0 253:23   0    32G  0 lvm 
│   └─VMs-vm--114--disk--0 253:24   0    32G  0 lvm 
└─VMs-VMs_tdata            253:6    0 456.3G  0 lvm 
  └─VMs-VMs-tpool          253:7    0 456.3G  0 lvm 
    ├─VMs-VMs              253:8    0 456.3G  0 lvm 
    ├─VMs-vm--101--disk--0 253:9    0     2G  0 lvm 
    ├─VMs-vm--100--disk--0 253:10   0     3G  0 lvm 
    ├─VMs-vm--104--disk--0 253:11   0     2G  0 lvm 
    ├─VMs-vm--105--disk--0 253:12   0     8G  0 lvm 
    ├─VMs-vm--108--disk--0 253:13   0    50G  0 lvm 
    ├─VMs-vm--103--disk--0 253:14   0   160G  0 lvm 
    ├─VMs-vm--106--disk--0 253:15   0    32G  0 lvm 
    ├─VMs-vm--107--disk--0 253:16   0     8G  0 lvm 
    ├─VMs-vm--109--disk--0 253:17   0     3G  0 lvm 
    ├─VMs-vm--102--disk--0 253:18   0    10G  0 lvm 
    ├─VMs-vm--102--disk--1 253:19   0   200G  0 lvm 
    ├─VMs-vm--110--disk--0 253:20   0    50G  0 lvm 
    ├─VMs-vm--111--disk--0 253:21   0    32G  0 lvm 
    ├─VMs-vm--112--disk--0 253:22   0    32G  0 lvm 
    ├─VMs-vm--113--disk--0 253:23   0    32G  0 lvm 
    └─VMs-vm--114--disk--0 253:24   0    32G  0 lvm 
root@proxmox:~#
 
I have to say that the initial post of this probably contains a typo.
They state that 5.4.44-2 is broken and 5.4.44-1 works, but I think they meant to write 5.4.41-1...
Maybe I need to edit the global_filter in /etc/lvm/lvm.conf?

No, the initramfs doesn't look at that, it rather seems like a kernel issue - just trying to find the common factor so that we can try to reproduce it here, or pin it down to a specific change...
 
Do you need any information from me?

I just updated via the Proxmox GUI, rebooted, and it hangs with this issue.
Nothing else was changed before that :D

And sorry for my bad English, it's not my native language.
I can't upload pictures from my smartphone, the files are too big haha, so I need to type everything out and switch to the desktop hmpf haha

Regarding the typo issue:

I agree with you.

I was using 5.4.34-1-pve before I updated to
5.4.44-2-pve.

x.x.34-1 works,
5.4.44-2 doesn't x)

Here are my host specs:

ASRock A300 barebone
Ryzen 5 3400G CPU
32 GB DDR4-3000 CL16
256 GB Samsung SSD for the host
512 GB NVMe for VMs
1 TB HDD for backups
 
Just wanted to report back:
I installed Proxmox on a different machine I had lying around (with completely different hardware, though) but could not reproduce the problem. Kernel 5.4.44-2 is running flawlessly on that machine. It could depend on the hardware.

The machine on which 5.4.44-1 and 5.4.44-2 do not run has the following specs:
2x Intel Xeon E5-2630 v3
128GB DDR4 ECC RAM
1x Avago LSI SAS 9207-8e (SAS controller, passed through to a VM)
1x Samsung 256GB SSD 860 Pro for host/root file system
1x Samsung 512GB SSD 860 Pro for VMs
Some HDDs for extra space, backups, ...

I installed cryptsetup for encrypted backups (https://forum.proxmox.com/threads/encrypt-backup.8494/), which I also installed on the test machine. During the installation of cryptsetup the initramfs is regenerated and the following warning is displayed:
Code:
cryptsetup: WARNING: The initramfs image may not contain cryptsetup binaries
    nor crypto modules. If that's on purpose, you may want to uninstall the
    'cryptsetup-initramfs' package in order to disable the cryptsetup initramfs
    integration and avoid this warning.

The test machine is running fine so this should not be the cause of the problem.

In /etc/default/grub I enabled IOMMU and set the rootdelay to 10 (I also tested higher values without success):
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on rootdelay=10"

Oh, and a few weeks ago (after 5.4.44-1 was released) I managed to break all installed kernel versions by running update-initramfs -u -k all after fiddling with PCIe passthrough. I reinstalled Proxmox after that incident. I think I did something wrong there, so I'm not sure if this is related to the problem.
 
But the question remains: what changed significantly between the kernels? I use an AMD 3400G and can't use the new kernel, but the older one works fine! So some change between the versions caused the problem.
So I need to know what settings I have to change on my host (maybe in the UEFI),
or whether it is purely a kernel issue...
Questions over questions^^
 
I tried deactivating IOMMU in the UEFI and resetting it to the default settings, but I still get the same "pve not found" problem.

No PCI passthrough is activated on any VM.
 
I have to say that the initial post of this probably contains a typo.
They state that 5.4.44-2 is broken and 5.4.44-1 works, but I think they meant to write 5.4.41-1...


No, the initramfs doesn't look at that, it rather seems like a kernel issue - just trying to find the common factor so that we can try to reproduce it here, or pin it down to a specific change...

Any news? :/
 
Any news? :/
Three things you could still check:
* any information in `dmesg` from the initramfs shell that indicates why it does not find the volume group (e.g. does it detect all block devices? any errors related to storage controllers? ...)
* are the disk device nodes present? (`ls /dev`, `blkid` in the initramfs shell)
* last but not least (the hint above with the cryptsetup utilities reminded me of a similar issue) - maybe something changed in the generation of the initramfs
** boot a live CD and compare both initrd images (the one for the working kernel and the one for the broken one) - on a Debian(-like) system you can use `unmkinitramfs` to unpack them and look for differences (see the sketch below)
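
A minimal sketch of that comparison, assuming the images were copied from /boot of the affected host (the kernel versions and paths are just examples - substitute the ones actually installed):
Code:
# on a Debian(-like) live system or workstation with initramfs-tools installed
mkdir initrd-old initrd-new
unmkinitramfs /path/to/initrd.img-5.4.34-1-pve initrd-old
unmkinitramfs /path/to/initrd.img-5.4.44-2-pve initrd-new
# depending on the image, the files end up directly in the directory or under a main/ subdirectory
diff -r initrd-old initrd-new | less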

I hope this helps!
 
  • Like
Reactions: Loras
@Stoiko Ivanov Thank you for the suggestions.

I copied initrd.img-5.4.41-1-pve and initrd.img-5.4.44-1-pve from my server to my workstation, unpacked both files into two directories and compared both directories with diff. The file etc/lvm/lvm.conf is different (I edited it to exclude two disks from the LVM scan so they spin down after some time):

Code:
$ diff 41/etc/lvm/lvm.conf 44/etc/lvm/lvm.conf
129c129
<     global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|" "r|/dev/mapper/.*-(vm|base)--[0-9]+--disk--[0-9]+|"]
---
>     global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|" "r|/dev/mapper/.*-(vm|base)--[0-9]+--disk--[0-9]+|", "r|/dev/sde*|", "r|/dev/sdf*|" ]

I will revert the change tomorrow or Saturday and check if this was the cause of the problem.

@Nazgile94 Did you edit etc/lvm/lvm.conf, too? Or do you have the original file?

Edit: Just for completeness' sake: there was another file that was different: usr/lib/modprobe.d/blacklist_pve-kernel-5.4.41-1-pve.conf and usr/lib/modprobe.d/blacklist_pve-kernel-5.4.44-1-pve.conf. But in those two files only two lines are at a different position:
Code:
$ diff 41/usr/lib/modprobe.d/blacklist_pve-kernel-5.4.41-1-pve.conf 44/usr/lib/modprobe.d/blacklist_pve-kernel-5.4.44-1-pve.conf
16a17,18
> blacklist iTCO_vendor_support
> blacklist iTCO_wdt
23,24d24
< blacklist iTCO_vendor_support
< blacklist iTCO_wdt
I don't think the position is important.
 
@Loras

my lvm.conf is the default one - nothing edited there.

edit: my lvm.conf line is this:

global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|" "r|/dev/mapper/.*-(vm|base)--[0-9$--[0-9]+--disk--[0-9]+|", "r|/dev/sda*|" ]

Thinking back a bit, I believe I changed something there to stop my backup HDD from constantly spinning up... but I don't remember anymore,
so I can't say if this is 100% the original :D
 
Last edited:
I copied initrd.img-5.4.41-1-pve and initrd.img-5.4.44-1-pve from my server to my workstation, unpacked both files into two directories and compared both directories with diff. The file etc/lvm/lvm.conf is different (I edited it to exclude two disks from the LVM scan so they spin down after some time):
that could very well be the cause in your case! - please keep us posted :)
 
that could very well be the cause in your case! - please keep us posted :)
global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|" "r|/dev/mapper/.*-(vm|base)--[0-9$--[0-9]+--disk--[0-9]+|", "r|/dev/sda*|" ] is my lvm.conf

But I don't see /dev/sdb there, and that is where my root partitions are...

Code:
root@proxmox:~# lsblk
NAME                       MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                          8:0    0 931.5G  0 disk
└─sda1                       8:1    0 931.5G  0 part /mnt/pve/Backups
sdb                          8:16   0 232.9G  0 disk
├─sdb1                       8:17   0  1007K  0 part
├─sdb2                       8:18   0   512M  0 part /boot/efi
└─sdb3                       8:19   0 229.5G  0 part
  ├─pve-swap               253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root               253:1    0  57.3G  0 lvm  /
  ├─pve-data_tmeta         253:2    0   1.5G  0 lvm
  │ └─pve-data             253:4    0 145.3G  0 lvm
  └─pve-data_tdata         253:3    0 145.3G  0 lvm
    └─pve-data             253:4    0 145.3G  0 lvm
nvme0n1                    259:0    0 465.8G  0 disk
├─VMs-VMs_tmeta            253:5    0   4.7G  0 lvm
│ └─VMs-VMs-tpool          253:7    0 456.3G  0 lvm
│   ├─VMs-VMs              253:8    0 456.3G  0 lvm
│   ├─VMs-vm--101--disk--0 253:9    0     2G  0 lvm
│   ├─VMs-vm--100--disk--0 253:10   0     3G  0 lvm
│   ├─VMs-vm--104--disk--0 253:11   0     2G  0 lvm
│   ├─VMs-vm--105--disk--0 253:12   0     8G  0 lvm
│   ├─VMs-vm--108--disk--0 253:13   0    50G  0 lvm
│   ├─VMs-vm--103--disk--0 253:14   0   160G  0 lvm
│   ├─VMs-vm--106--disk--0 253:15   0    32G  0 lvm
│   ├─VMs-vm--107--disk--0 253:16   0     8G  0 lvm
│   ├─VMs-vm--109--disk--0 253:17   0     3G  0 lvm
│   ├─VMs-vm--102--disk--0 253:18   0    10G  0 lvm
│   ├─VMs-vm--102--disk--1 253:19   0   200G  0 lvm
│   ├─VMs-vm--110--disk--0 253:20   0    50G  0 lvm
│   ├─VMs-vm--111--disk--0 253:21   0    32G  0 lvm
│   ├─VMs-vm--112--disk--0 253:22   0    32G  0 lvm
│   ├─VMs-vm--113--disk--0 253:23   0    32G  0 lvm
│   └─VMs-vm--114--disk--0 253:24   0    32G  0 lvm
└─VMs-VMs_tdata            253:6    0 456.3G  0 lvm
  └─VMs-VMs-tpool          253:7    0 456.3G  0 lvm
    ├─VMs-VMs              253:8    0 456.3G  0 lvm
    ├─VMs-vm--101--disk--0 253:9    0     2G  0 lvm
    ├─VMs-vm--100--disk--0 253:10   0     3G  0 lvm
    ├─VMs-vm--104--disk--0 253:11   0     2G  0 lvm
    ├─VMs-vm--105--disk--0 253:12   0     8G  0 lvm
    ├─VMs-vm--108--disk--0 253:13   0    50G  0 lvm
    ├─VMs-vm--103--disk--0 253:14   0   160G  0 lvm
    ├─VMs-vm--106--disk--0 253:15   0    32G  0 lvm
    ├─VMs-vm--107--disk--0 253:16   0     8G  0 lvm
    ├─VMs-vm--109--disk--0 253:17   0     3G  0 lvm
    ├─VMs-vm--102--disk--0 253:18   0    10G  0 lvm
    ├─VMs-vm--102--disk--1 253:19   0   200G  0 lvm
    ├─VMs-vm--110--disk--0 253:20   0    50G  0 lvm
    ├─VMs-vm--111--disk--0 253:21   0    32G  0 lvm
    ├─VMs-vm--112--disk--0 253:22   0    32G  0 lvm
    ├─VMs-vm--113--disk--0 253:23   0    32G  0 lvm
    └─VMs-vm--114--disk--0 253:24   0    32G  0 lvm
root@proxmox:~#

Maybe I need to add it? With
"r|/dev/sdb*|"] ?
 
r|/dev/sda*|"
this is probably the issue - the global filter uses regular expressions, not shell globs:
`sda*` also matches `sd` and thus `sdb`
Code:
global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|" "r|/dev/mapper/.*-(vm|base)--[0-9]+--disk--[0-9]+|"]
try resetting the filter to the default - and regenerate the initrd (only for the current 5.4.44-2 kernel, so that the old ones don't get overwritten with a potentially broken one):
Code:
 update-initramfs -k 5.4.44-2-pve -u
and reboot
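
To see why `sda*` over-matches, here is a quick demonstration with grep -E (not from the thread; LVM filter patterns are unanchored regular expressions, so the matching behaviour is analogous):
Code:
# "a*" means "zero or more 'a'", so the pattern also hits /dev/sdb3 via its /dev/sd prefix
$ echo "/dev/sdb3" | grep -E '/dev/sda*'
/dev/sdb3
# an anchored pattern such as '^/dev/sda[0-9]*$' would only match sda and its partitions,
# but the safest fix here is simply going back to the default filter as described above
$ echo "/dev/sdb3" | grep -E '^/dev/sda[0-9]*$'
$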
 
this is probably the issue - the global filter uses regular expressions, not shell globs:
`sda*` also matches `sd` and thus `sdb`
Code:
global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|" "r|/dev/mapper/.*-(vm|base)--[0-9]+--disk--[0-9]+|"]
try resetting the filter to the default - and regenerate the initrd (only for the current 5.4.44-2 kernel, so that the old ones don't get overwritten with a potentially broken one):
Code:
update-initramfs -k 5.4.44-2-pve -u
and reboot
I did something wrong, now both kernels hang with "pve not found". Maybe I typed something wrong :/

I have a Debian live ISO. Can you maybe quickly explain how to chroot into the Proxmox environment, so I can change the lvm.conf back and run the update-initramfs command? (I haven't done such things before^^ be my little teacher pls haha)
 
Check out this link for activating the LV:
https://documentation.online.net/en/dedicated-server/rescue/mount-lvm-partition
and this one for the necessary bind mounts and chrooting:
https://wiki.debian.org/RescueLive

Fix the global_filter line - then run 'pvs', 'vgs', 'lvs' to see whether the config is OK,
then run the update-initramfs command from above.
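
Roughly, the whole rescue sequence based on those two links could look like this (a sketch, not tested verbatim - it assumes the root VG is called pve as in your lsblk output, and the kernel version is an example):
Code:
# booted from the Debian live ISO (install lvm2 there first if it is missing)
vgscan
vgchange -ay pve
mount /dev/mapper/pve-root /mnt
# bind-mount the pseudo filesystems and enter the installation
for d in dev proc sys run; do mount --bind /$d /mnt/$d; done
chroot /mnt /bin/bash
# inside the chroot: restore the default global_filter, then verify and rebuild the initrd
nano /etc/lvm/lvm.conf
pvs && vgs && lvs
# if /boot were a separate partition it would need to be mounted here first
update-initramfs -k 5.4.44-2-pve -u
# then leave the chroot and reboot
exit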

I hope this helps!

It workssssssssssssssssssss!

I just removed the sda* entry, reset the filter to the default,
then updated the initrd, and now it boots the new kernel woopwoopwoop, finally haha
Thank you a lot!
Is there any safe way to remove the old kernel? If I purge it, apt wants to delete pve-manager too.

THANK YOU SO MUCH! :D
this tiny little bit of config haha

But now... my HDD will spin 24/7 and won't spin down, right? :/

Why does it work on the old kernel and not on the new one? hmm

but tyyyyy <3 *kiss u
Code:
Last login: Thu Jul 16 18:17:24 CEST 2020 from 192.168.178.43 on pts/10
Linux proxmox 5.4.44-2-pve #1 SMP PVE 5.4.44-2 (Wed, 01 Jul 2020 16:37:57 +0200) x86_64


I used this tutorial before:
https://forum.proxmox.com/threads/disk-prevent-from-spinning-down-because-of-pvestatd.53237/
 
Why does it work on the old kernel and not on the new one?
Could have many reasons - but did you run update-initramfs and reboot after changing /etc/lvm/lvm.conf? Because with that line as it stands, I doubt the system would boot.

if you want to keep disks spun down you could try to explicitly list their stable paths '/dev/disk/by-id/ata-....' - haven't tried it myself, so be careful and keep a rescue cd at hand - but if it works that should be stable across kernel upgrades.
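
As an illustration only (not from the thread - the ata-... name is a placeholder, use the real entries shown by `ls -l /dev/disk/by-id/` for the disks LVM should ignore), such a filter could look roughly like this:
Code:
# hypothetical example: keep the default entries and reject the backup disk by its stable by-id path
global_filter = [ "r|/dev/zd.*|", "r|/dev/mapper/pve-.*|" "r|/dev/mapper/.*-(vm|base)--[0-9]+--disk--[0-9]+|", "r|/dev/disk/by-id/ata-EXAMPLE_BACKUP_DISK.*|" ]
# remember to run update-initramfs -u afterwards, as discussed above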

I hope this helps!
 
No, I didn't update the initramfs, but I have learned a lot from this thread for the future haha

Thank you so much!
It works with the disk/by-id paths =)

You're great! Keep it up!
 
