Kernel version mismatch?

flexu

New Member
Nov 16, 2022
Hey, I've been running into an issue with kernel versions lately.
I tried to install the hpsa driver using DKMS and got this error:

Code:
~# dkms build -m hpsa -v 3.4.20
Error! Your kernel headers for kernel 4.15.18-12-pve cannot be found.
Please install the linux-headers-4.15.18-12-pve package,
or use the --kernelsourcedir option to tell DKMS where it's located

But the kernel version I'm supposed to be running is:

Code:
pveversion -v |grep kernel
proxmox-ve: 6.4-1 (running kernel: 4.15.18-12-pve)
pve-kernel-5.4: 6.4-20
pve-kernel-helper: 6.4-20
pve-kernel-5.4.203-1-pve: 5.4.203-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
I have the header files for the 'installed' version (5.4) but not for the kernel that's actually running.

What is a good way to proceed from here?
 
Run apt install pve-headers-4.15.18-12-pve to install the kernel headers for your running kernel version.
Or reboot the host into one of the newer kernels (and install those kernel headers, if needed) and try again.
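For reference, a minimal sketch of the two steps (assuming the pve-headers package for the running 4.15 kernel is still available in your configured repositories; the -k flag pins the DKMS build to a specific kernel):

Code:
apt install pve-headers-$(uname -r)          # headers matching the running kernel
dkms build -m hpsa -v 3.4.20 -k $(uname -r)  # build against that kernel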

PS: Because Proxmox 6 is out of support, a good way to proceed would be to upgrade to the latest Proxmox 7.2.
 
Yeah, sadly I can't get the 7.2 installer to work at the moment; it seems to be a graphics driver issue.
I already updated my GRUB config, but I can't get it to boot the correct kernel. I'm currently booted into 4.15.18-12-pve even though my only GRUB entries are:
Code:
update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.4.203-1-pve
Found initrd image: /boot/initrd.img-5.4.203-1-pve
Found linux image: /boot/vmlinuz-5.4.106-1-pve
Found initrd image: /boot/initrd.img-5.4.106-1-pve
Found memtest86+ image: /boot/memtest86+.bin
Found memtest86+ multiboot image: /boot/memtest86+_multiboot.bin
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
done
I've rebooted multiple times but I can't get it to boot into the new kernel :c
 
If you upgrade your installation (there is a How-To somewhere), you don't need the 7.2 installer. Or try the 7.1 installer for a fresh install.
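Roughly, the in-place path looks like this; this is only the gist, the official "Upgrade from 6.x to 7.0" wiki article has the full checklist:

Code:
pve6to7 --full   # upgrade checklist tool shipped with the latest PVE 6.4
sed -i 's/buster\/updates/bullseye-security/g;s/buster/bullseye/g' /etc/apt/sources.list
apt update && apt dist-upgrade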
That's strange. Does your system use systemd-boot instead of GRUB? What are the outputs of cat /proc/cmdline and cat /etc/kernel/cmdline? I think proxmox-boot-tool was still named pve-efiboot-tool in Proxmox 6. Maybe you need pve-efiboot-tool refresh instead of update-grub.
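For reference, both checks are read-only; /etc/kernel/cmdline normally only exists on systemd-boot (proxmox-boot-tool) managed installs:

Code:
cat /proc/cmdline         # kernel and options the system actually booted with
cat /etc/kernel/cmdline   # only present on systemd-boot managed installs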
 
It looks like EFI variables are not supported, so GRUB is used in BIOS/legacy mode.
Interestingly enough, this is my cmdline:
Code:
cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-4.15.18-12-pve root=/dev/mapper/pve-root ro quiet

cat: /etc/kernel/cmdline: No such file or directory

Proxmox boot tool does exist but I also tried what you suggested:
Code:
 pve-efiboot-tool refresh
Running hook script 'proxmox-auto-removal'..
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.

I did what you suggested and am waiting on a reboot now; I'll update once it's done and hopefully still working.
As further information: the hardware I'm using is kind of old (an HPE DL360 G6 with an HP Smart Array P410i RAID controller; this is what I need the hpsa driver for, as it doesn't seem to be included in either 4.15 or 5.4).
 
Update: it is still running the old kernel despite the pve-efiboot-tool refresh: Linux vm-01 4.15.18-12-pve #1 SMP PVE 4.15.18-35 (Wed, 13 Mar 2019 08:24:42 +0100) x86_64
 
Your system is actually using GRUB and not systemd-boot. That's fine, but it's unclear why the changes by update-grub are not affecting your boot drive. I have no explanation yet...
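One sanity check (assuming GRUB's config is in the default location) is to list which kernels the generated config actually contains:

Code:
grep -o 'vmlinuz-[^ ]*' /boot/grub/grub.cfg | sort -u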
There is no point in pve-efiboot-tool refresh, as your system is not using UEFI to boot and you are not booting from a ZFS pool.
I don't know anything about HP hardware. Maybe we should not try too hard, as this implies that you don't want to boot with a newer kernel? Or make sure to install pve-kernel-4.18 as well?

What is the output of lsblk -o NAME,TYPE,PARTTYPENAME,MOUNTPOINT?
 
I don't really care too much about the kernel version, but it would be nice to use a newer one.
The output of the command is below. My lsblk doesn't support the PARTTYPENAME column, so I removed it (lsblk: unknown column: PARTTYPENAME,MOUNTPOINT).
Code:
lsblk -o NAME,TYPE,MOUNTPOINT
NAME                        TYPE MOUNTPOINT
sda                         disk
├─sda1                      part
├─sda2                      part
└─sda3                      part
  ├─pve--OLD--03CD316A-swap lvm 
  └─pve--OLD--03CD316A-root lvm 
sdb                         disk
├─sdb1                      part
├─sdb2                      part
└─sdb3                      part
  ├─pve-root                lvm  /
  └─pve-swap                lvm  [SWAP]
sdc                         disk
└─sdc1                      part
sr0                         rom
 
I guess my lsblk is newer, but it would be nice to know the type of the partitions. My guess at the moment is that your system is booting from sda while update-grub is updating sdb, or the other way around. Can you check in the system BIOS which drive is used and try to boot from the other one?
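A read-only way to check from the shell which disks have GRUB boot code in their MBR (the stage-1 loader embeds a literal 'GRUB' string in the first sector):

Code:
dd if=/dev/sda bs=512 count=1 2>/dev/null | strings | grep GRUB
dd if=/dev/sdb bs=512 count=1 2>/dev/null | strings | grep GRUB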
 
I should be booting from sdb, as that is a USB drive inside the server that has PVE 6.4 installed. The PVE version on the sdaX partitions is still 5.x, so I doubt that's what it's booting from. I also made sure to set the USB drive as priority 1 in the BIOS. I can't check the BIOS at the moment as I'm not in the datacenter atm :c
 
Maybe grub-install /dev/sdb (maybe followed by update-grub) might fix your issue. Or maybe update-grub is updating stuff on sda, which has no effect on your boot.
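Concretely, assuming sdb really is the drive the server boots from, that would be:

Code:
grub-install /dev/sdb   # (re)write GRUB's boot code to sdb's MBR
update-grub             # regenerate /boot/grub/grub.cfg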
It could be that it boots partially from both. The initial part of GRUB could be in sda1 or sdb1 (or both), /boot (with the never-updated kernel files) could be sda2 or sdb2, and then the (old) kernel mounts /dev/mapper/pve-root, which is within sdb3, as its root.
I'm not dealing with such things often enough to be absolutely sure which is the case on your system, sorry.
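Two read-only checks that might narrow it down: whether a separate partition is mounted at /boot at all, and which kernel images the running system can see there:

Code:
findmnt /boot
ls -l /boot/vmlinuz-*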

EDIT: Removing sda would remove a lot of the confusion, but it could also leave the system unbootable (until you put it back), and I guess that won't be possible remotely.
 
Yeah, it looks like sdb2 isn't mounted to /boot or /boot/efi, hence no update on the actual boot partition is happening. I just tried to mount it and it seems to have a problem with that, which probably also explains why it didn't auto-mount in the first place:
Code:
mount /dev/sdb2 /boot/efi
mount: /boot/efi: wrong fs type, bad option, bad superblock on /dev/sdb2, missing codepage or helper program, or other error.
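Before mounting, it might help to check what is actually on that partition; both commands only read metadata:

Code:
blkid /dev/sdb2      # detected filesystem type and UUID, if any
file -s /dev/sdb2    # inspect the raw superblock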
 
It seems you were correct: the server boots from the old boot folder and drive, but uses the new mount point as the root partition?


Code:
NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                           8:0    0   1.7T  0 disk
├─sda1                        8:1    0  1007K  0 part
├─sda2                        8:2    0   512M  0 part
└─sda3                        8:3    0   1.7T  0 part
  ├─pve--OLD--03CD316A-swap 253:0    0     8G  0 lvm 
  └─pve--OLD--03CD316A-root 253:1    0    96G  0 lvm 
sdb                           8:16   0 931.5G  0 disk
└─sdb1                        8:17   0 931.5G  0 part
sdc                           8:32   1  28.7G  0 disk
├─sdc1                        8:33   1  1007K  0 part
├─sdc2                        8:34   1   512M  0 part
└─sdc3                        8:35   1  28.1G  0 part
  ├─pve-root                253:2    0     7G  0 lvm  /
  └─pve-swap                253:3    0   3.5G  0 lvm  [SWAP]
sr0                          11:0    1  1024M  0 rom 
root@vm-01:~# mount /dev/pve-OLD-03CD316A/root /mnt
root@vm-01:~# ls /mnt
bin  boot  dev  endlessh.log  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@vm-01:~# ls /mnt/boot
config-4.15.18-12-pve  grub                       memtest86+.bin            pve                        vmlinuz-4.15.18-12-pve
efi                    initrd.img-4.15.18-12-pve  memtest86+_multiboot.bin  System.map-4.15.18-12-pve
 
I was able to install PVE 7.2 with some kernel boot flags: intel_idle.max_cstate=0 intel_iommu=igfx_off. Closing the thread.
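For anyone finding this later: to persist such flags after installation on a GRUB-booted system, they can go into /etc/default/grub (a sketch; adjust to your existing defaults):

Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_idle.max_cstate=0 intel_iommu=igfx_off"
# then regenerate the config:
update-grub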
 