Successful Upgrade from 7 to 8 EXCEPT for kernel

Goggles5503

Updated Proxmox from 7 to 8 without issue...

After the final reboot, ran pve7to8 and got:

“A suitable kernel (pve-kernel-6.2) is installed, but an unsuitable (5.15.30-2-pve) is booted, missing reboot?!”

Multiple reboots and full shutdowns performed. Tried to pin the new kernel with pve-efiboot-tool and proxmox-boot-tool; both show the new kernel pinned, but it still doesn't boot.
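
For reference, the pin attempt was roughly this (exact version string from memory, so treat it as approximate):
Code:
proxmox-boot-tool kernel pin 6.2.16-3-pve
proxmox-boot-tool refresh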

Any help is appreciated
 
When you say the new kernel doesn't boot, or doesn't boot automatically, what happens if you try to select it manually at boot time?

Please post:
Code:
proxmox-boot-tool kernel list
uname -rv
 
When you say the new kernel doesn't boot, or doesn't boot automatically, what happens if you try to select it manually at boot time?
When I hold Shift at boot or go to the advanced options, the only option is 5.15.30-2-pve

Please post:
proxmox-boot-tool kernel list
uname -rv

root@pve1:~# proxmox-boot-tool kernel list
uname -rv
Manually selected kernels:
None.

Automatically selected kernels:
5.15.108-1-pve
5.15.30-2-pve
6.2.16-3-pve

Pinned kernel:
6.2.16-3-pve
5.15.30-2-pve #1 SMP PVE 5.15.30-3 (Fri, 22 Apr 2022 18:08:27 +0200)
root@pve1:~#
 
How old is your initial PVE installation?
lvdisplay /dev/pve/root | grep time

To get more information about the current state, please post the output of the following commands...
apt list pve-kernel-* --installed

ls -1 /boot/v*

df -hT | grep -e Size -e boot

... and afterwards run ...
dpkg-reconfigure pve-kernel-6.2.16-3-pve

... which could fix your issue.
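
For context, dpkg-reconfigure just re-runs the kernel package's install hooks. If that does not help, the rough manual equivalent on a standard GRUB/ext4 install (version taken from above) would be to regenerate the initramfs and update the bootloader config:
Code:
update-initramfs -u -k 6.2.16-3-pve
update-grub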
 
How old is your initial PVE installation?
LV Creation host, time proxmox, 2022-07-21 18:32:10 -0400


apt list pve-kernel-* --installed
pve-kernel-5.15.104-1-pve/now 5.15.104-2 amd64 [installed,local]
pve-kernel-5.15.108-1-pve/now 5.15.108-1 amd64 [installed,local]
pve-kernel-5.15.30-2-pve/now 5.15.30-3 amd64 [installed,local]
pve-kernel-5.15/now 7.4-4 all [installed,local]
pve-kernel-6.2.16-3-pve/stable,now 6.2.16-3 amd64 [installed,automatic]
pve-kernel-6.2/stable,now 8.0.2 all [installed,automatic]

ls -1 /boot/v*
/boot/vmlinuz-5.15.104-1-pve
/boot/vmlinuz-5.15.108-1-pve
/boot/vmlinuz-5.15.30-2-pve
/boot/vmlinuz-6.2.16-3-pve

df -hT | grep -e Size -e boot
Filesystem Type Size Used Avail Use% Mounted on
/dev/sdd2 vfat 511M 328K 511M 1% /boot/efi
... which could fix your issue.
Running the command and rebooting. Will report back shortly.
 
When I hold Shift at boot or go to the advanced options, the only option is 5.15.30-2-pve



root@pve1:~# proxmox-boot-tool kernel list
uname -rv
Manually selected kernels:
None.

Automatically selected kernels:
5.15.108-1-pve
5.15.30-2-pve
6.2.16-3-pve

Pinned kernel:
6.2.16-3-pve
5.15.30-2-pve #1 SMP PVE 5.15.30-3 (Fri, 22 Apr 2022 18:08:27 +0200)
root@pve1:~#

After you UNpin the 5.15 kernel and do a proxmox-boot-tool refresh,
what is the output of # proxmox-boot-tool kernel list

Maybe do a # apt reinstall pve-kernel-6.2
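
Spelled out, that sequence would be something like this (unpin clears any pinned version, and the reinstall re-runs the 6.2 meta-package's hooks):
Code:
proxmox-boot-tool kernel unpin
proxmox-boot-tool refresh
apt reinstall pve-kernel-6.2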
 
After you UNpin the 5.15 kernel and do a proxmox-boot-tool refresh,
what is the output of # proxmox-boot-tool kernel list
root@pve1:~# proxmox-boot-tool kernel list
Manually selected kernels:
None.

Automatically selected kernels:
5.15.108-1-pve
5.15.30-2-pve
6.2.16-3-pve

Pinned kernel:
6.2.16-3-pve
root@pve1:~#


Maybe do a # apt reinstall pve-kernel-6.2
Ran successfully (no errors thrown). Rebooted. Still on 5.x.
 
It seems you are booting from the wrong disk/partition.
What is the output of lsblk, mount, and cat /etc/fstab?

Edit: ZFS or ext4 install? EFI or legacy boot?
 
Yes, very likely you are not booting from the disk you (or proxmox-boot-tool) think you are booting from. What does `proxmox-boot-tool status` say? What about `lsblk --exclude 230`?
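
If it helps, a quick way to cross-check which partition is actually mounted at /boot/efi and what the firmware booted (efibootmgr only applies on a UEFI system):
Code:
findmnt /boot/efi
lsblk -o NAME,UUID,SIZE,MOUNTPOINTS
efibootmgr -v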
 
root@pve1:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sdd 8:48 1 745.2G 0 disk
├─sdd1 8:49 1 1007K 0 part
├─sdd2 8:50 1 512M 0 part /boot/efi
└─sdd3 8:51 1 744.7G 0 part
├─pve-swap 253:0 0 8G 0 lvm [SWAP]
├─pve-root 253:1 0 96G 0 lvm /
├─pve-data_tmeta 253:2 0 6.3G 0 lvm
│ └─pve-data-tpool 253:9 0 612.2G 0 lvm
│ ├─pve-data 253:10 0 612.2G 1 lvm
│ ├─pve-vm--100--disk--0 253:11 0 200G 0 lvm
│ ├─pve-vm--500--disk--0 253:12 0 30G 0 lvm
│ ├─pve-vm--101--disk--0 253:13 0 4M 0 lvm
│ ├─pve-vm--101--disk--1 253:14 0 32G 0 lvm
│ ├─pve-vm--502--disk--0 253:15 0 32G 0 lvm
│ ├─pve-vm--504--disk--0 253:16 0 50G 0 lvm
│ ├─pve-vm--102--disk--0 253:17 0 32G 0 lvm
│ └─pve-vm--501--disk--0 253:18 0 30G 0 lvm
└─pve-data_tdata 253:3 0 612.2G 0 lvm
└─pve-data-tpool 253:9 0 612.2G 0 lvm
├─pve-data 253:10 0 612.2G 1 lvm
├─pve-vm--100--disk--0 253:11 0 200G 0 lvm
├─pve-vm--500--disk--0 253:12 0 30G 0 lvm
├─pve-vm--101--disk--0 253:13 0 4M 0 lvm
├─pve-vm--101--disk--1 253:14 0 32G 0 lvm
├─pve-vm--502--disk--0 253:15 0 32G 0 lvm
├─pve-vm--504--disk--0 253:16 0 50G 0 lvm
├─pve-vm--102--disk--0 253:17 0 32G 0 lvm
└─pve-vm--501--disk--0 253:18 0 30G 0 lvm

root@pve1:~# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=32847152k,nr_inodes=8211788,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=6576304k,mode=755,inode64)
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=14936)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
none on /run/credentials/systemd-sysusers.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
none on /run/credentials/systemd-tmpfiles-setup-dev.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
none on /run/credentials/systemd-sysctl.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
/dev/sdd2 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
none on /run/credentials/systemd-tmpfiles-setup.service type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=6576300k,nr_inodes=1644075,mode=700,inode64)
root@pve1:~#
cat /etc/fstab
root@pve1:~# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=408D-F643 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
root@pve1:~#


Edit: forgot to answer this... ext4, and I think legacy boot, though I can't remember for certain.
 
proxmox-boot-tool status
root@pve1:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
E: /etc/kernel/proxmox-boot-uuids does not exist.
root@pve1:~#


lsblk --exclude 230
root@pve1:~# lsblk --exclude 230
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sdd 8:48 1 745.2G 0 disk
├─sdd1 8:49 1 1007K 0 part
├─sdd2 8:50 1 512M 0 part /boot/efi
└─sdd3 8:51 1 744.7G 0 part
├─pve-swap 253:0 0 8G 0 lvm [SWAP]
├─pve-root 253:1 0 96G 0 lvm /
├─pve-data_tmeta 253:2 0 6.3G 0 lvm
│ └─pve-data-tpool 253:9 0 612.2G 0 lvm
│ ├─pve-data 253:10 0 612.2G 1 lvm
│ ├─pve-vm--100--disk--0 253:11 0 200G 0 lvm
│ ├─pve-vm--500--disk--0 253:12 0 30G 0 lvm
│ ├─pve-vm--101--disk--0 253:13 0 4M 0 lvm
│ ├─pve-vm--101--disk--1 253:14 0 32G 0 lvm
│ ├─pve-vm--502--disk--0 253:15 0 32G 0 lvm
│ ├─pve-vm--504--disk--0 253:16 0 50G 0 lvm
│ ├─pve-vm--102--disk--0 253:17 0 32G 0 lvm
│ └─pve-vm--501--disk--0 253:18 0 30G 0 lvm
└─pve-data_tdata 253:3 0 612.2G 0 lvm
└─pve-data-tpool 253:9 0 612.2G 0 lvm
├─pve-data 253:10 0 612.2G 1 lvm
├─pve-vm--100--disk--0 253:11 0 200G 0 lvm
├─pve-vm--500--disk--0 253:12 0 30G 0 lvm
├─pve-vm--101--disk--0 253:13 0 4M 0 lvm
├─pve-vm--101--disk--1 253:14 0 32G 0 lvm
├─pve-vm--502--disk--0 253:15 0 32G 0 lvm
├─pve-vm--504--disk--0 253:16 0 50G 0 lvm
├─pve-vm--102--disk--0 253:17 0 32G 0 lvm
└─pve-vm--501--disk--0 253:18 0 30G 0 lvm
root@pve1:~#
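
Since /etc/kernel/proxmox-boot-uuids does not exist, it looks like proxmox-boot-tool is not managing any ESP here, so whichever disk's GRUB the firmware picks is in charge. One way to cross-check is to compare the UUID referenced in fstab against what is actually present and mounted (UUID taken from the fstab above):
Code:
blkid -t UUID=408D-F643
findmnt -no SOURCE /boot/efi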
 
I think @_gabriel and @fabian are correct that it's the disk, and I think I know why, but not necessarily how to fix it.

A while back I was trying to pass through some HDDs to my TrueNAS VM. Regrettably, I passed through the wrong thing and somewhat bricked my PVE box. On a whim, I switched the two SSDs' SATA connections, thinking perhaps I had passed through the LSI controller. For whatever reason, PVE came back to life... I deleted the TrueNAS VM and went on my way. I don't know if I ever switched the SSDs back... All I know is my PVE box continued to chug along without issue, and I rebuilt the TrueNAS VM, passing the correct card through.

To test your theory, I switched the drives around. PVE boots, and in the advanced options I have the 6.2 kernel... I can even boot from it. However, although it boots, I can't get the PVE web console to load.

I'm thinking it might be best for me to figure out which drive has the PVE install and which has the VMs, and just format the PVE drive and start fresh. Please let me know if there is a better option.

Also, a tremendous thank you to the people who have helped on this!
 
I am pretty sure it is fixable, but it might require a bit more know-how than you (currently) have.
 
I am pretty sure it is fixable, but it might require a bit more know-how than you (currently) have.
I can confirm you are correct :)

Hypothetically, if disk 1 is the 'correct' boot disk with the PVE installation and disk 2 has the LXCs and VMs: if I format disk 1 and reinstall PVE, will disk 2 be found, and will the LXCs and VMs still be there?
 
Not found automatically, but you can then re-add the storage and the disks should be visible again (whether that requires extra steps beyond the entry in /etc/pve/storage.cfg depends on the storage ;)). The guest configs are stored on the / partition though (in a database that gets mounted on /etc/pve), so you either need to recreate them from scratch/memory, or restore them from a backup, if you have one.
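
For example, a storage.cfg entry for the default LVM-thin pool typically looks like this (the names assume the stock pve volume group and data thin pool visible in the lsblk output above):
Code:
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images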
 
