Weird boot configuration on PVE 8.2.7 - how does this even work, and can I fix it?

Tomte

Member
Hello dear Proxmox users and Proxmox experts,

I recently upgraded my cluster from Proxmox 7.x to 8.2.7. This went mostly fine.

I also installed a new PVE system (not yet in the cluster) and noticed that the installer
now defaults to systemd-boot if you install the root fs on ZFS, which is the case on all
my nodes.

So out of curiosity I checked the other nodes; these, however, report something like this:

root@pve-03:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
036C-82E1 is configured with: grub (versions: 5.15.158-2-pve, 6.8.12-2-pve)
C4A3-4C25 is configured with: grub (versions: 5.15.158-2-pve, 6.8.12-2-pve)

OK, so I figured that since these systems were installed earlier (I cannot remember
whether it was PVE 7.x or even earlier), they apparently kept the old setup. Fine.

However, on one of my systems, pve-04, I just get this:

root@pve-04:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
E: /etc/kernel/proxmox-boot-uuids does not exist.
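(If I read that error right, proxmox-boot-tool keeps the list of ESPs it manages in exactly that file, one UUID per line, so on the working nodes I would expect something like the following, while on pve-04 the file simply never existed:)

root@pve-03:~# cat /etc/kernel/proxmox-boot-uuids
036C-82E1
C4A3-4C25
root@pve-04:~# cat /etc/kernel/proxmox-boot-uuids
cat: /etc/kernel/proxmox-boot-uuids: No such file or directory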


The partitions on the boot disks look like this:

root@pve-04:~# fdisk -l /dev/sda
Disk /dev/sda: 894.25 GiB, 960197124096 bytes, 1875385008 sectors
Disk model: MZ7LH960HBJR0D3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 6242A8D4-8014-4A82-B821-D9E8717355B5

Device         Start        End    Sectors   Size Type
/dev/sda1         34       2047       2014  1007K BIOS boot
/dev/sda2       2048    1050623    1048576   512M EFI System
/dev/sda3    1050624  104857600  103806977  49.5G Solaris /usr & Apple ZFS
/dev/sda4  104859648  209717247  104857600    50G Solaris /usr & Apple ZFS
/dev/sda5  209717248 1875384974 1665667727 794.3G Solaris /usr & Apple ZFS

Partition 1 does not start on physical sector boundary.

(/dev/sdb is similar).

I used the installer option to use only 50 GB of the disk space; the installer
then created the first three partitions. sda3 and sdb3 are the "rpool"; sd[ab]4 and sd[ab]5 are
additional pools I added later, manually.
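(As a sanity check of that mapping, something like this should confirm which partitions back which pool; zpool status -P prints full device paths, and zpool list -v shows the vdev layout of every pool:)

root@pve-04:~# zpool status -P rpool
root@pve-04:~# zpool list -v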


I tried mounting the /dev/sda2 and /dev/sdb2 EFI partitions to check what the issue might be,
but I only get this:

root@pve-04:/etc/default# mount /dev/sda2 /mnt/
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/sda2, missing codepage or helper program, or other error.
dmesg(1) may have more information after failed mount system call.

In dmesg I find this for sda2 (but not sdb2):

[Fri Oct 4 10:34:38 2024] FAT-fs (sda2): bogus number of reserved sectors
[Fri Oct 4 10:34:38 2024] FAT-fs (sda2): Can't find a valid FAT filesystem

(Note that the /dev/sda1 and sdb1 "BIOS boot" partitions cannot be mounted either, on my other systems as well, but I cannot
remember whether they are even supposed to be mountable.)
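(A non-destructive way to inspect these partitions instead of trying to mount them: file -s reads the filesystem signature straight off the block device, and lsblk shows what the kernel detected:)

root@pve-04:~# lsblk -o NAME,SIZE,FSTYPE,PARTTYPENAME /dev/sda /dev/sdb
root@pve-04:~# file -s /dev/sda2
root@pve-04:~# file -s /dev/sdb2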

So, my questions:

1) How is this system actually booting? It is running, after all, so I must have done at least the reboot after the pve7to8 migration.
Checking with journalctl --list-boots, I actually see multiple reboots on the day of the upgrade:


-3 54c5a2ca0c60461bacc91ee1bea5c3ec Fri 2024-09-27 11:23:12 CEST Fri 2024-09-27 11:56:16 CEST
-2 f780dac1ed2e416fbf23f18465ed2c6f Fri 2024-09-27 11:58:50 CEST Fri 2024-09-27 12:26:45 CEST
-1 e1512202a4ac490ca3ecdfc1d8d8bd95 Fri 2024-09-27 12:29:24 CEST Fri 2024-09-27 12:59:32 CEST
0 d727b1cf7fde4741a6a33e7205b30ad2 Fri 2024-09-27 13:02:52 CEST Fri 2024-10-04 11:39:05 CEST

journalctl -b -1 then tells me that the previous boot (-1) already used the 6.8.12-2-pve kernel, so that was
the reboot after the "apt dist-upgrade".
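(To double-check, this loop pulls the kernel banner out of each recorded boot; -k restricts the journal to kernel messages:)

root@pve-04:~# for b in -3 -2 -1 0; do journalctl -b $b -k --no-pager | grep -m1 'Linux version'; done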

It may be that the last reboot was triggered by a BIOS update I did on the system via iDRAC.


2) If I try to reboot now, will it actually work?

3) Can I / should I fix this using proxmox-boot-tool init or reinit?

I can remove all VMs from the host before rebooting, and if necessary I can also do a reinstall. But I would like to avoid the reinstall if possible, as it is quite a bit of work.

Many thanks for any answers,

Thomas
 
Wait, I just realized that a normal GRUB config on legacy systems only uses the reserved space in the first partition to store its binary; the core image is written there raw, without a filesystem. So it is normal that this partition cannot be mounted.
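(A way to sanity-check this, assuming I have it right: the first 512 bytes of the disk should contain GRUB's first stage as an MBR boot sector, and grub-probe should resolve /boot, which lives on the root ZFS dataset, to the rpool member partitions:)

root@pve-04:~# dd if=/dev/sda bs=512 count=1 2>/dev/null | file -
root@pve-04:~# grub-probe -t device /boot/grub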
 
After further reading, it seems to me that
a) my pve-04 system uses "plain GRUB", and
b) the other systems are one step ahead: they use something called "GRUB through proxmox-boot-tool", which is also why "proxmox-boot-tool status" gives a meaningful answer on those systems.

The reason is probably that the other systems were reinstalled at some point in between (5.x or 6.x), whereas pve-04 had only ever been upgraded.

I guess I also got confused a bit because "proxmox-boot-tool" is sometimes used as a general frontend even for plain GRUB setups, i.e. "kernel list" prints something and "kernel pin" is supposed to work.

Anyway, would it be a good idea to "upgrade" pve-04 to "GRUB through proxmox-boot-tool", and can this be done using "proxmox-boot-tool init /dev/sda2 grub"?
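If so, my understanding is that the ESPs would first have to be formatted (which would also explain away the "bogus number of reserved sectors" error on sda2, since that partition apparently never held a valid FAT), and then initialized, on both disks. A rough sketch of what I have in mind, to be confirmed before I actually run it (note that format wipes the partition):

root@pve-04:~# proxmox-boot-tool format /dev/sda2
root@pve-04:~# proxmox-boot-tool format /dev/sdb2
root@pve-04:~# proxmox-boot-tool init /dev/sda2 grub
root@pve-04:~# proxmox-boot-tool init /dev/sdb2 grub
root@pve-04:~# proxmox-boot-tool status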
 

