Update Proxmox results in grub question

nontijt

New Member
Sep 11, 2022
Hi,
I just did a refresh in the update tab of Proxmox 7.2.7 and started the upgrade.

Halfway through, I got the following messages:
[Screenshot: 2022-09-11 00_14_06-pve - Proxmox Console.png]

[Screenshot: 2022-09-11 00_16_16-pve - Proxmox Console.png]

The weird part is that I have Proxmox running on a ZFS mirror of 2x 1 TB SSDs, as shown in the picture below.
[Screenshot: 2022-09-11 00_06_03-pve - Proxmox Virtual Environment.png]
sda and sdb together form the ZFS mirror; one of them is set as the boot disk in the BIOS.


[Screenshot: 2022-09-10 23_59_37-pve - Proxmox Virtual Environment.png]

I am at a loss. I am worried that selecting GRUB to install on both SSDs (which make up the ZFS rpool) will mess up the data on the ZFS mirror.
What should I do next, how should I proceed?

When I do not select any disk, I get this response:


[Screenshot: 2022-09-11 00_01_01-pve - Proxmox Virtual Environment.png]

So I pressed No again, and I am now waiting at the disk selection screen.

I can still open a shell. I ran
proxmox-boot-tool status
This is the result:
[Screenshot: proxmox-boot-tool status.png]


I would kindly appreciate your advice.
 
I stepped into Proxmox at version 5 or so, so not 6.4+.

I am just adding some information below, so that whoever reads this can get to the culprit more quickly. :)

Following this info:
https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool Thanks @janssensm for the links in the other thread.

1. Check if root is on ZFS

root@pve:~# findmnt /
TARGET SOURCE           FSTYPE OPTIONS
/      rpool/ROOT/pve-1 zfs    rw,relatime,xattr,noacl

So, root is on ZFS.
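As a side note, this check can be scripted. A minimal sketch, assuming `findmnt` from util-linux is available (the `is_zfs_root` helper name is my own, not from the wiki):

```shell
# Minimal sketch: detect whether the root filesystem is ZFS.
# findmnt -n -o FSTYPE / prints only the filesystem type of the root mount.
is_zfs_root() {
  [ "$1" = "zfs" ]
}

fstype=$(findmnt -n -o FSTYPE / 2>/dev/null || true)
if is_zfs_root "$fstype"; then
  echo "root is on ZFS"
else
  echo "root is on ${fstype:-unknown}"
fi
```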

2. Check which bootloader is used

root@pve:~# ls /sys/firmware/efi
config_table  esrt              fw_vendor  runtime-map  vars
efivars       fw_platform_size  runtime    systab

The directory /sys/firmware/efi exists, so the system was booted in UEFI mode.
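The rule of thumb from the wiki: if /sys/firmware/efi is present, the system booted via UEFI; if not, legacy BIOS. A small sketch (the `boot_mode` helper name is mine; it takes the path as an argument so it can be tried against any directory):

```shell
# Sketch: classify the boot mode from the presence of the EFI sysfs dir.
# On a real system, pass /sys/firmware/efi.
boot_mode() {
  if [ -d "$1" ]; then
    echo "UEFI"
  else
    echo "legacy BIOS"
  fi
}

boot_mode /sys/firmware/efi
```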

3. Finding potential ESPs
root@pve:~# lsblk -o +FSTYPE
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT FSTYPE
sda 8:0 0 931.5G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part vfat
└─sda3 8:3 0 931G 0 part zfs_member
sdb 8:16 0 931.5G 0 disk
├─sdb1 8:17 0 1007K 0 part
├─sdb2 8:18 0 512M 0 part vfat
└─sdb3 8:19 0 931G 0 part zfs_member
zd0 230:0 0 1M 0 disk
zd16 230:16 0 64G 0 disk
├─zd16p1 230:17 0 200M 0 part vfat
└─zd16p2 230:18 0 63.8G 0 part apfs
zd32 230:32 0 32G 0 disk
├─zd32p1 230:33 0 32M 0 part vfat
├─zd32p2 230:34 0 24M 0 part ext4
├─zd32p3 230:35 0 256M 0 part squashfs
├─zd32p4 230:36 0 24M 0 part
├─zd32p5 230:37 0 256M 0 part
├─zd32p6 230:38 0 8M 0 part
├─zd32p7 230:39 0 96M 0 part ext4
└─zd32p8 230:40 0 31.3G 0 part ext4
zd48 230:48 0 42G 0 disk
├─zd48p1 230:49 0 8G 0 part ext4
├─zd48p2 230:50 0 1K 0 part
└─zd48p5 230:53 0 8G 0 part
zd64 230:64 0 1M 0 disk
zd80 230:80 0 45G 0 disk
├─zd80p1 230:81 0 32M 0 part vfat
├─zd80p2 230:82 0 24M 0 part ext4
├─zd80p3 230:83 0 256M 0 part squashfs
├─zd80p4 230:84 0 24M 0 part ext4
├─zd80p5 230:85 0 256M 0 part squashfs
├─zd80p6 230:86 0 8M 0 part
├─zd80p7 230:87 0 96M 0 part ext4
└─zd80p8 230:88 0 44.3G 0 part ext4
zd96 230:96 0 42G 0 disk
└─zd96p1 230:97 0 42G 0 part ext4
zd112 230:112 0 1M 0 disk
nvme0n1 259:0 0 931.5G 0 disk
├─nvme0n1p1 259:1 0 372.6G 0 part /mnt/nvme_storage1 ext4
└─nvme0n1p2 259:2 0 558.9G 0 part /mnt/nvme_storage2 ext4
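From that listing, the candidate ESPs are the 512M vfat partitions sda2 and sdb2. A hedged sketch of filtering them out automatically (the `find_vfat_parts` helper is my own naming); note it would also catch the vfat partitions inside the VM zvols (zd16p1, zd32p1, zd80p1), so the result still needs a human eye:

```shell
# Sketch: filter lsblk-style "NAME FSTYPE SIZE" lines down to vfat
# partitions (ESP candidates), printing only the device names.
find_vfat_parts() {
  awk '$2 == "vfat" { print $1 }'
}

# Demo on a trimmed version of the lsblk output above:
printf 'sda2 vfat 512M\nsda3 zfs_member 931G\nsdb2 vfat 512M\n' | find_vfat_parts
# → prints sda2 and sdb2, each on its own line
```

On a live system this would be fed from `lsblk -rno NAME,FSTYPE,SIZE | find_vfat_parts`.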

I never did this before, but from the output above I assume I CAN switch to proxmox-boot-tool: root is on ZFS, and each mirror disk has a 512M vfat partition (sda2/sdb2) that could serve as an ESP.
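For reference, the wiki page linked above boils down to formatting and initializing each ESP with proxmox-boot-tool. A dry-run sketch: the device paths are assumptions based on my lsblk output, and the commands are only echoed here, never executed, so this is safe to run as-is:

```shell
# Dry-run sketch of the switch procedure from the linked wiki page.
# ESP paths are assumptions; verify with lsblk before running for real.
switch_to_boot_tool() {
  for esp in "$@"; do
    echo "proxmox-boot-tool format $esp"
    echo "proxmox-boot-tool init $esp"
  done
  echo "proxmox-boot-tool refresh"
}

switch_to_boot_tool /dev/sda2 /dev/sdb2
```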
 
In the docs I found the following command to determine which bootloader is used:
efibootmgr -v

The response is below:
[Screenshot: efibootmgr -v.png]
 
To finish this thread: it is solved now.

See this post, >>> click <<<, and possibly others around it in this thread for how I solved it. :)

Happy camper again.


One question remains for me. @janssensm maybe?
Is my system now set up correctly for Proxmox on a ZFS mirror with regard to the bootloader: systemd-boot or legacy GRUB?

If this is still incorrect, I will probably run into issues at a later moment in time. I'd like to fix it now then. :)
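For anyone checking the same thing later: `proxmox-boot-tool status` reports, for each registered ESP, whether it is configured with uefi (systemd-boot) or grub. A small sketch of classifying such a status line; the line format is an assumption based on the documentation, not verified output, and the helper name is mine:

```shell
# Sketch: classify a proxmox-boot-tool status line. The assumed format is
# "<ESP-UUID> is configured with: uefi (versions: ...)" or "...: grub (...)".
classify_esp_line() {
  case "$1" in
    *"configured with: uefi"*) echo "systemd-boot" ;;
    *"configured with: grub"*) echo "grub" ;;
    *) echo "unknown" ;;
  esac
}

# Hypothetical example line for illustration:
classify_esp_line "4DE5-BFFA is configured with: uefi (versions: 5.15.39-1-pve)"
# → prints systemd-boot
```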

Anyway: many thanks to all of you for this awesome support, and to the forum users as well!
 