PVE UEFI Boot health

jvanhambelgium

New Member
Mar 18, 2025
Hi,

I'm running two standalone PVE nodes and noticed a difference in the UEFI boot config.
On one of the nodes I performed an apt-get upgrade and it suddenly asked some questions about GRUB and where I wanted to install it.
I cancelled, but I'm not 100% confident in the "health" of the boot sequence on both machines, as there is a difference.

-> On the DC2-PVE node I have not yet performed the apt-get upgrade, hence I still have the Boot0000* proxmox entry, which is gone on DC1-PVE.
I'm a bit worried it would not boot properly, although it looks like DC1-PVE would boot the exact same HD(2,GPT,6e9e8f34-188f-417c-b5e6-1edb97cf97ce,0x800,0x200000)/File(\EFI\BOOT\BOOTX64.EFI)..BO as the other one, DC2-PVE.

Any tips on how I can verify the overall health of this and make sure it boots properly?
What is this extra "proxmox" entry on DC2-PVE compared to DC1-PVE, and is it really needed?


DC1-PVE
root@dc1-pve:~# efibootmgr -v
BootCurrent: 0004
Timeout: 1 seconds
BootOrder: 0004,0002,0001
Boot0001 Hard Drive BBS(HD,,0x0)/VenHw(5ce8128b-2cec-40f0-8372-80640e3dc858,0200)..GO..NO..........S.A.M.S.U.N.G. .M.Z.7.L.3.3.T.8.H.B.L.T.-.0.0.A.0.7...................\.,.@.r.d.=.X..........A...........................>..Gd-.;.A..MQ..L.6.S.R.E.T.N.X.0.0.1.2.2.5.2. . . . . . ........BO..NO........}.T.S.2.5.6.G.M.T.E.6.7.2.A...................\.,.@.r.d.=.X..........A..0.......................|5HRM........*..Gd-.;.A..MQ..L.I.8.9.0.4.8.0.0.5.7........BO..NO..........S.A.M.S.U.N.G. .M.Z.7.L.3.3.T.8.H.B.L.T.-.0.0.A.0.7...................\.,.@.r.d.=.X..........A...........................>..Gd-.;.A..MQ..L.6.S.R.E.T.N.X.0.0.1.2.2.2.2. . . . . . ........BO
Boot0002 UEFI: Built-in EFI Shell VenMedia(5023b95c-db26-429b-a648-bd47664c8012)..BO
Boot0004* UEFI OS HD(2,GPT,6e9e8f34-188f-417c-b5e6-1edb97cf97ce,0x800,0x200000)/File(\EFI\BOOT\BOOTX64.EFI)..BO
MirrorStatus: Platform does not support address range mirror
DesiredMirroredPercentageAbove4G: 0.00
DesiredMirrorMemoryBelow4GB: false
root@dc1-pve:~#

DC2-PVE
root@dc2-pve:~# efibootmgr -v
BootCurrent: 0000
Timeout: 1 seconds
BootOrder: 0000,0005,0002,0001
Boot0000* proxmox HD(2,GPT,6e9e8f34-188f-417c-b5e6-1edb97cf97ce,0x800,0x200000)/File(\EFI\proxmox\grubx64.efi)
Boot0001 Hard Drive BBS(HD,,0x0)/VenHw(5ce8128b-2cec-40f0-8372-80640e3dc858,0200)..GO..NO........}.T.S.2.5.6.G.M.T.E.6.7.2.A...................\.,.@.r.d.=.X..........A..0.......................|5HRM........*..Gd-.;.A..MQ..L.I.8.9.0.4.8.0.0.4.6........BO..NO..........S.A.M.S.U.N.G. .M.Z.7.L.3.3.T.8.H.B.L.T.-.0.0.A.0.7...................\.,.@.r.d.=.X..........A...........................>..Gd-.;.A..MQ..L.6.S.R.E.T.N.X.0.0.1.2.2.3.2. . . . . . ........BO..NO..........S.A.M.S.U.N.G. .M.Z.7.L.3.3.T.8.H.B.L.T.-.0.0.A.0.7...................\.,.@.r.d.=.X..........A...........................>..Gd-.;.A..MQ..L.6.S.R.E.T.N.X.0.0.1.2.2.6.2. . . . . . ........BO
Boot0002 UEFI: Built-in EFI Shell VenMedia(5023b95c-db26-429b-a648-bd47664c8012)..BO
Boot0005* UEFI OS HD(2,GPT,6e9e8f34-188f-417c-b5e6-1edb97cf97ce,0x800,0x200000)/File(\EFI\BOOT\BOOTX64.EFI)..BO
MirrorStatus: Platform does not support address range mirror
DesiredMirroredPercentageAbove4G: 0.00
DesiredMirrorMemoryBelow4GB: false
root@dc2-pve:~#
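For what it's worth, the two loader paths in question can be checked mechanically against saved `efibootmgr -v` output. A minimal sketch (the paths are the ones from the dumps above; the grep patterns themselves are my own):

```shell
#!/bin/sh
# Sketch: scan saved `efibootmgr -v` output for the two loader paths that
# appear in this thread. Fixed-string grep (-F) so the backslashes in the
# EFI paths are taken literally.

# True if the output contains the removable fallback loader,
# \EFI\BOOT\BOOTX64.EFI -- the "UEFI OS" entry both nodes boot from.
has_fallback_loader() {
    printf '%s\n' "$1" | grep -qF '\EFI\BOOT\BOOTX64.EFI'
}

# True if the output contains the dedicated GRUB entry,
# \EFI\proxmox\grubx64.efi -- the Boot0000* "proxmox" entry on DC2-PVE.
has_grub_entry() {
    printf '%s\n' "$1" | grep -qF '\EFI\proxmox\grubx64.efi'
}
```

Run against the dumps above, DC1-PVE only has the fallback loader while DC2-PVE has both.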

Thanks!
 
Please post the /var/log/apt/term.log contents for that upgrade on the first node (and don't reboot until it's clear what's going on, or you might need to do manual recovery!)
 
I know my issue is more of a basic Debian/Linux issue and not Proxmox-specific, but thanks for looking at it.
The terminal log is quite long and contains raw output from the terminal window, so I attached it as a file.

When you read through it to the point of the GRUB install, it claims the unique identifier of the disk was changed, which would be strange.
The system has an NVMe boot disk (/dev/nvme0) and 2x3TB data disks (/dev/sda & /dev/sdb).
There is also a /dev/dm-1.
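GRUB's claim that the disk identifier changed can be cross-checked: the GPT partition GUID is embedded in the efibootmgr entries above (the third field inside HD(...)) and can be compared with what the partition table actually reports. A minimal sketch; the sed pattern and the blkid invocation are my own additions, and /dev/nvme0n1p2 as the ESP is taken from the lsblk output further down:

```shell
#!/bin/sh
# Pull the GPT partition GUID out of an efibootmgr entry such as
#   HD(2,GPT,6e9e8f34-188f-417c-b5e6-1edb97cf97ce,0x800,0x200000)/File(...)
# The GUID is the third comma-separated field inside HD(...).
guid_from_entry() {
    printf '%s\n' "$1" | sed -n 's/.*HD([^,]*,GPT,\([^,]*\),.*/\1/p'
}

# On a live node you would compare that against the ESP's actual
# partition GUID, e.g. (run as root; device name from the lsblk output):
#   blkid -p -o value -s PART_ENTRY_UUID /dev/nvme0n1p2
```

If those two GUIDs match, the firmware entry still points at the right partition regardless of what the installer warned about.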

LVM sits on top of the NVME

root@DC1-pve:~# pvdisplay -v
--- Physical volume ---
PV Name /dev/nvme0n1p3
VG Name pve
PV Size 237.47 GiB / not usable <1.32 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 60793
Free PE 4097
Allocated PE 56696
PV UUID RojqgD-1zb5-z15i-Z5qb-Iq5b-bijT-2ZAQjU

"lsblk -l" shows me

pve-swap 252:0 0 8G 0 lvm [SWAP]
pve-root 252:1 0 69.4G 0 lvm /
pve-data_tmeta 252:2 0 1.4G 0 lvm
pve-data_tdata 252:3 0 141.2G 0 lvm
pve-data 252:4 0 141.2G 0 lvm
nvme0n1 259:0 0 238.5G 0 disk
nvme0n1p1 259:1 0 1007K 0 part
nvme0n1p2 259:2 0 1G 0 part /boot/efi
nvme0n1p3 259:3 0 237.5G 0 part

...and then the lengthy output of "vgdisplay -v":


root@DC1-pve:~# vgdisplay -v
--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 237.47 GiB
PE Size 4.00 MiB
Total PE 60793
Alloc PE / Size 56696 / <221.47 GiB
Free PE / Size 4097 / 16.00 GiB
VG UUID smbLfi-hPG5-pMCB-IWoe-RcVd-jEag-iZbsZe

--- Logical volume ---
LV Name data
VG Name pve
LV UUID jHgckm-NEmd-weqw-2Me9-I9o0-PNps-2DuQuT
LV Write Access read/write
LV Creation host, time proxmox, 2024-12-16 15:01:58 +0100
LV Pool metadata data_tmeta
LV Pool data data_tdata
LV Status available
# open 0
LV Size <141.23 GiB
Allocated pool data 0.00%
Allocated metadata 1.13%
Current LE 36154
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:4

--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID vJe0kg-eAgZ-86t2-9XuH-vARA-CW9b-3Ix01K
LV Write Access read/write
LV Creation host, time proxmox, 2024-12-16 15:01:56 +0100
LV Status available
# open 2
LV Size 8.00 GiB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0

--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID mRVUCG-C3rZ-Kk5e-l6A2-9nh3-WdRv-FOQCC2
LV Write Access read/write
LV Creation host, time proxmox, 2024-12-16 15:01:56 +0100
LV Status available
# open 1
LV Size <69.37 GiB
Current LE 17758
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:1

--- Physical volumes ---
PV Name /dev/nvme0n1p3
PV UUID RojqgD-1zb5-z15i-Z5qb-Iq5b-bijT-2ZAQjU
PV Status allocatable
Total PE / Free PE 60793 / 4097
 

I've compared the /boot/grub/grub.cfg files on both servers and they are identical (I performed a "diff"; no output).
The contents of /boot are also identical, except that grub.cfg was touched during my update, hence the updated timestamp.

root@DC1-pve:/boot/grub# ls -la
total 2408
drwxr-xr-x 6 root root 4096 Mar 8 09:18 .
drwxr-xr-x 5 root root 4096 Mar 16 23:00 ..
drwxr-xr-x 2 root root 4096 Dec 16 15:03 fonts
-rw------- 1 root root 13307 Mar 8 09:18 grub.cfg
-rw-r--r-- 1 root root 1024 Dec 16 15:03 grubenv
drwxr-xr-x 2 root root 20480 Mar 8 09:18 i386-pc
drwxr-xr-x 2 root root 4096 Mar 8 09:18 locale
-rw-r--r-- 1 root root 2392304 Mar 8 09:18 unicode.pf2
drwxr-xr-x 2 root root 12288 Dec 16 15:03 x86_64-efi
root@DC1-pve:/boot/grub#


root@DC2-pve:/boot/grub# ls -la
total 2412
drwxr-xr-x 6 root root 4096 Mar 16 23:21 .
drwxr-xr-x 5 root root 4096 Mar 16 23:21 ..
drwxr-xr-x 2 root root 4096 Dec 16 15:03 fonts
-rw------- 1 root root 13307 Mar 16 23:21 grub.cfg
-rw-r--r-- 1 root root 1024 Dec 16 15:03 grubenv
drwxr-xr-x 2 root root 24576 Mar 16 23:21 i386-pc
drwxr-xr-x 2 root root 4096 Mar 16 23:21 locale
-rw-r--r-- 1 root root 2392304 Mar 16 23:21 unicode.pf2
drwxr-xr-x 2 root root 12288 Dec 16 15:03 x86_64-efi
root@DC2-pve:/boot/grub#
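To make that comparison repeatable, the recursive diff can be wrapped in a small check. A sketch, assuming one host's /boot tree has been staged locally first (the ssh/tar copy line and the staging path are assumptions, not something from the thread):

```shell
#!/bin/sh
# Sketch: compare two boot trees recursively. diff -rq reports files whose
# contents differ or that exist on only one side; timestamps are ignored,
# which matches the situation above (only mtimes changed by the upgrade).
# To stage the remote side, something like:
#   mkdir -p /tmp/dc2-boot
#   ssh dc2-pve tar -C /boot -cf - . | tar -C /tmp/dc2-boot -xf -
trees_identical() {
    diff -rq "$1" "$2" >/dev/null 2>&1
}

# Usage:
#   trees_identical /boot /tmp/dc2-boot && echo "boot trees match"
```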
 
Did you ever clone the disks of these systems from one to the other?

In any case, you have the wrong GRUB packages installed; please try the following:

Code:
apt update
echo 'grub-efi-amd64 grub2/force_efi_extra_removable boolean true' | debconf-set-selections -v -u
apt install grub-efi-amd64
efibootmgr -v

and post the output here.
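Once that reinstall has run, the ESP contents can be sanity-checked as well, not only the NVRAM entries: with grub2/force_efi_extra_removable set, grub-install should place GRUB both under its vendor directory and at the removable fallback path. A minimal sketch, assuming the ESP is mounted at /boot/efi (as in the lsblk output above) and the vendor directory is EFI/proxmox (as in the Boot0000 entry):

```shell
#!/bin/sh
# Check that a mounted ESP contains both the vendor GRUB image and the
# removable fallback image the firmware can fall back to.
esp_ok() {
    esp="$1"
    [ -f "$esp/EFI/proxmox/grubx64.efi" ] && [ -f "$esp/EFI/BOOT/BOOTX64.EFI" ]
}

# Usage on a node:
#   esp_ok /boot/efi && echo "ESP looks good"
```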