[SOLVED] Proxmox 7.3-4 Grub upgrade problem

hregis

Hello,
I have a cluster of 3 Proxmox servers at OVH. On one of the servers I ran an "apt upgrade" and got this message during the upgrade:

Code:
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
Adding boot menu entry for UEFI Firmware Settings ...
done

I didn't really know what to do, so I restarted the server and got this error message while it was booting:

Code:
grub_disk_native_sectors not found

The disks are two SSDs in software RAID with LVM partitioning.
Can you help me please? Thank you very much.

Below is the output of "parted -l" in OVH rescue mode:

Code:
root@rescue:~# parted -l
Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/vg-data: 921GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End    Size   File system  Flags
 1      0.00B  921GB  921GB  ext4


Model: Unknown (unknown)
Disk /dev/nvme0n1: 960GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name     Flags
 1      1049kB  537MB   536MB   fat16           primary  boot, esp
 2      537MB   22.0GB  21.5GB                  primary  raid
 3      22.0GB  39.2GB  17.2GB  linux-swap(v1)  primary
 4      39.2GB  960GB   921GB                   logical  raid
 5      960GB   960GB   2080kB                  logical


Error: /dev/md127: unrecognised disk label
Model: Linux Software RAID Array (md)
Disk /dev/md127: 921GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

Model: Linux Software RAID Array (md)
Disk /dev/md126: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system  Flags
 1      0.00B  21.5GB  21.5GB  ext4


Model: Unknown (unknown)
Disk /dev/nvme1n1: 960GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name     Flags
 1      1049kB  537MB   536MB   fat16           primary  boot, esp
 2      537MB   22.0GB  21.5GB                  primary  raid
 3      22.0GB  39.2GB  17.2GB  linux-swap(v1)  primary
 4      39.2GB  960GB   921GB                   logical  raid
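
For completeness, the RAID and LVM state can also be checked from rescue mode; a quick sketch (device names taken from the parted output above):

Code:
cat /proc/mdstat            # confirm the md arrays (md126/md127) are assembled
mdadm --detail /dev/md126   # details of the array holding the root filesystem
lsblk -f                    # overview of disks, partitions, RAID and LVM
vgs && lvs                  # LVM volume groups / logical volumes (the vg-data device shown above)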
 
Hello,
Thanks, but I get this message when I run "update-grub":

Code:
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
done
 
That's normal. os-prober is just a tool that detects other distributions and operating systems for you.

https://packages.debian.org/sid/utils/os-prober
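
If you actually want other operating systems to be picked up again, the warning points at the relevant setting; a minimal sketch (assuming the os-prober package is installed) is to re-enable it in /etc/default/grub and regenerate the config:

Code:
# in /etc/default/grub, re-enable os-prober
GRUB_DISABLE_OS_PROBER=false

# then regenerate /boot/grub/grub.cfg
update-grub

On a standalone Proxmox node there is normally no other OS to detect, so the warning can simply be ignored.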

What's important is that your Linux image and initrd were found:
Code:
Found linux image: /boot/vmlinuz-5.19.0-1-pve
Found initrd image: /boot/initrd.img-5.19.0-1-pve
 

No, this is the output after the last apt upgrade (Proxmox 7.3-4):

Code:
Found linux image: /boot/vmlinuz-5.15.83-1-pve
Found initrd image: /boot/initrd.img-5.15.83-1-pve
Found linux image: /boot/vmlinuz-5.15.64-1-pve
Found initrd image: /boot/initrd.img-5.15.64-1-pve
Found linux image: /boot/vmlinuz-5.15.39-3-pve
Found initrd image: /boot/initrd.img-5.15.39-3-pve
Found linux image: /boot/vmlinuz-5.4.195-1-pve
Found initrd image: /boot/initrd.img-5.4.195-1-pve
Found linux image: /boot/vmlinuz-4.19.0-18-cloud-amd64
Found initrd image: /boot/initrd.img-4.19.0-18-cloud-amd64
 
OK, I resolved my problem with this procedure. This is for an OVH server with 2 SSD disks in software RAID with LVM partitioning!

Code:
start your server in rescue mode (rescue64-pro)

# mount the root filesystem (on the md126 software RAID array)
mount /dev/md126 /mnt

# bind-mount the pseudo-filesystems GRUB needs inside the chroot
mount -o bind /proc /mnt/proc
mount -o bind /sys /mnt/sys
mount -o bind /dev /mnt/dev

# mount the EFI System Partition of the first NVMe disk
mount /dev/nvme0n1p1 /mnt/boot/efi

# reinstall GRUB and regenerate grub.cfg for the first disk
chroot /mnt /bin/bash
grub-install /dev/nvme0n1p1
update-grub
exit

# repeat with the EFI System Partition of the second NVMe disk,
# so the server stays bootable from either disk
umount /mnt/boot/efi
mount /dev/nvme1n1p1 /mnt/boot/efi

chroot /mnt /bin/bash
grub-install /dev/nvme1n1p1
update-grub
exit

disable rescue mode (boot from the local disk again) and restart the server
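
For what it's worth, a "symbol not found"-style error at boot usually means the GRUB EFI binary on the ESP is older than the freshly upgraded modules in /boot/grub, which is exactly what reinstalling GRUB to both ESPs fixes. A quick way to verify before rebooting (a sketch, assuming the Debian/Proxmox default EFI layout):

Code:
# with an ESP mounted on /mnt/boot/efi, it should contain a GRUB binary,
# typically under EFI/proxmox/ or EFI/debian/
ls -R /mnt/boot/efi/EFI/

# from inside the chroot, and only if the rescue system itself booted in UEFI
# mode: check that the firmware boot entries point at both NVMe disks
efibootmgr -v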

Thanks all for your help
 
Unfortunately this solution didn't work for me; the server won't start. Where can I see logs from rescue mode?
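
Regarding logs: if GRUB itself fails, nothing is written to disk, so the only place to see the error is the server's console (KVM/IPMI or the serial console in the OVH manager). If the system gets far enough to start Linux, the previous boot's logs live on the installed root filesystem and can be read from rescue mode, roughly like this (a sketch, assuming the same mount point as above):

Code:
mount /dev/md126 /mnt
less /mnt/var/log/syslog                     # classic syslog, if enabled
journalctl -D /mnt/var/log/journal -b -1     # systemd journal of the previous boot, if persistent journaling is enabled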
 
