Proxmox update from 7.2-4: GRUB update failure

* what's the output of `proxmox-boot-tool status` ?
Code:
root@pve:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
85F8-7F41 is configured with: uefi (versions: 5.3.18-3-pve, 5.4.101-1-pve, 5.4.78-2-pve), grub (versions: 5.13.19-2-pve, 5.13.19-6-pve, 5.15.35-1-pve, 5.15.35-2-pve, 5.15.35-3-pve, 5.15.60-2-pve)
85FC-24FA is configured with: uefi (versions: 5.3.18-3-pve, 5.4.101-1-pve, 5.4.78-2-pve), grub (versions: 5.13.19-2-pve, 5.13.19-6-pve, 5.15.35-1-pve, 5.15.35-2-pve)
 
Ok - I'd probably really go with the reformat and reinit in that case - just follow the docs I linked
As always - make sure you have a working backup before!
 
I would suggest the following - mount each ESP listed in /etc/kernel/proxmox-boot-uuids somewhere and see where the disk space is used:
* `mount /dev/disk/by-uuid/<UUID> /mnt/tmp` (the <UUID> is listed in /etc/kernel/proxmox-boot-uuids)
* `du -smc /mnt/tmp/*`

then clean it up - or re-set up the ESPs using `proxmox-boot-tool format` and `proxmox-boot-tool init` (a rough sketch of both options follows below)
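A rough sketch of both options (just a sketch - /dev/sdX2 below is a placeholder for the real ESP partition, and `proxmox-boot-tool format` wipes it, so double-check against the official docs before running):
Bash:
# Option 1: inspect each ESP listed in /etc/kernel/proxmox-boot-uuids
mkdir -p /mnt/tmp
while read -r uuid; do
    mount "/dev/disk/by-uuid/${uuid}" /mnt/tmp
    echo "=== ESP ${uuid} ==="
    du -smc /mnt/tmp/*            # shows which kernel/initrd files use the space
    umount /mnt/tmp
done < /etc/kernel/proxmox-boot-uuids

# Option 2: reformat and re-initialize an ESP (everything on that partition is lost)
proxmox-boot-tool format /dev/sdX2 --force   # /dev/sdX2 = placeholder for the ESP
proxmox-boot-tool init /dev/sdX2
proxmox-boot-tool refresh                    # copy the currently needed kernels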
Thank you very much! That helped:
Code:
# try to purge a kernel that is no longer needed (fails - the ESP is full):
root@pve:~# apt purge pve-kernel-5.13.19-6-pve
..
Copying and configuring kernels on /dev/disk/by-uuid/85F8-7F41
        Copying kernel 5.13.19-2-pve
        Copying kernel 5.15.35-2-pve
        Copying kernel 5.15.60-2-pve
cp: error writing '/var/tmp/espmounts/85F8-7F41/initrd.img-5.15.60-2-pve': No space left on device



root@pve:~# cat /etc/kernel/proxmox-boot-uuids
85F8-7F41
85FC-24FA

root@pve:~# blkid | grep UUID=
..
/dev/sda2: UUID="85F8-7F41" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="7acbabe3-53a8-4e45-9f3e-fbb1dc6ce83d"
..
/dev/sdb2: UUID="85FC-24FA" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="f44d6e69-1b5d-46c7-afbc-03d971bae2ba"
..


root@pve:~# mkdir /mnt/tmpa
root@pve:~# mkdir /mnt/tmpb

root@pve:~# mount /dev/disk/by-uuid/85F8-7F41 /mnt/tmpa
root@pve:~# mount /dev/disk/by-uuid/85FC-24FA /mnt/tmpb
root@pve:~# df -h /dev/sda2
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       511M  511M     0 100% /mnt/tmpa
root@pve:~# df -h /dev/sdb2
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb2       511M  446M   66M  88% /mnt/tmpb
root@pve:~# ls -l /mnt/tmpa
total 347040
drwxr-xr-x 5 root root     4096 Oct 19  2019 EFI
drwxr-xr-x 5 root root     4096 Jun 10 08:36 grub
-rwxr-xr-x 1 root root 59130868 Dec 29  2021 initrd.img-5.13.19-2-pve
-rwxr-xr-x 1 root root 59105420 May 24 12:35 initrd.img-5.13.19-6-pve
-rwxr-xr-x 1 root root 62786318 May 24 12:36 initrd.img-5.15.35-1-pve
-rwxr-xr-x 1 root root 62813569 Jun 10 08:36 initrd.img-5.15.35-2-pve
-rwxr-xr-x 1 root root 58789888 Oct 14 00:12 initrd.img-5.15.35-3-pve
drwxr-xr-x 3 root root     4096 Oct 19  2019 loader
-rwxr-xr-x 1 root root     2256 May 27  2020 NvVars
-rwxr-xr-x 1 root root 10047424 Nov 29  2021 vmlinuz-5.13.19-2-pve
-rwxr-xr-x 1 root root 10060768 Mar 29  2022 vmlinuz-5.13.19-6-pve
-rwxr-xr-x 1 root root 10866496 May 11 07:57 vmlinuz-5.15.35-1-pve
-rwxr-xr-x 1 root root 10865376 Jun  8 15:02 vmlinuz-5.15.35-2-pve
-rwxr-xr-x 1 root root 10867488 Jun 17 13:42 vmlinuz-5.15.35-3-pve
-rwxr-xr-x 1 root root        0 Oct 14 12:44 vmlinuz-5.15.60-2-pve
root@pve:~# ls -l /mnt/tmpb
total 279008
drwxr-xr-x 5 root root     4096 Oct 19  2019 EFI
drwxr-xr-x 5 root root     4096 Jun 10 08:37 grub
-rwxr-xr-x 1 root root 59130868 Dec 29  2021 initrd.img-5.13.19-2-pve
-rwxr-xr-x 1 root root 59105420 May 24 12:35 initrd.img-5.13.19-6-pve
-rwxr-xr-x 1 root root 62786318 May 24 12:36 initrd.img-5.15.35-1-pve
-rwxr-xr-x 1 root root 62813569 Jun 10 08:36 initrd.img-5.15.35-2-pve
drwxr-xr-x 3 root root     4096 Oct 19  2019 loader
-rwxr-xr-x 1 root root 10047424 Nov 29  2021 vmlinuz-5.13.19-2-pve
-rwxr-xr-x 1 root root 10060768 Mar 29  2022 vmlinuz-5.13.19-6-pve
-rwxr-xr-x 1 root root 10866496 May 11 07:57 vmlinuz-5.15.35-1-pve
-rwxr-xr-x 1 root root 10865376 Jun  8 15:02 vmlinuz-5.15.35-2-pve


# Delete all kernels that are no longer needed
root@pve:~# rm /mnt/tmpa/vmlinuz-5.15.35-3-pve
root@pve:~# rm /mnt/tmpa/initrd.img-5.15.35-3-pve
root@pve:~# rm /mnt/tmpa/initrd.img-5.15.35-1-pve
root@pve:~# rm /mnt/tmpa/vmlinuz-5.15.35-1-pve
root@pve:~# rm /mnt/tmpb/vmlinuz-5.15.35-1-pve
root@pve:~# rm /mnt/tmpb/initrd.img-5.15.35-1-pve
root@pve:~# rm /mnt/tmpb/vmlinuz-5.13.19-6-pve
root@pve:~# rm /mnt/tmpb/initrd.img-5.13.19-6-pve

# Now 137 MB (sda2) / 202 MB (sdb2) are available
root@pve:~# df -h /dev/sda2
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       511M  375M  137M  74% /mnt/tmpa
root@pve:~# df -h /dev/sdb2
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb2       511M  310M  202M  61% /mnt/tmpb

# unmount
root@pve:~# umount /dev/disk/by-uuid/85FC-24FA
root@pve:~# umount /dev/disk/by-uuid/85F8-7F41

# purging the kernel now succeeds:
root@pve:~# apt purge pve-kernel-5.13.19-6-pve
..
The following packages will be REMOVED:
  pve-kernel-5.13* pve-kernel-5.13.19-6-pve* pve-kernel-5.15.35-3-pve
0 upgraded, 0 newly installed, 3 to remove and 0 not upgraded.
4 not fully installed or removed.
After this operation, 707 MB disk space will be freed.
Do you want to continue? [Y/n] Y
... some 100 lines ...
Copying and configuring kernels on /dev/disk/by-uuid/85FC-24FA
        Copying kernel 5.13.19-2-pve
        Copying kernel 5.15.35-2-pve
        Copying kernel 5.15.60-2-pve
        Copying kernel 5.4.157-1-pve
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.15.60-2-pve
Found initrd image: /boot/initrd.img-5.15.60-2-pve
Found linux image: /boot/vmlinuz-5.15.35-2-pve
Found initrd image: /boot/initrd.img-5.15.35-2-pve
Found linux image: /boot/vmlinuz-5.13.19-2-pve
Found initrd image: /boot/initrd.img-5.13.19-2-pve
Found linux image: /boot/vmlinuz-5.4.157-1-pve
Found initrd image: /boot/initrd.img-5.4.157-1-pve
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
done

# check:
root@pve:~# proxmox-boot-tool kernel list
Manually selected kernels:
None.
Automatically selected kernels:
5.13.19-2-pve
5.15.35-2-pve
5.15.60-2-pve
5.4.157-1-pve

And after reboot:
Code:
root@pve:~# proxmox-boot-tool kernel list
Manually selected kernels:
None.

Automatically selected kernels:
5.15.35-2-pve
5.15.60-2-pve
5.4.157-1-pve

root@pve:~# uname -r
5.15.60-2-pve
 
The lesson: don't skip too many reboots after kernel updates, or old kernels pile up on the ESPs.

But: how do the VPS providers do it when they apply Proxmox updates? I had to shut down all my containers and VMs before the reboot.
 
Ok - I'd probably really go with the reformat and reinit in that case - just follow the docs I linked
As always - make sure you have a working backup before!
I have backups of the containers and VMs thanks to the integrated backup function of Proxmox.
How can I back up the host itself? Copy some folders from /etc/... to restore after a new install?
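A minimal sketch of that "copy some folders" idea (the path list is only a guess at the usual candidates, not an official backup method - adjust it to what you actually changed):
Bash:
# config-only backup of the host; run as root and copy the archive off the machine
BACKUP=/root/pve-host-config-$(date +%F).tar.gz
tar czf "$BACKUP" \
    /etc/pve \
    /etc/network/interfaces \
    /etc/hosts /etc/hostname /etc/resolv.conf \
    /etc/fstab \
    /etc/apt/sources.list /etc/apt/sources.list.d
# note: /etc/pve is a FUSE view of the cluster config, so this only captures its
# current contents - it is not a full replacement for a reinstall-and-restore plan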
 
But: How do the VPS providers do it when they apply Proxmox updates?
Most often they run clusters, where you can (live-)migrate guests to another host while upgrading, then repeat with the next host.
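For reference, a sketch of what such a migration looks like on the CLI (the VM/CT IDs and the node name are made-up examples):
Bash:
# live-migrate a running VM to another cluster node
qm migrate 100 pve2 --online
# containers cannot be live-migrated; --restart stops, moves and starts them again
pct migrate 101 pve2 --restart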
 
Hello guys, I'm a bit lost and hope someone can point me in the right direction. Inside Proxmox I clicked on refresh and update, and after all updates were installed I clicked on reboot. The server (a dedicated server) didn't come back up, so I looked into it via remote access, and the console was showing: `error symbol grub_disk_native_sectors not found - grub rescue`.
I can't tell you the last working Proxmox version, but it was a pretty recent one; the last successful Proxmox update was, I think, about 2 months ago.
I'm currently in rescue mode on the server, mounted all partitions and executed:
Bash:
lsblk -o NAME,MOUNTPOINT
NAME      MOUNTPOINT
sda      
├─sda1    [SWAP]
├─sda2  
│ └─md127 /mnt/md127
└─sda3  
  └─md126 /mnt/md126
sdb      
├─sdb1    [SWAP]
├─sdb2  
│ └─md127 /mnt/md127
└─sdb3  
  └─md126 /mnt/md126
loop0     /lib/live/mount/rootfs/img.current.squashfs

When looking into md127, it shows the following. So I guess I somehow have to fix/re-install GRUB in here?
Bash:
ls -al /mnt/md127/
total 421760
drwxr-xr-x 5 root root     4096 oct.  27 09:41 .
drwxr-xr-x 1 root root      100 oct.  27 18:32 ..
-rw-r--r-- 1 root root   236126 juin   9 23:37 config-5.10.0-15-amd64
-rw-r--r-- 1 root root   236134 juil. 24 00:32 config-5.10.0-16-amd64
-rw-r--r-- 1 root root   236275 oct.  21 22:24 config-5.10.0-19-amd64
-rw-r--r-- 1 root root   256718 mars  29  2022 config-5.13.19-6-pve
-rw-r--r-- 1 root root   260979 juin  22 17:22 config-5.15.39-1-pve
-rw-r--r-- 1 root root   261176 juil. 27 13:45 config-5.15.39-3-pve
-rw-r--r-- 1 root root   261134 oct.  13 10:30 config-5.15.64-1-pve
drwxr-xr-x 5 root root     4096 oct.  27 09:41 grub
-rw-r--r-- 1 root root 45018653 juin  15 22:07 initrd.img-5.10.0-15-amd64
-rw-r--r-- 1 root root 45027857 août   8 13:00 initrd.img-5.10.0-16-amd64
-rw-r--r-- 1 root root 42975122 oct.  27 09:41 initrd.img-5.10.0-19-amd64
-rw-r--r-- 1 root root 49693174 avril 28 21:37 initrd.img-5.13.19-6-pve
-rw-r--r-- 1 root root 53246261 juil.  4 23:32 initrd.img-5.15.39-1-pve
-rw-r--r-- 1 root root 54319951 août   8 13:01 initrd.img-5.15.39-3-pve
-rw-r--r-- 1 root root 51488528 oct.  27 09:41 initrd.img-5.15.64-1-pve
drwx------ 2 root root    16384 avril 14  2022 lost+found
drwxr-xr-x 2 root root     4096 oct.  27 09:40 pve
-rw-r--r-- 1 root root       83 juin   9 23:37 System.map-5.10.0-15-amd64
-rw-r--r-- 1 root root       83 juil. 24 00:32 System.map-5.10.0-16-amd64
-rw-r--r-- 1 root root       83 oct.  21 22:24 System.map-5.10.0-19-amd64
-rw-r--r-- 1 root root  5825184 mars  29  2022 System.map-5.13.19-6-pve
-rw-r--r-- 1 root root  6081285 juin  22 17:22 System.map-5.15.39-1-pve
-rw-r--r-- 1 root root  6083580 juil. 27 13:45 System.map-5.15.39-3-pve
-rw-r--r-- 1 root root  6081238 oct.  13 10:30 System.map-5.15.64-1-pve
-rw-r--r-- 1 root root  6851008 juin   9 23:37 vmlinuz-5.10.0-15-amd64
-rw-r--r-- 1 root root  6846656 juil. 24 00:32 vmlinuz-5.10.0-16-amd64
-rw-r--r-- 1 root root  6963648 oct.  21 22:24 vmlinuz-5.10.0-19-amd64
-rw-r--r-- 1 root root 10060768 mars  29  2022 vmlinuz-5.13.19-6-pve
-rw-r--r-- 1 root root 10867456 juin  22 17:22 vmlinuz-5.15.39-1-pve
-rw-r--r-- 1 root root 11290368 juil. 27 13:45 vmlinuz-5.15.39-3-pve
-rw-r--r-- 1 root root 11306432 oct.  13 10:30 vmlinuz-5.15.64-1-pve

Can someone advise me how to solve that `error symbol grub_disk_native_sectors not found - grub rescue` error? Thanks a lot
 
Can someone advise me how to solve that `error symbol grub_disk_native_sectors not found - grub rescue` error? Thanks a lot
On a hunch - since I did not run into this issue (and my usage of md-raid is kinda rusty) - try mounting all partitions of your Proxmox install where they should be relative to each other - e.g., as a guess, mount /dev/md127 below /mnt/md126/boot
* bind-mount all relevant pseudo-filesystems below /mnt/md126 (dev, sys, proc, ... - see https://pve.proxmox.com/wiki/Recover_From_Grub_Failure )
* chroot into /mnt/md126
* run `grub-install /dev/sda` and `grub-install /dev/sdb`
that should hopefully fix the issue (see also the quick md-array check sketched below)
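Before the grub-install calls it might also be worth confirming which physical disks actually back the md arrays (a quick check, run from the rescue system):
Bash:
# list the md arrays and their member disks/partitions
cat /proc/mdstat
mdadm --detail /dev/md126
mdadm --detail /dev/md127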
 
Thanks for the quick reply, just to make sure and double-check with you: I don't need the vgscan and the vgchange -ay commands?

So executing only the following:
Logged in as root

Bash:
# mount the /boot array inside the mounted root
mount /dev/md127 /mnt/md126/boot
# bind-mount the pseudo-filesystems needed inside the chroot
mount -t proc proc /mnt/md126/proc
mount -t sysfs sys /mnt/md126/sys
mount -o bind /dev /mnt/md126/dev
mount -o bind /run /mnt/md126/run
# switch into the installed system
chroot /mnt/md126
# regenerate the GRUB config and reinstall GRUB to both disks
update-grub
grub-install /dev/sda
grub-install /dev/sdb

That's all? Do you see any mistakes?
 
I don't need the vgscan and the vgchange -ay commands?
this is something I cannot tell you - since I don't know how the system was set up...
* did the installer you used set up an LVM somewhere (that is what you would need the vgchange commands for)?

* on the other hand, if /dev/md126 contains something that looks like a Linux/Proxmox system, I think it's fair to assume that you don't have / on LVM

so - I'd say - it looks ok from what I can see
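A quick way to double-check for LVM from the rescue system (a sketch - if these print nothing, no LVM is active):
Bash:
# physical volumes / volume groups / logical volumes, if any exist
pvs
vgs
lvs
# lsblk marks LVM volumes with TYPE "lvm"
lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT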
 
Hm, I used the provider's default installer: it installed Debian, and then I installed Proxmox as described here: https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_11_Bullseye
So I haven't set up anything manually on the partitions.

This is what md126 looks like: `ls -la /mnt/md126`
Bash:
total 80
drwxr-xr-x 18 root root  4096 oct.  27 09:40 .
drwxr-xr-x  1 root root   100 oct.  27 18:32 ..
-rw-r--r--  1 root root     0 avril 16  2022 .autorelabel
lrwxrwxrwx  1 root root     7 avril 14  2022 bin -> usr/bin
drwxr-xr-x  2 root root  4096 avril 14  2022 boot
drwxr-xr-x  4 root root  4096 avril 14  2022 dev
drwxr-xr-x 98 root root  4096 oct.  27 09:41 etc
drwxr-xr-x  3 root root  4096 avril 14  2022 home
lrwxrwxrwx  1 root root    31 oct.  27 09:40 initrd.img -> boot/initrd.img-5.10.0-19-amd64
lrwxrwxrwx  1 root root    31 oct.  27 09:40 initrd.img.old -> boot/initrd.img-5.10.0-16-amd64
lrwxrwxrwx  1 root root     7 avril 14  2022 lib -> usr/lib
lrwxrwxrwx  1 root root     9 avril 14  2022 lib32 -> usr/lib32
lrwxrwxrwx  1 root root     9 avril 14  2022 lib64 -> usr/lib64
lrwxrwxrwx  1 root root    10 avril 14  2022 libx32 -> usr/libx32
drwx------  2 root root 16384 avril 14  2022 lost+found
drwxr-xr-x  2 root root  4096 avril 14  2022 media
drwxr-xr-x  4 root root  4096 mai   14 23:03 mnt
drwxr-xr-x  2 root root  4096 avril 14  2022 opt
drwxr-xr-x  2 root root  4096 mars  19  2022 proc
drwx------  5 root root  4096 oct.  24 21:45 root
drwxr-xr-x  2 root root  4096 avril 14  2022 run
lrwxrwxrwx  1 root root     8 avril 14  2022 sbin -> usr/sbin
drwxr-xr-x  2 root root  4096 avril 14  2022 srv
drwxr-xr-x  2 root root  4096 mars  19  2022 sys
drwxrwxrwt  7 root root  4096 oct.  27 09:42 tmp
drwxr-xr-x 14 root root  4096 avril 14  2022 usr
drwxr-xr-x 11 root root  4096 avril 14  2022 var
lrwxrwxrwx  1 root root    28 oct.  27 09:40 vmlinuz -> boot/vmlinuz-5.10.0-19-amd64
lrwxrwxrwx  1 root root    28 oct.  27 09:40 vmlinuz.old -> boot/vmlinuz-5.10.0-16-amd64


I executed some commands to check for LVM, but it doesn't look like it is used, at least that's how the current live rescue system behaves:
Bash:
cat /mnt/md126/etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# systemd generates mount units based on this file, see systemd.mount(5).
# Please run 'systemctl daemon-reload' after making changes here.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/md1 during installation
UUID=a7... /               ext4    errors=remount-ro 0       1
# /boot was on /dev/md0 during installation
UUID=30... /boot           ext4    defaults        0       2
# swap was on /dev/sda1 during installation
UUID=b7... none            swap    sw              0       0
# swap was on /dev/sdb1 during installation
UUID=12... none            swap    sw              0       0

According to https://askubuntu.com/questions/202613/how-do-i-check-whether-i-am-using-lvm
If the line starts with UUID=xyz, this means it's a physical partition.

vgdisplay -v
Bash:
Using volume group(s) on command line.
No volume groups found.

And executing `lvdisplay` doesn't show anything.

I would conclude it is not used, so I don't need vgscan and vgchange and can execute the block of commands I posted above in Thread #49?
 
I'm getting an error on update-grub:
/usr/sbin/grub-probe: error: failed to get canonical path of `/dev/md126'.

Unfortunately, I had a copy-paste error, so I bind-mounted /dev onto /run and then executed the other commands, without unmounting anything. Could this cause that error?
Bash:
mount -o bind /dev /mnt/md126/run/   # <- the copy-paste mistake: /dev bind-mounted onto /run
mount -o bind /dev /mnt/md126/dev
mount -o bind /run /mnt/md126/run
 
Unfortunately, I had a copy-paste error, so I bind-mounted /dev onto /run and then executed the other commands, without unmounting anything. Could this cause that error?
yes - could be
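One way to get back to a clean state before retrying (a sketch - it assumes you have exited the chroot again):
Bash:
# see what is currently mounted below /mnt/md126
findmnt -R /mnt/md126
# /mnt/md126/run has two mounts stacked on it (the accidental /dev bind plus the
# intended /run bind), so unmount it twice, then redo the correct bind
umount /mnt/md126/run
umount /mnt/md126/run
mount -o bind /run /mnt/md126/run
# re-enter the chroot and retry
chroot /mnt/md126
update-grub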
 
