Upgrade PVE 6.x to 7.x: GRUB issues

YAGA

Renowned Member
Feb 15, 2016
Hi Proxmox team and Proxmox users,

As suggested by @fabian, here is a new thread for this issue.

Congratulations to the team for the Proxmox 7 release.

I've upgraded a 4-node cluster, each node with an NVMe SSD (nvme0n1) for the filesystem and two disks for Ceph (sda, sdb), from the latest 6.x to 7.x.

The nodes are identical: same hardware, same Proxmox release, same configuration.

The first three node upgrades went fine, but on the last one (the primary node) I got several errors from the grub upgrade during the 6-to-7 dist-upgrade.

Fortunately the cluster works perfectly, but on the primary node I still have an issue with GRUB.

It seems to be a software-related issue and not a hardware issue.

update-grub2 output from a non-working node (root filesystem on nvme0n1)

Code:
# update-grub2

Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.11.22-1-pve
Found initrd image: /boot/initrd.img-5.11.22-1-pve
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-XXXX...XXXX-wqrmZe/YhO3eq-XXXX...XXXX-ZpVbQa' not found.
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-XXXX...XXXX-wqrmZe/YhO3eq-XXXX...XXXX-ZpVbQa' not found.
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-XXXX...XXXX-wqrmZe/YhO3eq-XXXX...XXXX-ZpVbQa' not found.
Found linux image: /boot/vmlinuz-5.4.124-1-pve
Found initrd image: /boot/initrd.img-5.4.124-1-pve
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-XXXX...XXXX-wqrmZe/YhO3eq-XXXX...XXXX-ZpVbQa' not found.
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-XXXX...XXXX-wqrmZe/YhO3eq-XXXX...XXXX-ZpVbQa' not found.
Found linux image: /boot/vmlinuz-5.3.18-3-pve
Found initrd image: /boot/initrd.img-5.3.18-3-pve
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-XXXX...XXXX-wqrmZe/YhO3eq-XXXX...XXXX-ZpVbQa' not found.
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-XXXX...XXXX-wqrmZe/YhO3eq-XXXX...XXXX-ZpVbQa' not found.
Found linux image: /boot/vmlinuz-5.3.10-1-pve
Found initrd image: /boot/initrd.img-5.3.10-1-pve
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-XXXX...XXXX-wqrmZe/YhO3eq-XXXX...XXXX-ZpVbQa' not found.
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-XXXX...XXXX-wqrmZe/YhO3eq-XXXX...XXXX-ZpVbQa' not found.
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-XXXX...XXXX-wqrmZe/YhO3eq-XXXX...XXXX-ZpVbQa' not found.
Found memtest86+ image: /boot/memtest86+.bin
Found memtest86+ multiboot image: /boot/memtest86+_multiboot.bin
Adding boot menu entry for EFI firmware configuration
done

XXXX...XXXX just shortens the UUID strings in the messages.
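As a side note, the path in that error has the form lvmid/<VG UUID>/<LV UUID>. If it helps anyone reading along, the UUIDs can be cross-checked against the LVM metadata with the standard reporting options (a minimal sketch):

Code:
# List the VG UUID of the pve volume group and the UUIDs of its LVs,
# so they can be compared with the lvmid/<VG UUID>/<LV UUID> string
# reported by grub-probe.
vgs --noheadings -o vg_name,vg_uuid pve
lvs --noheadings -o lv_name,lv_uuid pve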

uname -a from a non-working node

Code:
# uname -a

Linux Cluster-STB-1 5.11.22-1-pve #1 SMP PVE 5.11.22-2 (Fri, 02 Jul 2021 16:22:45 +0200) x86_64 GNU/Linux

Message from @fabian:
- details about your node's storage setup (especially regarding the / partition/filesystem - which filesystem, any hardware or software raid, ...)
The filesystem is on an NVMe SSD without RAID (LVM).
- contents of /etc/pve/storage.cfg
- output of 'pvs', 'vgs', 'lvs' and 'lsblk' from a working and non-working node

Information from a working node

Code:
# cat /etc/pve/storage.cfg
dir: local
        disable
        path /var/lib/vz
        content backup,vztmpl,iso
        maxfiles 3
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

rbd: Ceph-STB
        content images,rootdir
        krbd 0
        pool Ceph-STB

cephfs: Cephfs-STB
        path /mnt/pve/Cephfs-STB
        content backup,iso,vztmpl
        prune-backups keep-last=7

# pvs
  PV             VG                                        Fmt  Attr PSize  PFree
  /dev/nvme0n1p3 pve                                       lvm2 a--  <1.82t 16.37g
  /dev/sda       ceph-4f8eba3b-2842-41be-99c5-cc0f4c08e0c1 lvm2 a--  <3.64t     0
  /dev/sdb       ceph-27eea1dd-ce95-48a8-9df6-69e853171b0a lvm2 a--  <3.64t     0

# vgs
  VG                                        #PV #LV #SN Attr   VSize  VFree
  ceph-27eea1dd-ce95-48a8-9df6-69e853171b0a   1   1   0 wz--n- <3.64t     0
  ceph-4f8eba3b-2842-41be-99c5-cc0f4c08e0c1   1   1   0 wz--n- <3.64t     0
  pve                                         1  10   0 wz--n- <1.82t 16.37g

# lsblk
NAME                                                                                                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                                     8:0    0  3.6T  0 disk
└─ceph--4f8eba3b--2842--41be--99c5--cc0f4c08e0c1-osd--block--6c773935--1191--4248--a5b2--a3ec683b81a5 253:14   0  3.6T  0 lvm
sdb                                                                                                     8:16   0  3.6T  0 disk
└─ceph--27eea1dd--ce95--48a8--9df6--69e853171b0a-osd--block--afc5544c--d799--447e--815f--abdda5e9b171 253:13   0  3.6T  0 lvm
sr0                                                                                                    11:0    1 1024M  0 rom
nvme0n1                                                                                               259:0    0  1.8T  0 disk
├─nvme0n1p1                                                                                           259:1    0 1007K  0 part
├─nvme0n1p2                                                                                           259:2    0  512M  0 part /boot/efi
└─nvme0n1p3                                                                                           259:3    0  1.8T  0 part
  ├─pve-swap                                                                                          253:0    0    8G  0 lvm  [SWAP]
  ├─pve-root                                                                                          253:1    0   96G  0 lvm  /
  ├─pve-data_tmeta                                                                                    253:2    0 15.8G  0 lvm
  │ └─pve-data-tpool                                                                                  253:4    0  1.7T  0 lvm
  │   ├─pve-data                                                                                      253:5    0  1.7T  1 lvm
  │   ├─pve-vm--220--disk--0                                                                          253:6    0    8G  0 lvm
  │   ├─pve-vm--220--disk--1                                                                          253:7    0    8G  0 lvm
  │   ├─pve-vm--223--disk--0                                                                          253:8    0    4G  0 lvm
  │   ├─pve-vm--51111--disk--0                                                                        253:9    0   20G  0 lvm
  │   ├─pve-vm--51111--cloudinit                                                                      253:10   0    4M  0 lvm
  │   ├─pve-vm--49032--disk--0                                                                        253:11   0   20G  0 lvm
  │   └─pve-vm--49032--cloudinit                                                                      253:12   0    4M  0 lvm
  └─pve-data_tdata                                                                                    253:3    0  1.7T  0 lvm
    └─pve-data-tpool                                                                                  253:4    0  1.7T  0 lvm
      ├─pve-data                                                                                      253:5    0  1.7T  1 lvm
      ├─pve-vm--220--disk--0                                                                          253:6    0    8G  0 lvm
      ├─pve-vm--220--disk--1                                                                          253:7    0    8G  0 lvm
      ├─pve-vm--223--disk--0                                                                          253:8    0    4G  0 lvm
      ├─pve-vm--51111--disk--0                                                                        253:9    0   20G  0 lvm
      ├─pve-vm--51111--cloudinit                                                                      253:10   0    4M  0 lvm
      ├─pve-vm--49032--disk--0                                                                        253:11   0   20G  0 lvm
      └─pve-vm--49032--cloudinit                                                                      253:12   0    4M  0 lvm

Information from a non-working node

Code:
# cat /etc/pve/storage.cfg
dir: local
        disable
        path /var/lib/vz
        content backup,vztmpl,iso
        maxfiles 3
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

rbd: Ceph-STB
        content images,rootdir
        krbd 0
        pool Ceph-STB

cephfs: Cephfs-STB
        path /mnt/pve/Cephfs-STB
        content backup,iso,vztmpl
        prune-backups keep-last=7

# pvs
  PV             VG                                        Fmt  Attr PSize  PFree
  /dev/nvme0n1p3 pve                                       lvm2 a--  <1.82t 16.37g
  /dev/sda       ceph-407c2264-d843-4a6d-bbe1-578d60f8ca8b lvm2 a--  <3.64t     0
  /dev/sdb       ceph-015f0e4a-1978-4b85-a13f-f37c4d3ffabb lvm2 a--  <3.64t     0

# vgs
  VG                                        #PV #LV #SN Attr   VSize  VFree
  ceph-015f0e4a-1978-4b85-a13f-f37c4d3ffabb   1   1   0 wz--n- <3.64t     0
  ceph-407c2264-d843-4a6d-bbe1-578d60f8ca8b   1   1   0 wz--n- <3.64t     0
  pve                                         1  29   0 wz--n- <1.82t 16.37g

# lsblk
NAME                                                                                                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                                     8:0    0  3.6T  0 disk
└─ceph--407c2264--d843--4a6d--bbe1--578d60f8ca8b-osd--block--b702c7b8--3b1e--40a0--972f--d4583e3eb243 253:33   0  3.6T  0 lvm
sdb                                                                                                     8:16   0  3.6T  0 disk
└─ceph--015f0e4a--1978--4b85--a13f--f37c4d3ffabb-osd--block--f52bce18--3afb--4f11--b380--80ddcdd0b3ef 253:32   0  3.6T  0 lvm
nvme0n1                                                                                               259:0    0  1.8T  0 disk
├─nvme0n1p1                                                                                           259:1    0 1007K  0 part
├─nvme0n1p2                                                                                           259:2    0  512M  0 part /boot/efi
└─nvme0n1p3                                                                                           259:3    0  1.8T  0 part
  ├─pve-swap                                                                                          253:0    0    8G  0 lvm  [SWAP]
  ├─pve-root                                                                                          253:1    0   96G  0 lvm  /
  ├─pve-data_tmeta                                                                                    253:2    0 15.8G  0 lvm
  │ └─pve-data-tpool                                                                                  253:4    0  1.7T  0 lvm
  │   ├─pve-data                                                                                      253:5    0  1.7T  1 lvm
  │   ├─pve-vm--51102--disk--0                                                                        253:6    0   20G  0 lvm
  │   ├─pve-vm--51102--cloudinit                                                                      253:7    0    4M  0 lvm
  │   ├─pve-vm--48003--disk--0                                                                        253:8    0   40G  0 lvm
  │   ├─pve-vm--48004--disk--0                                                                        253:9    0   20G  0 lvm
  │   ├─pve-vm--48002--disk--0                                                                        253:10   0   10G  0 lvm
  │   ├─pve-vm--49002--disk--0                                                                        253:11   0   10G  0 lvm
  │   ├─pve-vm--50002--disk--0                                                                        253:12   0   10G  0 lvm
  │   ├─pve-vm--51002--disk--0                                                                        253:13   0   10G  0 lvm
  │   ├─pve-vm--52002--disk--0                                                                        253:14   0   10G  0 lvm
  │   ├─pve-vm--53002--disk--0                                                                        253:15   0   10G  0 lvm
  │   ├─pve-vm--54002--disk--0                                                                        253:16   0   10G  0 lvm
  │   ├─pve-vm--55002--disk--0                                                                        253:17   0   10G  0 lvm
  │   ├─pve-vm--48005--disk--0                                                                        253:18   0    4G  0 lvm
  │   ├─pve-vm--48005--cloudinit                                                                      253:19   0    4M  0 lvm
  │   ├─pve-vm--53010--disk--0                                                                        253:20   0   10G  0 lvm
......
......
  └─pve-data_tdata                                                                                    253:3    0  1.7T  0 lvm
    └─pve-data-tpool                                                                                  253:4    0  1.7T  0 lvm
      ├─pve-data                                                                                      253:5    0  1.7T  1 lvm
      ├─pve-vm--51102--disk--0                                                                        253:6    0   20G  0 lvm
      ├─pve-vm--51102--cloudinit                                                                      253:7    0    4M  0 lvm
      ├─pve-vm--48003--disk--0                                                                        253:8    0   40G  0 lvm
      ├─pve-vm--48004--disk--0                                                                        253:9    0   20G  0 lvm
......
......

Do you know how to fix this GRUB issue?

Any advice is welcome.

Kind regards,
YAGA
 
Could you also post the output of
  • lvmconfig --typeconfig full devices/global_filter
  • cat /etc/default/grub /etc/default/grub.d/*
  • dpkg --list os-prober
once on a working and once on the node with the error?
 

Hello Fabian,

Here is the info for a non-working node

Code:
# lvmconfig --typeconfig full devices/global_filter
global_filter="r|/dev/zd.*|"


# cat /etc/default/grub /etc/default/grub.d/*
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
#   info -f grub -n 'Simple configuration'

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX=""

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"
# Work around a bug in the obsolete init-select package which broke
# grub-mkconfig when init-select was removed but not purged.  This file does
# nothing and will be removed in a later release.
#
# See:
#   https://bugs.debian.org/858528
#   https://bugs.debian.org/863801
GRUB_DISTRIBUTOR="Proxmox VE"
GRUB_DISABLE_OS_PROBER=true


# dpkg --list os-prober
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version      Architecture Description
+++-==============-============-============-===============================================
ii  os-prober      1.78         amd64        utility to detect other OSes on a set of drives

Here is the info for a working node

Code:
# lvmconfig --typeconfig full devices/global_filter
global_filter="r|/dev/zd.*|"


# cat /etc/default/grub /etc/default/grub.d/*
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
#   info -f grub -n 'Simple configuration'

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX=""

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"
# Work around a bug in the obsolete init-select package which broke
# grub-mkconfig when init-select was removed but not purged.  This file does
# nothing and will be removed in a later release.
#
# See:
#   https://bugs.debian.org/858528
#   https://bugs.debian.org/863801
GRUB_DISTRIBUTOR="Proxmox VE"
GRUB_DISABLE_OS_PROBER=true


# dpkg --list os-prober
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version      Architecture Description
+++-==============-============-============-===============================================
ii  os-prober      1.78         amd64        utility to detect other OSes on a set of drives

The output looks very similar on both nodes.

Cheers,
 
Perhaps a clue for the non-working node:

The VG UUID is OK.

The PV UUID looks different.

@fabian, please let me know what you think.

YAGA

Code:
:~# pvs -a
  PV             VG                                        Fmt  Attr PSize  PFree
  /dev/nvme0n1                                                  ---      0      0
  /dev/nvme0n1p2                                                ---      0      0
  /dev/nvme0n1p3 pve                                       lvm2 a--  <1.82t 16.37g
  /dev/sda       ceph-407c2264-d843-4a6d-bbe1-578d60f8ca8b lvm2 a--  <3.64t     0
  /dev/sdb       ceph-015f0e4a-1978-4b85-a13f-f37c4d3ffabb lvm2 a--  <3.64t     0


:~# pvdisplay /dev/nvme0n1p3
  --- Physical volume ---
  PV Name               /dev/nvme0n1p3
  VG Name               pve
  PV Size               <1.82 TiB / not usable <4.07 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              476803
  Free PE               4191
  Allocated PE          472612
  PV UUID               QuqlcA-nRu0-6T6e-DGPA-LpuE-xE0g-T73dLg


:~# vgdisplay pve
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5453
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                29
  Open LV               28
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <1.82 TiB
  PE Size               4.00 MiB
  Total PE              476803
  Alloc PE / Size       472612 / 1.80 TiB
  Free  PE / Size       4191 / 16.37 GiB
  VG UUID               rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe


:~# update-grub2
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.11.22-1-pve
Found initrd image: /boot/initrd.img-5.11.22-1-pve
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
Found linux image: /boot/vmlinuz-5.4.124-1-pve
Found initrd image: /boot/initrd.img-5.4.124-1-pve
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
Found linux image: /boot/vmlinuz-5.3.18-3-pve
Found initrd image: /boot/initrd.img-5.3.18-3-pve
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
Found linux image: /boot/vmlinuz-5.3.10-1-pve
Found initrd image: /boot/initrd.img-5.3.10-1-pve
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
Found memtest86+ image: /boot/memtest86+.bin
Found memtest86+ multiboot image: /boot/memtest86+_multiboot.bin
Adding boot menu entry for EFI firmware configuration
done
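If it is useful, here is a minimal sketch for pulling the PV and VG UUIDs into one report on each node for comparison (note that PV UUIDs are expected to differ between nodes, since each PV gets its own UUID when it is created):

Code:
# Compare PV UUIDs and VG UUIDs across nodes; run on a working and
# on the non-working node and compare the results side by side.
pvs --noheadings -o pv_name,pv_uuid,vg_name,vg_uuid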
 
The 'lvs' output would also be interesting.
 

lvs output from the non-working node

Code:
~# lvs
  LV                                             VG                                        Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  osd-block-f52bce18-3afb-4f11-b380-80ddcdd0b3ef ceph-015f0e4a-1978-4b85-a13f-f37c4d3ffabb -wi-ao---- <3.64t
  osd-block-b702c7b8-3b1e-40a0-972f-d4583e3eb243 ceph-407c2264-d843-4a6d-bbe1-578d60f8ca8b -wi-ao---- <3.64t
  data                                           pve                                       twi-aotz--  1.67t             4.86   0.40
  root                                           pve                                       -wi-ao---- 96.00g
  swap                                           pve                                       -wi-ao----  8.00g
  vm-48002-disk-0                                pve                                       Vwi-aotz-- 10.00g data        20.27
  vm-48003-disk-0                                pve                                       Vwi-aotz-- 40.00g data        13.98
  vm-48004-disk-0                                pve                                       Vwi-aotz-- 20.00g data        15.01
  vm-48005-cloudinit                             pve                                       Vwi-aotz--  4.00m data        9.38
  vm-48005-disk-0                                pve                                       Vwi-aotz--  4.00g data        37.08
  vm-48050-cloudinit                             pve                                       Vwi-aotz--  4.00m data        9.38
  vm-48050-disk-0                                pve                                       Vwi-aotz-- 20.00g data        99.86
  vm-49002-disk-0                                pve                                       Vwi-aotz-- 10.00g data        19.60
  vm-49031-cloudinit                             pve                                       Vwi-aotz--  4.00m data        9.38
  vm-49031-disk-0                                pve                                       Vwi-aotz-- 20.00g data        11.92
  vm-50002-disk-0                                pve                                       Vwi-aotz-- 10.00g data        19.73
  vm-51002-disk-0                                pve                                       Vwi-aotz-- 10.00g data        19.53
  vm-51102-cloudinit                             pve                                       Vwi-aotz--  4.00m data        9.38
  vm-51102-disk-0                                pve                                       Vwi-aotz-- 20.00g data        72.15
  vm-51110-cloudinit                             pve                                       Vwi-aotz--  4.00m data        9.38
  vm-51110-disk-0                                pve                                       Vwi-aotz-- 20.00g data        43.09
  vm-51151-cloudinit                             pve                                       Vwi-aotz--  4.00m data        9.38
  vm-51151-disk-0                                pve                                       Vwi-aotz-- 10.00g data        69.87
  vm-52002-disk-0                                pve                                       Vwi-aotz-- 10.00g data        19.66
  vm-53002-disk-0                                pve                                       Vwi-aotz-- 10.00g data        19.63
  vm-53010-cloudinit                             pve                                       Vwi-aotz--  4.00m data        9.38
  vm-53010-disk-0                                pve                                       Vwi-aotz-- 10.00g data        26.15
  vm-53011-cloudinit                             pve                                       Vwi-aotz--  4.00m data        9.38
  vm-53011-disk-0                                pve                                       Vwi-aotz-- 10.00g data        22.16
  vm-54002-disk-0                                pve                                       Vwi-aotz-- 10.00g data        19.65
  vm-55002-disk-0                                pve                                       Vwi-aotz-- 10.00g data        19.68
 
And 'lvdisplay pve/root'?
 

Code:
:~# lvdisplay pve/root
  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa
  LV Write Access        read/write
  LV Creation host, time proxmox, 2020-03-20 22:08:21 +0100
  LV Status              available
  # open                 1
  LV Size                96.00 GiB
  Current LE             24576
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

Code:
LV UUID  from   lvdisplay pve/root            YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa
LV UUID  from   update-grub2                  YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa
 
So that looks okay. What do 'grub-probe --target=device /' and 'grub-probe --target=device /boot' report?
 

Same results on a non-working node and a working node

Code:
:~# grub-probe --target=device /
/dev/mapper/pve-root
:~# grub-probe --target=device /boot
/dev/mapper/pve-root
 
I probably found the origin of this issue.

On the non-working node, the /boot/grub/grub.cfg content is really different: a lot of information is missing.

Extract from the non-working node:

Code:
.....
.....
function load_video {
  if [ x$feature_all_video_module = xy ]; then
    insmod all_video
  else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
  fi
}

if loadfont unicode ; then
  set gfxmode=auto
  load_video
  insmod gfxterm
  set locale_dir=$prefix/locale
  set lang=en_US
  insmod gettext
fi
terminal_output gfxterm
if [ "${recordfail}" = 1 ] ; then
  set timeout=30
else
  if [ x$feature_timeout_style = xy ] ; then
    set timeout_style=menu
    set timeout=5
  # Fallback normal timeout code in case the timeout_style feature is
  # unavailable.
  else
    set timeout=5
  fi
fi
### END /etc/grub.d/00_header ###
.....
.....

Extract from a working node:

Code:
.....
.....
function load_video {
  if [ x$feature_all_video_module = xy ]; then
    insmod all_video
  else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
  fi
}

if [ x$feature_default_font_path = xy ] ; then
   font=unicode
else
insmod part_gpt
insmod lvm
insmod ext2
set root='lvmid/BcQ5Ng-Piyl-x21E-j20Y-b3ad-1JD1-aiT1KH/H3W4qf-W9J4-vp8d-Fdqb-w2Br-J2im-UfRQVF'
if [ x$feature_platform_search_hint = xy ]; then
  search --no-floppy --fs-uuid --set=root --hint='lvmid/BcQ5Ng-Piyl-x21E-j20Y-b3ad-1JD1-aiT1KH/H3W4qf-W9J4-vp8d-Fdqb-w2Br-J2im-UfRQVF' f03c5a35-bfb8-4636-873c-4a1fa51b61a3
else
  search --no-floppy --fs-uuid --set=root f03c5a35-bfb8-4636-873c-4a1fa51b61a3
fi
    font="/usr/share/grub/unicode.pf2"
fi

if loadfont $font ; then
  set gfxmode=auto
  load_video
  insmod gfxterm
  set locale_dir=$prefix/locale
  set lang=en_US
  insmod gettext
fi
.....
.....

The /boot/grub/grub.cfg file size on the non-working node is roughly 50% of the one on a working node.

The issue during the 6.x-to-7.x upgrade seems to be related to the /boot/grub/grub.cfg generation.
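A quick way to quantify the difference would be to compare the generated files directly between nodes; a minimal sketch below ("Cluster-STB-2" is only a placeholder hostname for a working node):

Code:
# Compare grub.cfg size and content between the non-working node
# (local) and a working node (over SSH within the cluster).
wc -c /boot/grub/grub.cfg
ssh Cluster-STB-2 wc -c /boot/grub/grub.cfg
ssh Cluster-STB-2 cat /boot/grub/grub.cfg | diff - /boot/grub/grub.cfg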

@fabian, please let me know if I can run some tests to better understand this issue.

Regards
 
Hello,

I had exactly the same problem after my upgrade.
However, the error disappeared after I deleted the only existing snapshot.

Regards
 

Hello,

Many thanks for your input. Unfortunately, I don't have any snapshots and I haven't found a solution yet.

Regards,
 
Does re-running update-grub2 reproduce the error? Do the following commands produce any errors:

  • grub-probe --target=fs_uuid /dev/mapper/pve-root
  • grub-probe --target=partuuid /dev/mapper/pve-root
  • grub-probe --target=fs /dev/mapper/pve-root
 


Yes, update-grub2 still produces the errors, and grub-probe does too.

Code:
:~# update-grub2
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.11.22-1-pve
Found initrd image: /boot/initrd.img-5.11.22-1-pve
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
Found linux image: /boot/vmlinuz-5.4.124-1-pve
Found initrd image: /boot/initrd.img-5.4.124-1-pve
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
Found linux image: /boot/vmlinuz-5.3.18-3-pve
Found initrd image: /boot/initrd.img-5.3.18-3-pve
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
Found linux image: /boot/vmlinuz-5.3.10-1-pve
Found initrd image: /boot/initrd.img-5.3.10-1-pve
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
/usr/sbin/grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
Found memtest86+ image: /boot/memtest86+.bin
Found memtest86+ multiboot image: /boot/memtest86+_multiboot.bin
Adding boot menu entry for EFI firmware configuration
done

:~# grub-probe --target=fs_uuid /dev/mapper/pve-root
grub-probe: error: failed to get canonical path of `udev'.

:~# grub-probe --target=partuuid /dev/mapper/pve-root
grub-probe: error: failed to get canonical path of `udev'.

:~# grub-probe --target=fs /dev/mapper/pve-root
grub-probe: error: failed to get canonical path of `udev'.
 
Ah sorry, those commands were missing a bit:

  • grub-probe --target=fs_uuid --device /dev/mapper/pve-root
  • grub-probe --target=partuuid --device /dev/mapper/pve-root
  • grub-probe --target=fs --device /dev/mapper/pve-root
 

Code:
:~# grub-probe --target=fs_uuid --device /dev/mapper/pve-root
grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
:~# grub-probe --target=partuuid --device /dev/mapper/pve-root
grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
:~# grub-probe --target=fs --device /dev/mapper/pve-root
grub-probe: error: disk `lvmid/rT5mdC-gcon-MA5O-93Gy-HPCr-ddtz-wqrmZe/YhO3eq-ISze-xMHV-OhqW-HUfe-fSzb-ZpVbQa' not found.
 
Okay, we are getting closer. Can you try the same commands but add -v, and collect the output from all three on a working and a non-working node? The output might be rather long, so maybe attach it here and indicate which output is from which node and command. Thanks!
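For anyone collecting that output, a small sketch that runs each probe with -v and writes stdout and stderr to a per-command log file (run it on both node types and attach the resulting files):

Code:
# Capture verbose grub-probe output for each target into its own log,
# tagged with the hostname, so the logs from both nodes stay apart.
for t in fs_uuid partuuid fs; do
    grub-probe -v --target="$t" --device /dev/mapper/pve-root \
        > "grub-probe-${t}-$(hostname).log" 2>&1
done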
 
Okay, please re-run the --target=fs command on both with another -v added (so -vv).
 
