System unbootable - grub error: disk lvmid not found

Miquel Gual Torner

Help me.
My Proxmox server does not boot.
I have only been able to access it by chroot from SystemRescueCD.
* GRUB error:
Code:
error: disk 'lvmid/XyYOqp-wfj3-zsZC-s1d4-...' not found.
Entering rescue mode...

* Error installing GRUB:
Code:
grub-probe --device /dev/pve/root
grub-probe: error: disk 'lvmid/XyYOqp-wfj3-zsZC-s1d4-jLw6-2XfG-kk1hmE/lbOS2h-dvdX-U18v-zubX-v5Cn-vjUQ-gzTvgi' not found.
grub-install /dev/sda
Installing for x86_64-efi platform.
File descriptor 4 (/dev/sda2) leaked on vgs invocation. Parent PID 5764: grub-install.real
File descriptor 4 (/dev/sda2) leaked on vgs invocation. Parent PID 5764: grub-install.real
grub-install.real: error: disk 'lvmid/XyYOqp-wfj3-zsZC-s1d4-jLw6-2XfG-kk1hmE/lbOS2h-dvdX-U18v-zubX-v5Cn-vjUQ-gzTvgi' not found.
* lvs
Code:
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotz-- <758,25g 19,05 1,16
root pve -wi-ao---- 96,00g
snap_vm-300-disk-0_moodle388 pve Vri---tz-k 10,00g data vm-300-disk-0
snap_vm-605-disk-0_snap20211010 pve Vri---tz-k 4,00g data vm-605-disk-0
snap_vm-605-disk-0_snap20211017 pve Vri---tz-k 4,00g data vm-605-disk-0
snap_vm-605-disk-0_snap20211024 pve Vri---tz-k 4,00g data vm-605-disk-0
snap_vm-605-disk-1_snap20211010 pve Vri---tz-k 2,00g data vm-605-disk-1
snap_vm-605-disk-1_snap20211017 pve Vri---tz-k 2,00g data vm-605-disk-1
snap_vm-605-disk-1_snap20211024 pve Vri---tz-k 2,00g data vm-605-disk-1
swap pve -wi-a----- 8,00g
vm-102-disk-0 pve Vwi-a-tz-- 20,00g data 20,49
vm-109-disk-0 pve Vwi-a-tz-- 30,00g data 25,53
vm-111-disk-0 pve Vwi-a-tz-- 20,00g data 22,21
vm-114-disk-0 pve Vwi-a-tz-- 50,00g data 17,91
vm-121-disk-0 pve Vwi-a-tz-- 20,00g data 73,71
vm-152-disk-0 pve Vwi-a-tz-- 20,00g data 95,54
vm-188-disk-0 pve Vwi-a-tz-- 10,00g data 10,87
vm-200-disk-0 pve Vwi-a-tz-- 20,00g data 13,56
vm-204-disk-0 pve Vwi-a-tz-- 50,00g data 5,73
vm-205-disk-0 pve Vwi-a-tz-- 8,00g data 91,16
vm-300-disk-0 pve Vwi-a-tz-- 10,00g data 40,58
vm-400-disk-0 pve Vwi-a-tz-- 8,00g data 24,00
vm-605-disk-0 pve Vwi-a-tz-- 4,00g data 98,36
vm-605-disk-1 pve Vwi-a-tz-- 2,00g data 98,16
vm-605-state-snap20211010 pve Vwi-a-tz-- <4,49g data 45,26
vm-605-state-snap20211017 pve Vwi-a-tz-- <4,49g data 45,31
vm-605-state-snap20211024 pve Vwi-a-tz-- <4,49g data 45,44
vm-906-disk-0 pve Vwi-a-tz-- 1,00g data 55,12
vm-9106-disk-0 pve Vwi-a-tz-- 10,00g data 31,93
vm-912-disk-0 pve Vwi-a-tz-- 30,00g data 98,00
vm-956-disk-0 pve Vwi-a-tz-- 1,91g data 27,34
vm-964-disk-0 pve Vwi-a-tz-- 19,07g data 69,15
* uname -a
Code:
Linux sysrescue 5.10.70-1-lts #1 SMP Thu, 30 Sep 2021 09:43:10 +0000 x86_64 GNU/Linux
* pveversion
Code:
pve-manager/7.0-13/7aa7e488 (running kernel: 5.10.70-1-lts)
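
For reference, the lvmid path in the GRUB error is GRUB's <VG-UUID>/<LV-UUID> pair, so it can be compared against what LVM itself reports; a quick check, assuming the VG is named pve as in the lvs output above:

Code:
vgs --noheadings -o vg_uuid pve        # should match the first UUID in the error
lvs --noheadings -o lv_uuid pve/root   # should match the second UUID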
 
After several tests I no longer get the error.
Now I need to get the normal GRUB menu to start again.

Boot from the SystemRescueCD USB:

Code:
e2fsck -ff /dev/pve/root        # force a full filesystem check first
resize2fs /dev/pve/root 95G     # shrink the filesystem to 95G (LV size 96G - 1G)
lvreduce -L -1G /dev/pve/root   # then shrink the LV itself by 1G

Code:
modprobe efivarfs                      # needed so EFI variables can be mounted later
mount /dev/mapper/pve-root /mnt
mount -t proc proc /mnt/proc
mount -t sysfs sys /mnt/sys
mount -o bind /dev /mnt/dev
mount -t devpts pts /mnt/dev/pts
chroot /mnt
# run inside the chroot:
mount /dev/sda2 /boot/efi
mount -t efivarfs efivarfs /sys/firmware/efi/efivars

update-grub
grub-install /dev/sda
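
If grub-install now completes without the lvmid error, it's worth cleaning up the chroot before rebooting; a minimal sketch, assuming the mounts used above:

Code:
exit              # leave the chroot
umount -R /mnt    # recursively unmount everything mounted under /mnt
reboot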
 
error: disk 'lvmid/<PVID>/<LVID>' not found.
I encountered exactly this error today. I shut down the system properly, and then it didn't boot. It seems to be related to a bug in the GRUB LVM parser, as @fabian suggested here. In the linked Debian issue, some users found a workaround similar to what you did with the lvreduce command.
The immediate workaround, if this problem occurs, is to make another modification to the LVM configuration and then run update-grub again.
So I booted the Proxmox ISO on the affected system, ran lvextend and update-grub, and that fixed the issue. After it booted successfully, I ran lvreduce to revert the change I had made. Thanks for your suggestion.
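
Condensed, that workaround looks roughly like this from a rescue shell (a sketch based on the posts above; it assumes the default pve VG layout and at least 1G of free space in the VG):

Code:
lvextend -L +1G /dev/pve/root   # any LVM metadata change will do; this one is easy to revert
# chroot into the installed system as shown earlier, then:
update-grub
# after a successful reboot, revert the change (safe here because the
# filesystem was never grown into the extra 1G):
lvreduce -L -1G /dev/pve/root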
 
I had the same problem, and this forum page helped me a lot with getting my lab environment running again.
Really thankful to everyone who shared their fixes!

I do still wonder: is this a bug in GRUB, in LVM, or in Proxmox?
I'd like to prevent this from happening in the future. Is there an update that fixes this bug?
 
Doing some more research, I found this bug in GRUB2, which is likely the cause:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=987008

What is not clear to me is whether a package update is already available.
For such a big impact I'd expect a fix sooner, given that it was reported in April 2021 by NetApp.

My Proxmox shows that it has grub2/stable 2.04-20 amd64, and no update is available for this package.

In the bug report, NetApp engineers mentioned they rolled GRUB back to an earlier version. Wouldn't it be a good idea to do the same in Proxmox, to ensure fewer users are affected by this bug?
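
To see which GRUB packages are installed and whether any update is pending, something like this should work on a stock PVE/Debian install:

Code:
dpkg -l | grep grub                                 # installed GRUB packages and versions
apt update && apt list --upgradable | grep grub     # any pending GRUB updates?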
 
I got the same problem yesterday and spent the whole day on it. I booted a Xubuntu live USB and mounted the second partition (512 MB), which was no longer showing the type boot/efi (it had disappeared). I then ran update-grub, which showed the error that the UUID was not found.
Since I have other partitions, I can't increase the size. Or would it still work? Anyhow, I did reduce it, but perhaps there was a hiccup or something is missing?
Code:
resize2fs /dev/pve/root 26G     # failed: said on-line resizing required - not supported
lvreduce -L -1G /dev/pve/root   # this one worked; the reduction was confirmed by lvdisplay

After update-grub and a reboot, I then saw our beloved Proxmox GRUB menu. But it hangs completely at:
Loading Linux 5.11.22-5-pve ...
Loading initial ramdisk ...
No Ethernet NIC, no nothing.
Then I realized I had missed the step grub-install /dev/sda. But the problem is that when I reboot into the live Xubuntu, the mount command fails:
Code:
mount /dev/pve/root /media/RESCUE/   # error: wrong fs type, bad option, bad superblock, missing codepage...

I ran mke2fs -c /dev/nvme0n1p2 and it found 2 superblocks. Then I ran e2fsck -b 98304 /dev/nvme0n1p2 and rebooted. Then I checked with:
Code:
sudo e2fsck -n /dev/nvme0n1p2        # reports clean
mount /dev/pve/root /media/RESCUE/   # gives the same error again: bad option, bad superblock...

Now when I reboot, I don't see Proxmox as a boot disk any more. Did I just kill the system?

Also, on the bug side: Proxmox is installed on an NVMe drive, so could this be related, in that GRUB needs a delay? E.g. with:
vi /etc/default/grub   # then add "rootdelay=1" to the GRUB_CMDLINE_LINUX_DEFAULT line
And I also see the latency of the NVMe itself; in the live USB I get 100000 as default_ps_max_latency_us. Could setting a value help prevent the LVM boot failure too, e.g. nvme_core.default_ps_max_latency_us=200?
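
For reference, kernel parameters like those are set in /etc/default/grub and then applied with update-grub; a sketch (the rootdelay and latency values are this post's suggestions, not verified fixes):

Code:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=1 nvme_core.default_ps_max_latency_us=200"

# then regenerate the GRUB configuration:
update-grub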
 
I was affected on 2021/10/11 and found no info online at the time. I had no time to investigate further and was away from the server, so with the help of a colleague I had to reinstall Proxmox and lose all the VMs (I wonder why Proxmox doesn't have something like a "reinstall the OS only" option; the Proxmox rescue mode did not help), then restore from a (luckily very recent) backup. It took several hours.
This is really a scary problem. It needs to be investigated by Proxmox, and at the least GRUB should be patched for Proxmox.
 
This issue still appears to be present (I just 'enjoyed' fixing this.)

Bash:
root@proxmox:~# pveversion
pve-manager/7.2-7/d0dd0e85 (running kernel: 5.15.39-3-pve)
root@proxmox:~# grub-install --version
grub-install.real (GRUB) 2.04-20
 
Same issue; PVE & GRUB versions below.
Code:
❯ pveversion
pve-manager/7.2-7/d0dd0e85 (running kernel: 5.15.39-4-pve)

❯ grub-install --version
grub-install.real (GRUB) 2.04-20

I just booted an Ubuntu Live ISO and ran the following command to extend the root LV by 1 GB.
Code:
❯ sudo lvextend -L +1G /dev/pve/root

Rebooted the system and Proxmox booted just fine. Then I booted back into the Ubuntu Live ISO and ran the following command to shrink the LV back.
Code:
❯ sudo lvreduce -L -1G /dev/pve/root
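
One thing to watch for in a live environment: the pve volume group may not be activated automatically, in which case /dev/pve/root won't exist yet. A quick activation sketch:

Code:
sudo vgscan             # scan for LVM volume groups
sudo vgchange -ay pve   # activate the pve VG so /dev/pve/* appears
sudo lvs pve            # confirm the LVs are visible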
 
I have the same issue.
I ran lvextend -L +1G /dev/pve/root and then lvreduce -L -1G /dev/pve/root, and everything was fixed. What was it?

Code:
# pveversion -v
proxmox-ve: 7.3-1 (running kernel: 5.15.74-1-pve)
pve-manager: 7.3-3 (running version: 7.3-3/c3928077)
pve-kernel-5.15: 7.2-14
pve-kernel-helper: 7.2-14
pve-kernel-5.13: 7.1-9
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.15.64-1-pve: 5.15.64-1
pve-kernel-5.15.35-1-pve: 5.15.35-3
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-1
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.3.1-1
proxmox-backup-file-restore: 2.3.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.0-1
proxmox-widget-toolkit: 3.5.3
pve-cluster: 7.3-1
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.5-6
pve-ha-manager: 3.5.1
pve-i18n: 2.8-1
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-1
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.6-pve1
 
Same problem here, after the latest update.
Bash:
root@proxmox:~# pveversion
pve-manager/7.3-6/723bb6ec (running kernel: 5.15.85-1-pve)
root@proxmox:~# grub-install --version
grub-install.real (GRUB) 2.06-3~deb11u5

Problem temporarily solved (boot from USB: Super Grub2 Disk -> Enable LVM/RAID support -> boot Proxmox), but only until the next reboot.
 
@Dark Angel [gEb] @openaspace

I just had the same problem and resolved it using hints from above. Use Super Grub2 Disk to boot:

https://www.reddit.com/r/Proxmox/comments/vy33ho/stuck_at_grub_rescue_after_an_update_and_reboot/

- Boot from a Super Grub2 Disk rescue ISO -> https://www.supergrubdisk.org/
- An orange-colored menu appears
- Select "Enable GRUB2's RAID and LVM support" [ENTER]
- Press ESC when finished to go back to the main menu
- Select "Boot manually" [ENTER]
- Select "Operating Systems" [ENTER]
- Scroll down to the second-to-last option, "Linux /boot/vmlinuz-5.xx.xx-x-pve (lvm/pve-root)" [ENTER]
- Your system will then boot into Proxmox VE as before
- Inside the Proxmox VE shell, run update-grub to repair GRUB2

HOWEVER, the last step (update-grub) threw errors after booting, about disks not found. I ran these two:

Code:
lvextend -L +1G /dev/pve/root

followed by

Code:
lvreduce -L -1G /dev/pve/root

and it fixed me up when I rebooted. Agreed: unhappy camper! After booting, I ran update-grub again and got no errors this time. Perhaps someone else can explain in more detail what the LVM commands actually achieve? These are delicate operations, so I was hesitant at first when I accepted the lvreduce warning, but I had backups, so I wasn't too worried.
 
Thanks very much, this works perfectly!
 
This solution doesn't work for me, because:

Code:
#lvextend -L +1G /dev/pve/root
  Insufficient free space: 256 extents needed, but only 0 available
It won't work unless you have at least 1G of free space in the VG. You can try the opposite by first running
lvreduce -L -1G /dev/pve/root
then
lvextend -L +1G /dev/pve/root
but that can cause data loss if your drive is almost full or your data is scattered all over the drive.
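
Before choosing either order, it may be worth checking how much free space the VG actually has; a quick check:

Code:
vgs -o vg_name,vg_size,vg_free pve   # total and free space in the pve VG
lvs --units g pve/root               # current size of the root LV

If vg_free shows at least 1G, the lvextend-first order is the safer one.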
 
