Error: disk 'lvmid/***' not found, grub rescue.

anoo222

Member
Feb 21, 2023
Hello,

Yesterday I experienced a sudden power loss; my Proxmox node reboots automatically when the power returns.
When the power came back, the node rebooted, but I wasn't able to connect to it or even ping it, even though it was on.
I attached a screen to it and got this output:

Code:
Error: disk 'lvmid/***' not found.
grub rescue>

I did some research on this and booted Proxmox in debug mode from a USB stick.
I tried these steps:

Code:
vgscan
    found volume group "pve" using metadata type lvm2
vgchange -ay
    18 logical volume(s) in volume group "pve" now active
mkdir /media/RESCUE
mount /dev/pve/root /media/RESCUE
    EXT4-fs (dm-1): recovery complete
    EXT4-fs (dm-1): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
mount /dev/sdb2 /media/RESCUE/boot
    FAT-fs (sdb2): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
fsck /dev/sdb2
    There are differences between boot sector and its backup.
    This is mostly harmless. Differences: (offset:original/backup)
    1) Copy original to backup
    2) Copy backup to original (I chose this option; hope I didn't mess things up even more)
    3) No action
    ->Wrote changes
    /dev/sdb2: 9 files, 89/138811 clusters
mount /dev/sdb2 /media/RESCUE/boot
mount -t proc proc /media/RESCUE/proc/
mount -t sysfs sys /media/RESCUE/sys/
mount -o bind /dev /media/RESCUE/dev/
mount -o bind /run /media/RESCUE/run
chroot /media/RESCUE/
update-grub
    /sbin/grub-mkconfig: 278: cannot create /boot/grub/grub.cfg.new: Directory nonexistent

When I ls into /media/RESCUE/boot, there is indeed no grub directory.
There is an EFI directory.
When I ls into EFI, there are three more directories: DELL, proxmox, and one whose name I can't recall right now.
When I ls into proxmox, there is a grubx64.efi or something similar.

Note this is a home server on a Dell OptiPlex with a consumer SSD.
Can anybody please help me recover from this issue?

Thanks
 
Update: the problem seems to be solved.

I booted Proxmox debug mode from a USB and executed this command:

Code:
lvextend -L +1G /dev/pve/root

I rebooted the node without the USB, and magically my Proxmox node booted normally, without the grub rescue screen.

I undid the above command with

Code:
lvreduce -L -1G /dev/pve/root

rebooted, and Proxmox still starts normally and works fine.

I like to pretend I know what I'm doing, but I'm unsure what caused this issue and how the above commands were able to fix it.

Maybe someone could explain?

I made a manual backup of my node several weeks ago.
Since then, hours of work have gone in that I didn't back up.

At least the lesson is learned now; the next thing I will do is set up automatic backups every week or so...
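
Roughly what I have in mind is a weekly vzdump job, something like the sketch below. The storage name and mail address are placeholders for my own setup, not anything from this thread:

Code:
# Sketch of a weekly backup job, e.g. as /etc/cron.d/vzdump-weekly.
# "backupstore" and the mail address are placeholders; adjust to your storage/user.
# Every Sunday at 02:00, back up all guests in snapshot mode.
0 2 * * 0 root vzdump --all 1 --mode snapshot --storage backupstore --mailto admin@example.com --quiet 1

(The same thing can also be configured in the GUI under Datacenter > Backup.)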
 
mount /dev/sdb2 /media/RESCUE/boot

When I ls into /media/RESCUE/boot, there is indeed no grub directory.
There is an EFI directory.
That means sdb2 is not the grub /boot partition you wanted but the EFI boot partition. You should have looked at /media/RESCUE/etc/fstab first to see what's actually mounted where.
 
That means sdb2 is not the grub /boot partition you wanted but the EFI boot partition. You should have looked at /media/RESCUE/etc/fstab first to see what's actually mounted where.

Hi mow, thank you for your response.

I did look into fstab, but I did not really see where grub could be mounted.
This is the output of my fstab (apart from the mount points I added myself).

Code:
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=6456-FAE5 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

Code:
root@proxmox:~# blkid | grep 6456-FAE5
/dev/sdb2: UUID="6456-FAE5" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="bb837010-506b-4450-9b6d-c1ba66c26f7c"

As you said, sdb2 is indeed the EFI boot partition.

To be honest I do not have much experience with grub or EFI, so I would really appreciate it if you could tell me how I can find the grub partition, for future reference in case this issue occurs again.

Thank you
 
Then you don't have a separate grub partition. That's easy to tell, because there's already something in /boot without mounting anything ;)
Just mount sdb2 to /media/RESCUE/boot/efi and grub shouldn't complain.
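
For reference, the rescue sequence from earlier in this thread would then look roughly like this, with the ESP mounted at /boot/efi instead of /boot (same device names as above, so adjust them to your system):

Code:
# Sketch of the chroot rescue sequence with sdb2 as the EFI system partition:
vgscan
vgchange -ay
mkdir -p /media/RESCUE
mount /dev/pve/root /media/RESCUE
mount /dev/sdb2 /media/RESCUE/boot/efi
mount -t proc proc /media/RESCUE/proc
mount -t sysfs sys /media/RESCUE/sys
mount -o bind /dev /media/RESCUE/dev
mount -o bind /run /media/RESCUE/run
chroot /media/RESCUE
update-grub

On a running system, findmnt /boot/efi shows where the ESP lives; if findmnt /boot prints nothing, /boot is just a directory on the root LV and there is no separate boot partition.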
 
Hi,

I suffered from the same issue this morning. I shut down PVE around midnight and started it again in the morning (rtcwake). I didn't touch PVE the day(s) before. Fortunately the steps over at Recovering from grub "disk not found" error when booting from LVM resolved the issue for me, but I'd like to know what caused it.

I'm running the latest version of PVE and grub:
Code:
root@pve:~# pveversion
pve-manager/8.0.3/bbf3993334bfa916 (running kernel: 6.2.16-5-pve)
root@pve:~# grub-install --version
grub-install.real (GRUB) 2.06-13

Any ideas on how to prevent this from happening after future reboots?
 
Hi,

Unfortunately I can't tell you the cause of this, because I'm not that knowledgeable on the matter.

What I can tell you is that ever since this happened to my node, when I created this thread, it has never reoccurred. And my Proxmox host is up 24/7.

In my case it happened after a sudden power loss from the grid.

I could be wrong, but I vaguely remember reading somewhere that it could be caused by the Proxmox host running on a 'cheap' consumer SSD; at least in my case that's true.
 
Thanks for sharing your experience. I do think the risk of this resurfacing is higher when I reboot PVE every day (instead of keeping it up 24/7)…
 
That could indeed be the case, until we know what the actual cause of this is. Mine only reboots after a power loss or a kernel update.

Is your host running on a consumer SSD as well, by any chance?
 
Hi, one cause for `disk 'lvmid/...' not found` errors is a grub bug, and the workaround is to trigger an LVM metadata change as described in the wiki [1]. You can find some more context at [2].

However, the bug should actually be fixed in grub 2.06-13 which you are running, so you might actually be seeing a different issue. Did you recently upgrade from PVE 7 to PVE 8, or is this a fresh PVE 8 installation? Could you post the output of the following two commands?
Code:
vgs -S vg_name=pve -o vg_mda_size
vgscan -vvv 2>&1| grep "Found metadata"

[1] https://pve.proxmox.com/wiki/Recover_From_Grub_Failure#Recovering_from_grub_"disk_not_found"_error_when_booting_from_LVM
[2] https://forum.proxmox.com/threads/sudden-grub-error-on-update-grub.131433/#post-577463
 
This is an upgrade from PVE 7 to 8 indeed.

Sure, output below:
Code:
root@pve:~# vgs -S vg_name=pve -o vg_mda_size
  VMdaSize
   1020.00k
root@pve:~# vgscan -vvv 2>&1| grep "Found metadata"
  Found metadata summary on /dev/sda3 at 20480 size 12624 for VG pve
  Found metadata seqno 943 in mda1 on /dev/sda3
  Found metadata text at 20480 off 16384 size 12624 VG pve on /dev/sda3
 
Thanks! Could you double-check that grub actually shows 2.06-13 (mind the 13) when booting? If it indeed shows the correct version, you might be seeing an issue that is different from the grub bug linked at [1]. However, as the LVM metadata update apparently fixed the issue too, it might also be caused by buggy parsing of the metadata ring buffer in the case it wraps around, similarly to the bug linked at [1].

Unfortunately this is a bit hard to debug in hindsight. But if you are interested in investigating this further, we could check if we see anything suspicious at the end of the metadata ring buffer:
  1. Dump LVM metadata to /tmp/lvmdump by running: lvmdump -m -d /tmp/lvmdump
  2. For each file in /tmp/lvmdump/metadata, dump the last 4096 bytes (tail -c 4096 FILE) and attach the output here.
Note that the metadata contains some information about your system, e.g. the hostname and names of LVs -- feel free to censor information, but please try to retain as much as possible of its original structure.

[1] https://pve.proxmox.com/wiki/Recover_From_Grub_Failure#Recovering_from_grub_"disk_not_found"_error_when_booting_from_LVM
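
If it helps, the two steps above can be run as one small shell snippet (the output file name is just an example, not something prescribed here):

Code:
# Dump LVM metadata, then collect the last 4096 bytes of each metadata file.
lvmdump -m -d /tmp/lvmdump
for f in /tmp/lvmdump/metadata/*; do
    echo "== $f =="
    tail -c 4096 "$f"
done > /tmp/metadata-tails.txt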
 
I'm away from the machine, and I don't have any out-of-band access to pre-OS consoles/output. I'll share the grub version when I get back.

This is the output of tail -c 4096 /tmp/lvmdump/metadata/sda3:
Code:
thin_pool = "data"
transaction_id = 121
device_id = 44
}
}

vm-9100-cloudinit {
id = "6EfC4o-zps6-3PAy-4eh3-NVyG-xlU9-OwdPRS"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_time = 1653645878
creation_host = "pve"
segment_count = 1

segment1 {
start_extent = 0
extent_count = 1

type = "thin"
thin_pool = "data"
transaction_id = 122
device_id = 45
}
}

vm-207-cloudinit {
id = "G9g67q-PHgX-7s4M-WxQv-Fn77-3XDY-2E1l8Z"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_time = 1655619663
creation_host = "pve"
segment_count = 1

segment1 {
start_extent = 0
extent_count = 1

type = "thin"
thin_pool = "data"
transaction_id = 139
device_id = 48
}
}

vm-207-disk-0 {
id = "H6KLoq-XhCv-ipvn-nwKU-VhOo-XPSN-qi1L21"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_time = 1655619663
creation_host = "pve"
segment_count = 1

segment1 {
start_extent = 0
extent_count = 5120

type = "thin"
thin_pool = "data"
transaction_id = 140
device_id = 49
}
}

vm-205-cloudinit {
id = "rz1ge7-aWse-uZUr-maIY-e1gU-BVSn-tz3xR5"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_time = 1656230963
creation_host = "pve"
segment_count = 1

segment1 {
start_extent = 0
extent_count = 1

type = "thin"
thin_pool = "data"
transaction_id = 161
device_id = 50
}
}

vm-205-disk-0 {
id = "C3usvP-nX9a-3G2x-F44w-AsY7-JeoW-58QUj2"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_time = 1656230963
creation_host = "pve"
segment_count = 1

segment1 {
start_extent = 0
extent_count = 6400

type = "thin"
thin_pool = "data"
transaction_id = 162
device_id = 51
}
}

base-9101-disk-0 {
id = "44bRJF-fEGj-O0zz-GnrC-pQdM-MAKP-DaXowK"
status = ["READ", "VISIBLE"]
flags = ["ACTIVATION_SKIP"]
creation_time = 1658844677
creation_host = "pve"
segment_count = 1

segment1 {
start_extent = 0
extent_count = 563

type = "thin"
thin_pool = "data"
transaction_id = 320
device_id = 65
}
}

vm-9101-cloudinit {
id = "qPQvd3-wohy-2MsJ-5Gri-TmhX-XL4I-MWHNNS"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_time = 1658844693
creation_host = "pve"
segment_count = 1

segment1 {
start_extent = 0
extent_count = 1

type = "thin"
thin_pool = "data"
transaction_id = 321
device_id = 66
}
}

vm-203-cloudinit {
id = "IxOQKy-PC4S-dNsp-aLxG-1n14-cGYD-t4sgFf"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_time = 1664734860
creation_host = "pve"
segment_count = 1

segment1 {
start_extent = 0
extent_count = 1

type = "thin"
thin_pool = "data"
transaction_id = 337
device_id = 68
}
}

vm-203-disk-0 {
id = "ccB5my-U0x3-siQ9-agkJ-10MJ-Ltdt-u61v5S"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_time = 1664734861
creation_host = "pve"
segment_count = 1

segment1 {
start_extent = 0
extent_count = 6400

type = "thin"
thin_pool = "data"
transaction_id = 338
device_id = 69
}
}

vm-200-disk-0 {
id = "qYBCg7-Ee1k-ejTl-A0tW-AdfV-wCx1-9LOK63"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_time = 1668015061
creation_host = "pve"
segment_count = 1

segment1 {
start_extent = 0
extent_count = 12800

type = "thin"
thin_pool = "data"
transaction_id = 377
device_id = 72
}
}

vm-252-disk-0 {
id = "tCkwQk-iVlQ-MYXf-IMFC-INBg-d7Cs-PaStIC"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_time = 1668351938
creation_host = "pve"
segment_count = 1

segment1 {
start_extent = 0
extent_count = 8192

type = "thin"
thin_pool = "data"
transaction_id = 380
device_id = 73
}
}

snap_vm-252-disk-0_clean_install_2022_3 {
id = "2UOzlb-iiwV-vWlh-fFVh-g2iX-PRRP-7KOUoP"
status = ["READ", "VISIBLE"]
flags = ["ACTIVATION_SKIP"]
creation_time = 1668353302
creation_host = "pve"
segment_count = 1

segment1 {
start_extent = 0
extent_count = 8192

type = "thin"
thin_pool = "data"
transaction_id = 381
device_id = 74
origin = "vm-252-disk-0"
}
}

vm-202-cloudinit {
id = "M2fFKC-QlkL-jyGG-g6ve-MAhn-lmhO-iLQMyC"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_time = 1671050452
creation_host = "pve"
segment_count = 1

segment1 {
start_extent = 0
extent_count = 1

type = "thin"
thin_pool = "data"
transaction_id = 406
device_id = 75
}
}

vm-202-disk-0 {
id = "p89Zju-sTW9-cWCv
 
I think it must be grub version 2.06-13:

Code:
root@pve:~# grub-install --version
grub-install.real (GRUB) 2.06-13
root@pve:~# dpkg -l | grep grub | grep ii
ii  grub-common                          2.06-13                            amd64        GRand Unified Bootloader (common files)
ii  grub-efi-amd64-bin                   2.06-13                            amd64        GRand Unified Bootloader, version 2 (EFI-AMD64 modules)
ii  grub-pc                              2.06-13                            amd64        GRand Unified Bootloader, version 2 (PC/BIOS version)
ii  grub-pc-bin                          2.06-13                            amd64        GRand Unified Bootloader, version 2 (PC/BIOS modules)
ii  grub2-common                         2.06-13                            amd64        GRand Unified Bootloader (common files for version 2)
root@pve:~# grub-probe --version
grub-probe (GRUB) 2.06-13
root@pve:~# dpkg -l grub-pc
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version      Architecture Description
+++-==============-============-============-=====================================================
ii  grub-pc        2.06-13      amd64        GRand Unified Bootloader, version 2 (PC/BIOS version)
 
Welcome to GRUB!

error: disk `lvmid/p3y5O2-jync-R2Ao-Gtlj-It3j-FZXE-ipEDYG/bApewq-qSRB-zYqT-mzvP-pGiV-VQaf-di4Rcz` not found.
grub rescue>

How I fixed it:

1. Boot from a live USB/CD/DVD with LVM support, e.g. grml (https://grml.org/), a small Linux live system.

2. Open a terminal in grml (right-click and choose xterm).

3. Become root with sudo su, then activate the LVM volume group that contains the root partition:
vgscan
vgchange -ay

4. Create a 4 MB logical volume named grubtemp in the pve volume group, which forces an LVM metadata update:
lvcreate -L 4M pve -n grubtemp

5. Reboot. PVE should boot normally again.

Once it has booted, you can remove the grubtemp volume again from the PVE shell (confirm with y):
lvremove pve/grubtemp

All done.

Thanks!

reference: https://pve.proxmox.com/wiki/Recover_From_Grub_Failure
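
The same steps condensed into one sequence, run as root from the grml live system (volume group name pve as in a default PVE install):

Code:
# From the grml live environment, as root (sudo su):
vgscan                           # scan for LVM volume groups
vgchange -ay                     # activate the pve volume group
lvcreate -L 4M pve -n grubtemp   # tiny temporary LV, only to force an LVM metadata update
reboot
# After PVE boots normally again, from the PVE shell:
lvremove pve/grubtemp            # confirm with y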
 
Sorry for the delay @vincent2, and thanks for checking. I have just encountered this issue on one of my local machines (with grub 2.06-13) too -- grub failed to boot with the "lvmid/*** not found" error. Triggering some metadata update (e.g. by creating a small LV) made the machine boot again. I'll look into this further and report back in this thread.
 
I actually encountered this again last week as well, when migrating my Proxmox host from a 256 GB NVMe to a 1 TB NVMe (cloning). But I just triggered a metadata update as described in the Proxmox wiki and all was good.
I'm not sure what triggered it: the cloning, or pressing the reset button during the first boot because I forgot to change the UUIDs in fstab.
 
Just had this happen after an update + reboot on:

qemu-server (8.0.7) bookworm; urgency=medium
pve-qemu-kvm (8.0.2-5) bookworm; urgency=medium

Though I'm guessing it doesn't have anything to do with the update, I'm putting it here just in case. Maybe the disk was close to full; I'm not sure. I followed the instructions above and that resolved it. Thanks for the instructions!

Edit: I just ran the update on my second box and had no issues, so I'm guessing it was related to how full my first box's root LVM was.
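
In case anyone wants to check the "close to full" theory on their own box, these are just standard LVM/df commands (nothing specific to this bug) that show how full the root filesystem, the volume group, and its metadata area are:

Code:
# How full is the root filesystem?
df -h /
# Size and free space of the pve volume group and its metadata area:
vgs -o vg_name,vg_size,vg_free,vg_mda_size,vg_mda_free pve
# Per-LV usage, including Data%/Meta% for the thin pool:
lvs pve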
 
Hey! The grubtemp workaround above did the trick for me too, also with grub version 2.06-13. Is there any way to update grub so that this doesn't happen again, or is this a permanent solution?
 
