Error: disk 'lvmid/***' not found, grub rescue.

EDIT: Oh sorry, I somehow missed that the version was already newer :/

EDIT2: It does seem that the original bug is fixed (@fweber used the reproducer from here to produce the problematic condition and the bug didn't trigger), but maybe there is another bug.

Hi,
hey! this did the trick for me, also grub version 2.06-13. Is there any way to update grub so that this doesn't happen again? Or is this a permanent solution?
it should be fixed in Proxmox VE 8/Debian 12 with grub2(-common) >= 2.06-8.1. From the changelog:
Code:
grub2 (2.06-8.1) experimental; urgency=medium

* Non-maintainer upload.
* Fix an issue where a logical volume rename would lead grub to fail to
boot (Closes: #987008)

-- Antoine Beaupré <anarcat@debian.org>  Sat, 25 Feb 2023 15:16:55 -0500

If you are still on Proxmox VE 7, see here for the upgrade guide: https://pve.proxmox.com/wiki/Upgrade_from_7_to_8
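
To quickly check which grub version a host is currently running, something like the following should work (grub2-common is the package name carrying the fix; grub-install --version reports the version of the installed grub tools):
Code:
dpkg-query -W grub2-common
grub-install --version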
 
I have tried to reproduce the grub error in a PVE 8 VM, but have not succeeded so far.

I tried the following: repeatedly create/remove LVM logical volumes to trigger metadata updates, and reboot once the current metadata starts close to the end of the metadata ring buffer, wraps around, and continues at the beginning of the buffer (a rough sketch of such a loop is included below). vgscan -vvv indicates this situation with a log message like this (note the size A (+B) with nonzero B, indicating that B bytes reside at the beginning of the metadata ring buffer):
Code:
Reading metadata summary from /dev/... at 1036800 size 11776 (+43548)
If this is the case, reboot. Under PVE 7.4 / Debian Bullseye, grub 2.06-3 then threw the disk 'lvmid/...' not found error and failed to boot. This was due to a bug in the LVM metadata parser in the wraparound case of the metadata ring buffer. However, under PVE 8 / Debian Bookworm, grub 2.06-13 boots just fine.
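
In case others want to try this as well, here is a minimal sketch of the create/remove loop described above (the VG name pve and the LV name repro_tmp are just placeholders; only run this against a throwaway test VG):
Code:
# repeatedly create and remove a tiny LV; each operation writes new metadata
# and advances the ring buffer
while true; do
    lvcreate -y -L 4M -n repro_tmp pve
    lvremove -y pve/repro_tmp
    # stop once the summary line shows a nonzero (+N), i.e. the metadata wraps
    # around to the beginning of the ring buffer
    vgscan -vvv 2>&1 | grep "Reading metadata summary" | grep -q '(+[1-9]' && break
done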

So the disk 'lvmid/...' not found error on boot with PVE 8 / Debian Bookworm and grub 2.06-13 is probably caused by a different bug. Since the workaround is the same (trigger an LVM metadata update so that the metadata resides in one contiguous section of the ring buffer), the new bug may also be related to the wraparound case of the metadata ring buffer. That said, I currently don't see anything wrong with the LVM metadata parser in grub 2.06-13, so it would be great if we could get some more debugging information.

For anyone who encounters this bug with PVE 8 / grub 2.06-13, could you gather and post the following data before triggering the metadata update workaround? On the live ISO, do the following (a consolidated sketch of these commands is included after the list):
  • Find the path of the LVM physical volume containing the root partition by running pvs
  • Post the output of pvdisplay and vgdisplay
  • Post the output of pvck PV --dump headers (replace PV with the physical volume path from above). For example, if the physical volume is /dev/sda3, please provide the output of pvck /dev/sda3 --dump headers
  • Post the output of grub-fstest --version and grub-fstest -v PV ls (replace PV with the physical volume path from above)
  • Dump and gzip the metadata section to a file: head -c 1048576 PV | gzip > /tmp/metadata.gz (replace PV with the physical volume path from above). Please attach the file /tmp/metadata.gz. Note that if you use the LVM volume group (or a thin pool) as VM storage, the metadata will contain e.g. snapshot names, so please only attach them if you're okay with that.
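
For convenience, here is a rough sketch that runs the above commands in one go (/dev/sda3 is only an example path; substitute the physical volume reported by pvs):
Code:
PV=/dev/sda3                      # example only -- replace with your PV path
pvs
pvdisplay
vgdisplay
pvck "$PV" --dump headers
grub-fstest --version
grub-fstest -v "$PV" ls
# dump and compress the first MiB (the area containing the LVM metadata)
head -c 1048576 "$PV" | gzip > /tmp/metadata.gz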
 
I'm getting the following error, which seems to fit the description. The system is a bare-metal install of Proxmox 8 that dual-boots with Windows 11. Proxmox is on a separate 500GB SATA SSD; Windows 11 is on an onboard 1TB NVMe M.2 drive.
Welcome to GRUB!
error: disk 'lvmid/bW3QgF-AxU2-EzA6-W6xv-t0no-oana-QM2CTn/JOaCOh-5124-00177-14164-EVOH-THET-395300' not found.
grub rescue>

I rebooted using the "Rescue Boot" and it came up fine with everything working, but I did notice this timeout job error during boot. No idea if it is related in any way.

Reached target zfs-volumes.target - ZFS volumes are ready.
[***  ] Job dev-disk-by\x2duuid-0Bcaf630\x2d4e69\x2d4fa3\x2d9512\x2d3ed08b8ccf0e.device/start running (1min 29s / 1min 30s)

Here is the data you requested

Code:
root@pve:~# update-grub --version
grub-mkconfig (GRUB) 2.06-13
root@pve:~# pvs
  PV         VG  Fmt  Attr PSize    PFree
  /dev/sda3  pve lvm2 a--  <464.76g 16.00g
root@pve:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               pve
  PV Size               464.76 GiB / not usable <3.01 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              118978
  Free PE               4096
  Allocated PE          114882
  PV UUID               0ApKPo-Rj5F-tn4T-bF3n-1hoh-8lOo-PIfjfQ
 
root@pve:~# vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID            
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1599
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                11
  Open LV               6
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <464.76 GiB
  PE Size               4.00 MiB
  Total PE              118978
  Alloc PE / Size       114882 / <448.76 GiB
  Free  PE / Size       4096 / 16.00 GiB
  VG UUID               bW3QgF-AxU2-EzA6-W6xv-t0no-oana-QM2CTn
 
root@pve:~# pvck /dev/sda3 -- dump headers
  Found label on /dev/sda3, sector 1, type=LVM2 001
  Found text metadata area: offset=4096, size=1044480
  Cannot use dump: device not found.
  Cannot use headers: device not found.
root@pve:~# grub-fstest --version
grub-fstest (GRUB) 2.06-13
root@pve:~# grub-fstest -v /dev/sda3 ls
grub-fstest: info: Scanning for DISKFILTER devices on disk proc.
grub-fstest: info: Scanning for mdraid1x devices on disk proc.
grub-fstest: info: Scanning for mdraid09 devices on disk proc.
grub-fstest: info: Scanning for mdraid09_be devices on disk proc.
grub-fstest: info: Scanning for dmraid_nv devices on disk proc.
grub-fstest: info: Scanning for lvm devices on disk proc.
grub-fstest: info: Scanning for ldm devices on disk proc.
grub-fstest: info: scanning proc for LDM.
grub-fstest: info: no LDM signature found.
grub-fstest: info: Scanning for DISKFILTER devices on disk loop0.
grub-fstest: info: Scanning for mdraid1x devices on disk loop0.
grub-fstest: info: Scanning for mdraid09 devices on disk loop0.
grub-fstest: info: Scanning for mdraid09_be devices on disk loop0.
grub-fstest: info: Scanning for dmraid_nv devices on disk loop0.
grub-fstest: info: Scanning for lvm devices on disk loop0.
grub-fstest: info: unknown LVM type thin-pool.
grub-fstest: info: unknown LVM type thin.
grub-fstest: info: unknown LVM type thin.
grub-fstest: info: unknown LVM type thin.
grub-fstest: info: unknown LVM type thin.
grub-fstest: info: unknown LVM type thin.
grub-fstest: info: unknown LVM type thin.
grub-fstest: info: unknown LVM type thin.
grub-fstest: info: unknown LVM type thin.
grub-fstest: info: Found array pve.
grub-fstest: info: Inserting loop0 (+0,974673935) into pve (lvm)
.
grub-fstest: info: Scanning for DISKFILTER devices on disk host.
grub-fstest: info: Scanning for mdraid1x devices on disk host.
grub-fstest: info: Scanning for mdraid09 devices on disk host.
grub-fstest: info: Scanning for mdraid09_be devices on disk host.
grub-fstest: info: Scanning for dmraid_nv devices on disk host.
grub-fstest: info: Scanning for lvm devices on disk host.
grub-fstest: info: Scanning for ldm devices on disk host.
grub-fstest: info: scanning host for LDM.
grub-fstest: info: no LDM signature found.
grub-fstest: info: Scanning for DISKFILTER devices on disk lvm/pve-lvol0_pmspare.
grub-fstest: info: Scanning for mdraid1x devices on disk lvm/pve-lvol0_pmspare.
grub-fstest: info: Scanning for mdraid09 devices on disk lvm/pve-lvol0_pmspare.
grub-fstest: info: Scanning for mdraid09_be devices on disk lvm/pve-lvol0_pmspare.
grub-fstest: info: Scanning for dmraid_nv devices on disk lvm/pve-lvol0_pmspare.
grub-fstest: info: Scanning for lvm devices on disk lvm/pve-lvol0_pmspare.
grub-fstest: info: no LVM signature found.
grub-fstest: info: Scanning for ldm devices on disk lvm/pve-lvol0_pmspare.
grub-fstest: info: scanning lvm/pve-lvol0_pmspare for LDM.
grub-fstest: info: no LDM signature found.
grub-fstest: info: Scanning for DISKFILTER devices on disk lvm/pve-data_tmeta.
grub-fstest: info: Scanning for mdraid1x devices on disk lvm/pve-data_tmeta.
grub-fstest: info: Scanning for mdraid09 devices on disk lvm/pve-data_tmeta.
grub-fstest: info: Scanning for mdraid09_be devices on disk lvm/pve-data_tmeta.
grub-fstest: info: Scanning for dmraid_nv devices on disk lvm/pve-data_tmeta.
grub-fstest: info: Scanning for lvm devices on disk lvm/pve-data_tmeta.
grub-fstest: info: no LVM signature found.
grub-fstest: info: Scanning for ldm devices on disk lvm/pve-data_tmeta.
grub-fstest: info: scanning lvm/pve-data_tmeta for LDM.
grub-fstest: info: no LDM signature found.
grub-fstest: info: Scanning for DISKFILTER devices on disk lvm/pve-data_tdata.
grub-fstest: info: Scanning for mdraid1x devices on disk lvm/pve-data_tdata.
grub-fstest: info: Scanning for mdraid09 devices on disk lvm/pve-data_tdata.
grub-fstest: info: Scanning for mdraid09_be devices on disk lvm/pve-data_tdata.
grub-fstest: info: Scanning for dmraid_nv devices on disk lvm/pve-data_tdata.
grub-fstest: info: Scanning for lvm devices on disk lvm/pve-data_tdata.
grub-fstest: info: no LVM signature found.
grub-fstest: info: Scanning for ldm devices on disk lvm/pve-data_tdata.
grub-fstest: info: scanning lvm/pve-data_tdata for LDM.
grub-fstest: info: no LDM signature found.
grub-fstest: info: Scanning for DISKFILTER devices on disk lvm/pve-root.
grub-fstest: info: Scanning for mdraid1x devices on disk lvm/pve-root.
grub-fstest: info: Scanning for mdraid09 devices on disk lvm/pve-root.
grub-fstest: info: Scanning for mdraid09_be devices on disk lvm/pve-root.
grub-fstest: info: Scanning for dmraid_nv devices on disk lvm/pve-root.
grub-fstest: info: Scanning for lvm devices on disk lvm/pve-root.
grub-fstest: info: no LVM signature found.
grub-fstest: info: Scanning for ldm devices on disk lvm/pve-root.
grub-fstest: info: scanning lvm/pve-root for LDM.
grub-fstest: info: no LDM signature found.
grub-fstest: info: Scanning for DISKFILTER devices on disk lvm/pve-swap.
grub-fstest: info: Scanning for mdraid1x devices on disk lvm/pve-swap.
grub-fstest: info: Scanning for mdraid09 devices on disk lvm/pve-swap.
grub-fstest: info: Scanning for mdraid09_be devices on disk lvm/pve-swap.
grub-fstest: info: Scanning for dmraid_nv devices on disk lvm/pve-swap.
grub-fstest: info: Scanning for lvm devices on disk lvm/pve-swap.
grub-fstest: info: no LVM signature found.
grub-fstest: info: Scanning for ldm devices on disk lvm/pve-swap.
grub-fstest: info: scanning lvm/pve-swap for LDM.
grub-fstest: info: no LDM signature found.
(proc) (loop0) (host) (lvm/pve-root) (lvm/pve-swap)
root@pve:~# head -c 1048576 /dev/sda3 | gzip > /tmp/metadata.gz
root@pve:~# ls /tmp/met*
/tmp/metadata.gz
root@pve:~#
 

Attachments

  • metadata.gz
    15.5 KB
Impacted as well, though it may be user error. I was in my Windows 11 gaming partition when things just slowed and stopped. The node dropped out of sync:

Code:
2023-09-22T18:22:38.370155-04:00 mara corosync[1471]:   [TOTEM ] Token has not been received in 5175 ms
2023-09-22T18:22:40.095206-04:00 mara corosync[1471]:   [TOTEM ] A processor failed, forming new configuration: token timed out (6900ms), waiting 8280ms for consensus.
2023-09-22T18:22:41.130304-04:00 mara pvestatd[1530]: pbs: error fetching datastores - 500 Can't connect to pbs.local.technohouser.com:8007 (Connection timed out)

I'm in this thread like many others, I suspect, who found themselves dumped into the grub rescue menu with an LVM not found. I started on Proxmox 7.x and upgraded to 8.x soon after it was available, using the third-party tteck upgrade script. It's been some time since that upgrade, but I figured I'd throw that out there.

The workaround worked as advertised on my Zen 4 AMD box. Thank you very much; I'll certainly help with any detail I can add to identify the lingering cause.
 
Impacted as well, though it may be user error. [...] I'm in this thread like many others, I suspect, who found themselves dumped into the grub rescue menu with an LVM not found.
Thanks for the report. It seems like one node reset itself (likely for unrelated reasons, I'd suggest checking the logs) and hit the grub error after it rebooted. Good to hear that the workaround got it to boot again.

@tannebil thanks a lot for providing the detailed information! I have a few follow-up questions:

1) Is this a fresh installation of PVE 8, or did you upgrade from an earlier version?

I rebooted using the "Rescue Boot" and it came up fine with everything working [...]
2) Could you elaborate what you mean by "Rescue Boot"? Do I understand correctly that you managed to boot into PVE, and only then ran the workaround described in [1]?
3) Could you post the output of the following command?
Code:
efibootmgr -v

[1] https://pve.proxmox.com/wiki/Recover_From_Grub_Failure#Recovering_from_grub_"disk_not_found"_error_when_booting_from_LVM
 
Today I had this "disk not found" problem in grub rescue. I have Proxmox version 8.0 (upgraded from 7.x a few weeks ago).

It was successfully fixed by following the steps at "https://pve.proxmox.com/wiki/Recover_From_Grub_Failure".

I'm really sorry I didn't read this entire thread before applying the solution; I had only checked a few things before fixing it:

Code:
root@pve:~# vgs -S vg_name=pve -o vg_mda_size
   VMdaSize
    1020.00k

Code:
root@pve:~# vgscan -vvv 2>&1| grep "Found metadata"
(this command did not generate any output).

After applying the fix, I verified the following:

Code:
root@proxmox:~# pveversion
pve-manager/8.0.4/d258a813cfa6b390 (running kernel: 6.2.16-6-pve)
root@proxmox:~# grub-install --version
grub-install.real (GRUB) 2.06-13

In response to the last comment and for what it's worth, I have also validated the following:

Code:
root@proxmox:~# efibootmgr -v
BootCurrent: 0002
Timeout: 1 seconds
BootOrder: 0002,0001,0004,0000
Boot0000* Windows Boot Manager VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)WINDOWS......x...B.C.D.O.B.J.E.C.T.=.{.9.d.e.a.8.6.2.c.-.5.c.d.d.-.4.e.7.0.-.a.c.c.1.-.f.3.2.b.3.4.4.d.4.7.9.5.}...&..............
Boot0001* UEFI : LAN : PXE IP4 Intel(R) Ethernet Connection (6) I219-V PciRoot(0x0)/Pci(0x1f,0x6)/MAC(1c697a68b020,0)/IPv4(0.0.0.00.0.0.0,0,0)..BO
Boot0002* proxmox HD(2,GPT,4a8454fe-0772-4736-9a2d-8b7766a0b127,0x800,0x100000)/File(\EFI\proxmox\grubx64.efi)
Boot0004* UEFI : LAN : PXE IP6 Intel(R) Ethernet Connection (6) I219-V PciRoot(0x0)/Pci(0x1f,0x6)/MAC(1c697a68b020,0)/IPv6([::]:<->[::]:,0,0)..BO

I don't know why the error happened or when exactly it happened. Today I found that none of my LXCs or VMs were responding, and neither was Proxmox itself (neither via web nor via SSH). I restarted my NUC with the power button. When booting, it was also unresponsive, and I connected a monitor to find the 'grub rescue' message.

Fortunately, it was solved by applying the steps indicated in the documentation.

I'm a newbie and I don't understand most of the things I do, but I hope this information can be of some use to someone.
 
Thanks all for the help with debugging this! As it turns out, the "lvmid/..." not found error on boot can indeed still occur on UEFI systems that were upgraded to PVE 8. Fresh installations of PVE 8 should not be affected. I reproduced the issue locally now. We'll look into it and update you in this thread.
 
Hi all, thanks for the help with debugging this and sorry for the delay. As it turns out, hosts upgraded from PVE 7 to PVE 8 and booting in UEFI mode could still be affected by this bug, because the grub EFI binary on the ESP was still running the old buggy grub code.

This was the case because the grub-pc metapackage for legacy mode was installed on UEFI systems instead of the correct grub-efi-amd64 metapackage for UEFI mode, so the grub EFI binaries were not updated during the PVE7->8 upgrade. The permanent fix on PVE 8 is to install the correct grub metapackage for UEFI mode. We have added a new section with the permanent fix for PVE 8 to the "Recover from grub failure" wiki page [1], as well as a note to the upgrade guide [2].
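
As a rough sketch of what the permanent fix boils down to on an affected host (assuming a host with root on LVM that boots in UEFI mode and was upgraded from PVE 7; the wiki page [1] has the authoritative steps):
Code:
ls /sys/firmware/efi          # this directory only exists when booted in UEFI mode
dpkg -l | grep grub           # check whether grub-pc or grub-efi-amd64 is installed
apt install grub-efi-amd64    # on UEFI hosts, installs the correct metapackage
                              # (replacing grub-pc) and refreshes the grub EFI binary on the ESP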

@Stoiko Ivanov has also prepared some patches (not yet applied) to check for the correct grub metapackage in pve7to8 and after kernel upgrades [3], as well as a patch to install the correct grub metapackage when installing from the ISO [4].

If you have any questions/comments, please let me know.

[1] https://pve.proxmox.com/wiki/Recover_From_Grub_Failure#PVE_8
[2] https://pve.proxmox.com/wiki/Upgrade_from_7_to_8#GRUB_Might_Fail_To_Boot_From_LVM_in_UEFI_Mode
[3] https://lists.proxmox.com/pipermail/pve-devel/2023-October/059438.html
[4] https://lists.proxmox.com/pipermail/pve-devel/2023-September/059270.html
 
If you have any questions/comments, please let me know.
Thank you for this. However, the updated wiki page seems to imply that legacy-booting systems are not impacted - but we have one that is?

Systems with Root on ZFS and systems booting in legacy mode are not affected.

https://forum.proxmox.com/threads/p...b-install-real-gives-disk-not-found-i.135235/

I sent over the debugging info, however that is for PVE 7. I'd appreciate your clarification over there and on the above as well. Thank you!
 
Thank you for this. However, the updated wiki page seems to imply that legacy-booting systems are not impacted - but we have one that is? [...]
Sorry, did not see your reply here before answering over at [1]. Actually PVE 7 systems are affected regardless of the boot mode -- I tried to make this clear in the wiki page too [2]:
This section applies to the following setups:
  • PVE 7.4 (or earlier) hosts with their boot disk on LVM
  • PVE 8 hosts that have their boot disk on LVM, boot in UEFI mode and were upgraded from PVE 7

If you have any suggestions how to improve the wording, let me know.

[1] https://forum.proxmox.com/threads/p...b-install-real-gives-disk-not-found-i.135235/
[2] https://pve.proxmox.com/wiki/Recove...disk_not_found.22_error_when_booting_from_LVM
 
I'd say only this segment could benefit from being clearer about legacy mode, as it currently sounds like legacy mode is OK (the bug does not apply).

https://pve.proxmox.com/wiki/Upgrade_from_7_to_8#GRUB_Might_Fail_To_Boot_From_LVM_in_UEFI_Mode

Thanks again for all of your work on this. 1 in 120-250, what odds we strike!
Thanks for the feedback! However, note that the article [1] specifically targets the PVE 7 -> 8 upgrade. After the upgrade to PVE 8, hosts booting in legacy mode are indeed not affected by the grub bug anymore. An additional step after the upgrade (installing the correct metapackage) is necessary only for hosts booting in UEFI mode.

[1] https://pve.proxmox.com/wiki/Upgrade_from_7_to_8#GRUB_Might_Fail_To_Boot_From_LVM_in_UEFI_Mode
 
Hi all, thanks for the help with debugging this and sorry for the delay. [...] The permanent fix on PVE 8 is to install the correct grub metapackage for UEFI mode. We have added a new section with the permanent fix for PVE 8 to the "Recover from grub failure" wiki page [1], as well as a note to the upgrade guide [2]. [...]

[1] https://pve.proxmox.com/wiki/Recover_From_Grub_Failure#PVE_8
[2] https://pve.proxmox.com/wiki/Upgrade_from_7_to_8#GRUB_Might_Fail_To_Boot_From_LVM_in_UEFI_Mode
Thanks for [1]. I had the issue a few days ago, booted from a PVE USB stick, and applied the temporary fix. Then I rebooted and applied the permanent fix, and everything seems fine. BTW, I had to boot the console install rather than debug mode, because in debug mode my USB keyboard was not working even though it was detected as a USB device. Nice job, thanks for the support!
 
Hello,
I was affected by this bug too. I am running Proxmox 7.3.3 and am not ready or comfortable enough to jump to Proxmox 8.

I am always updating my server, so I wasn't expecting this kind of bug, failure or unreliability (is this a word?) to happen.

Is there a temporary fix within 7.x.x? Like updating the binaries of grub or whatever, without having to jump to 8?

It was a major annoyance for me to have to physically go to where the server was. I restarted it remotely... and BANG.

But hey, at least for now it is solved. Remediated...
The 4MB volume solution did it.

Can it be SOLVED while staying on 7?

Thank you all!
 
Is there a temporary fix within 7.x.x? Like updating the binaries of grub or whatever, without having to jump to 8?
I don't see a reliable way to fix this permanently while staying on PVE 7 / Debian Bullseye. I guess you could in theory build a grub2 package with the fix, but I wouldn't recommend it (messing with the bootloader packages might cause other problems in the future).

If you always reboot manually, you could check whether there is currently a metadata wraparound -- as described in the wiki article [1]:
Code:
vgscan -vvv 2>&1 | grep "Reading metadata"
If the lines end with (+N) where N is not zero, there is a wraparound and grub will most likely fail to boot after the reboot. To avoid this, you could trigger another metadata change before rebooting (e.g. by adding a small volume), verify that there is no wraparound anymore, and reboot then.
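
A rough sketch of such a pre-reboot check (assuming the default volume group name pve; adjust as needed):
Code:
# if the "Reading metadata" line ends in (+N) with N != 0, there is a wraparound
vgscan -vvv 2>&1 | grep "Reading metadata"
# in that case, trigger another metadata update, e.g. by creating a small LV ...
lvcreate -y -L 4M -n grub_workaround pve
# ... and verify that the offset now reads (+0) before rebooting
vgscan -vvv 2>&1 | grep "Reading metadata"
# note: removing the temporary LV is itself another metadata update, so if you
# remove it before rebooting, re-check afterwards
lvremove -y pve/grub_workaround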

[1] https://pve.proxmox.com/wiki/Recover_From_Grub_Failure#PVE_7.x
 
It worries me that we still don't know the root cause in many cases (apart from the already-mentioned grub bug, which is apparently properly fixed in newer versions). I personally was forced to fully REINSTALL Proxmox 8.x from scratch, since those metadata changes didn't work for me. I still have nightmares about this whole situation; hopefully this clean install works. If you have other workarounds than the one currently documented in the wiki, please share them! And update the wiki, since in my case I didn't even have space for creating a temporary LVM volume.

Anyhow, I suspect some BIOS update caused the problem. Maybe during a reboot, the BIOS or some other unknown factor makes the LVM unique IDs or PV UUID change for some reason?!?

I'm now planning to create snapshots of Proxmox itself X_X.. using Clonezilla.

PS: I'm running PVE v8.1.3 on kernel 6.5.11-7 with grub v2.06-13+pmx1.
 
It worries me that we still don't know the root cause in many cases (apart from the already-mentioned grub bug, which is apparently properly fixed in newer versions). [...] And update the wiki, since in my case I didn't even have space for creating a temporary LVM volume.
I suppose you also got the "disk lvmid/... not found" error from grub after a reboot? Did you run PVE 7, or upgrade a UEFI installation to PVE 8?

The most likely cause of this error (on PVE 7 or UEFI installations upgraded to PVE 8) does seem to be the mentioned grub bug, but in principle there could be other reasons why grub fails to read the LVM metadata with that error, the most obvious one being a corrupted disk.

In your case, it seems like the live system did recognize the pve VG, so a corrupted disk seems less likely. However, the VG was completely full, so it was not possible to trigger an LVM metadata update by creating a new logical volume. Since creating an LV is the most straightforward way to trigger a metadata update, the wiki page spells out the commands for that one only, but it does mention that there are other ways [1]:

Code:
Note that there are many other options for triggering a metadata update, e.g. using lvchange to extend an existing logical volume or add a tag to an existing logical volume.
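
For example, something like the following should trigger a metadata update without needing any free space in the VG (pve/root is only an example LV name here; any existing LV works):
Code:
lvchange --addtag grub_workaround pve/root   # adding a tag rewrites the metadata
vgscan -vvv 2>&1 | grep "Reading metadata"   # verify the (+N) part is now 0
lvchange --deltag grub_workaround pve/root   # removing the tag is another update,
                                             # so re-check before rebooting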

Anyhow, I suspect some BIOS update caused the problem. Maybe during a reboot, the BIOS or some other unknown factor makes the LVM unique IDs or PV UUID change for some reason?!?
As the PV UUIDs are stored on the disks themselves, I would not expect a BIOS upgrade to cause problems here.

[1] https://pve.proxmox.com/wiki/Recove...disk_not_found.22_error_when_booting_from_LVM
 
