[SOLVED] Debian VM Issue - Error "Failed: grub-install --target=x86_64-efi"

scyto
Aug 8, 2023
I have done a lot of research (googling) and I am stumped by this error, especially as the VM boots perfectly.

I am hoping someone more experienced in EFI can help me fix it (the error is annoying and scary, but ultimately [currently] harmless as the VM boots OK).

Code:
Replacing config file /etc/default/grub with new version
Installing for x86_64-efi platform.
grub-install: warning: efivarfs_get_variable: open(/sys/firmware/efi/efivars/blk0-47c7b225-c42a-11d2-8e57-00a0c969723b): No such file or directory.
grub-install: warning: efi_get_variable: ops->get_variable failed: No such file or directory.
grub-install: warning: device_get: could not access /sys/block/sda/device/device: No such file or directory.
grub-install: warning: get_file: could not open file "/sys/devices/pci0000:00/firmware_node/path" for reading: No such file or directory.
grub-install: warning: get_file: could not open file "/sys/devices/pci0000:00/firmware_node/hid" for reading: No such file or directory.
grub-install: warning: parse_acpi_hid_uid: could not read devices/pci0000:00/firmware_node/hid: No such file or directory.
grub-install: warning: device_get: parsing pci_root failed: No such file or directory.
grub-install: warning: efi_va_generate_file_device_path_from_esp: could not get ESP disk info: No such file or directory.
grub-install: warning: efi_generate_file_device_path_from_esp: could not generate File DP from ESP: No such file or directory.
grub-install: error: failed to register the EFI boot entry: No such file or directory.
Failed: grub-install --target=x86_64-efi
WARNING: Bootloader is not properly installed, system may not be bootable
Generating grub configuration file ...

Background
  1. I have 3 'identical' VMs that comprise my docker swarm
  2. They were originally installed on Hyper-V as Gen2 VMs (i.e. they used UEFI from the get-go)
  3. I migrated them to Proxmox over a year ago
  4. they are all configured identically in Proxmox
  5. I updated them all from bullseye to bookworm today (note this error message predates that)
  6. they started life as bullseye
  7. they all boot fine
  8. two nodes don't have this error, one node does have this error
Info on the node with the error:
  1. its efivars filesystem is present and fully populated
  2. /boot/EFI exists and is populated
  3. the only difference I can see is this:
Code:
node 1
------
alex@Docker01:~$ efibootmgr -v
BootCurrent: 0003
Timeout: 3 seconds
BootOrder: 0003,0009,0001,0000
Boot0000* UiApp FvVol(7cb8bdc9-f8eb-4f34-aaea-3ee4af6516a1)/FvFile(462caa21-7614-4503-836e-8ab6f4662331)
Boot0001* UEFI QEMU QEMU HARDDISK       PciRoot(0x0)/Pci(0x1e,0x0)/Pci(0x4,0x0)/Pci(0x1,0x0)/SCSI(0,0)N.....YM....R,Y.
Boot0003* debian        HD(1,GPT,6f21ca1e-aab8-4f46-934c-f9309a51f3c0,0x800,0x100000)/File(\EFI\debian\shimx64.efi)
Boot0009* EFI Grub Boot PciRoot(0x0)/Pci(0x1e,0x0)/Pci(0x4,0x0)/Pci(0x1,0x0)/SCSI(0,0)/HD(1,GPT,6f21ca1e-aab8-4f46-934c-f9309a51f3c0,0x800,0x100000)/File(\EFI\debian\grubx64.efi)


node 2
------
alex@Docker02:~$ efibootmgr -v
BootCurrent: 0003
Timeout: 3 seconds
BootOrder: 0003,0009,0001,0000
Boot0000* UiApp FvVol(7cb8bdc9-f8eb-4f34-aaea-3ee4af6516a1)/FvFile(462caa21-7614-4503-836e-8ab6f4662331)
Boot0001* UEFI QEMU QEMU HARDDISK       PciRoot(0x0)/Pci(0x1e,0x0)/Pci(0x4,0x0)/Pci(0x1,0x0)/SCSI(0,0)N.....YM....R,Y.
Boot0003* debian        HD(1,GPT,84aa1155-ad29-4886-b4dd-28b0914422d8,0x800,0x100000)/File(\EFI\debian\shimx64.efi)
Boot0009* EFI grubx64.efi boot  PciRoot(0x0)/Pci(0x1e,0x0)/Pci(0x4,0x0)/Pci(0x1,0x0)/SCSI(0,0)/HD(1,GPT,84aa1155-ad29-4886-b4dd-28b0914422d8,0x800,0x100000)/File(\EFI\debian\grubx64.efi)



node 3 (node with error)
------
alex@docker03:/sys/firmware/efi/efivars$ efibootmgr -v
BootCurrent: 000A
Timeout: 3 seconds
BootOrder: 000A,0003,0000
Boot0000* UiApp FvVol(7cb8bdc9-f8eb-4f34-aaea-3ee4af6516a1)/FvFile(462caa21-7614-4503-836e-8ab6f4662331)
Boot0003* UEFI QEMU QEMU HARDDISK       PciRoot(0x0)/Pci(0x1e,0x0)/Pci(0x4,0x0)/Pci(0x1,0x0)/SCSI(0,0)N.....YM....R,Y.
Boot000A* grubx64.efi   PciRoot(0x0)/Pci(0x1e,0x0)/Pci(0x4,0x0)/Pci(0x1,0x0)/SCSI(0,0)/HD(1,GPT,b2ac3bd0-0387-4d13-8efb-8931fe90843c,0x800,0x100000)/File(\EFI\debian\grubx64.efi)

What I have tried:
  • various reinstall and recheck invocations of grub-install
  • reinstalling the EFI platform package
  • used grep efi /proc/self/mounts to check the EFI partition is mounted the same way on all nodes
  • various combinations of apt-get install --reinstall grub-efi, grub-install and update-grub
  • all the steps here https://wiki.debian.org/GrubEFIReinstall (the troubleshooting section doesn't cover this scenario)
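For anyone trying the same checks, the reinstall/recheck steps above amount to roughly this (run as root; package name grub-efi-amd64 assumed for a standard Debian amd64 EFI install):

```
# check the ESP is mounted the same way on every node
grep efi /proc/self/mounts

# reinstall the grub EFI platform package
apt-get install --reinstall grub-efi-amd64

# re-run grub-install for the EFI target and regenerate the config
grub-install --target=x86_64-efi
update-grub
```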
What I have found:

The blk0-47c7b225-c42a-11d2-8e57-00a0c969723b variable is 'mythical'; I actually found it mentioned here:

https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1868553
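The workaround described in that bug boils down to registering the boot entry by hand with efibootmgr instead of letting grub-install write it. Adapted to this VM's layout (disk, partition and loader path taken from the efibootmgr -v output above; untested on my part, just a sketch):

```
# create a 'debian' entry pointing at shim on the first GPT partition of sda
efibootmgr -c -d /dev/sda -p 1 -L debian -l '\EFI\debian\shimx64.efi'
```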

I am not installing on an MD device, RAID controller or anything else - sda is a simple disk.

I assume there is / was some bug in one of the grub/EFI packages that I hit.

This https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=955390 states that later shim files fixed the issue - which is odd, as I think I already have those....

The question is - can I do anything to fix it?
 
Last edited:
Accidentally fixed by one of these things:

  1. reinstalling grub with the no-nvram option: sudo grub-install --no-nvram
  2. shutting down, entering the virtual BIOS for the VM and inspecting all the disk boot settings (i.e. what's in the NVRAM) without saving any changes
  3. rebooting
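For anyone landing here from a search, step 1 is just the following (the --no-nvram flag tells grub-install to skip registering the EFI boot entry in firmware NVRAM, which is exactly the part that was failing; adding --target=x86_64-efi explicitly is my assumption, matching the failing command):

```
sudo grub-install --target=x86_64-efi --no-nvram
sudo update-grub
```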
 