Failed to boot from UEFI VM Disk

Jan 23, 2020
I'm having trouble with a VM in my Proxmox 6.1 cluster.

BdsDxe: failed to load Boot0001 "UEFI QEMU HARDDISK QM00013" from PciRoot(0x0)/Pci(0x1E,0x0)/Pci(0x1,0x0)/Pci(0x7,0x0)/Sata(0x0,0xFFFF,0x0)

The VM was working fine with manual migrations, but after my cluster node failed, it now drops to a shell prompt with the UEFI boot error above.
In my VM config:
An EFI disk is attached.
OVMF is enabled.

How can I resolve this issue and make sure it does not happen again?
 
Hi,

can you please send the <vmid>.conf?
 
I've got a similar issue. Here is my VM config:
Code:
agent: 1
bios: ovmf
bootdisk: virtio0
cores: 6
cpu: host
efidisk0: local-zfs:vm-200-disk-1,size=1M
machine: q35
memory: 32768
name: Debian
net0: virtio=FA:CA:A1:C1:A7:10,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=98c05c4a-2dbe-4fd9-8f56-62c360ea344d
sockets: 1
startup: order=2,up=400
tablet: 0
virtio0: local-zfs:vm-200-disk-0,size=300G
vmgenid: 38683a2f-5bee-4155-a7b2-183048a9ab51
 
I also just had this happen to me with one of my VMs:

Code:
agent: 1
bios: ovmf
bootdisk: sata0
cores: 4
efidisk0: ZFS-Pool:vm-104-disk-0,size=1M
memory: 4096
name: HassIO
net0: virtio=06:B6:FE:A2:9C:7E,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
sata0: ZFS-Pool:vm-104-disk-1,size=6G
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=dacc1376-19fb-4ffd-a9da-60a39bbf044f
sockets: 1
vmgenid: 3d82a134-744f-460c-bdbd-861c29e0395e
 
Hi,
I am having the same issue as well.
could you share the VM configuration and tell us how the EFI disk was created? While trying to reproduce the issue, I found a problem with full clones of shut-down VMs/templates that have an EFI disk: the EFI disk is currently not copied correctly in that case. I've sent a patch.

EDIT: If you recently upgraded from a version before PVE 6.2, have a look here.
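For reference, a broken EFI disk can also be recreated from the CLI; a minimal sketch, assuming VM ID 101 and a ZFS storage named local-zfs (adjust both to your setup). Note that a fresh EFI disk starts with empty NVRAM, so custom boot entries have to be re-added afterwards:

```shell
# Detach the (possibly broken) EFI disk reference from the VM config,
# then let Proxmox allocate a fresh one. The "1" is only a size
# placeholder; PVE sizes EFI disks automatically.
qm set 101 --delete efidisk0
qm set 101 --efidisk0 local-zfs:1
```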
 
I'm pretty sure I used this script to set up the VM: https://github.com/whiskerz007/proxmox_hassos_install

Here is my config

Code:
agent: 1
bios: ovmf
bootdisk: sata0
cores: 2
efidisk0: local-zfs:vm-101-disk-0,size=1M
machine: pc-q35-3.1
memory: 8192
name: hassos
net0: virtio=C6:69:6E:18:24:E8,bridge=vmbr0
numa: 0
ostype: l26
sata0: local-zfs:vm-101-disk-1,size=32G
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=8fdf5c9d-2a24-4587-bfc5-2a70400e5296
sockets: 1
vmgenid: bc275848-7795-4727-80b2-b34d259af6dc
 
I have tried the link you suggested; however, when I select "Add boot option", the following screen is empty.
 
I tried the same thing, using the whiskerz script to auto-create a HA VM, but ended up with errors:

[error screenshot attached]

Can somebody point me to the right solution?

Thx, Dutchborg
 
It is a new setup.

I'm starting to play with Home Assistant, so I started with a NUC, installed Proxmox, and then ran the whiskerz script.

So I'm new to both Proxmox and HA, and already running into problems :p
 
This is my VM config:

Code:
agent: 1
balloon: 1024
bios: ovmf
boot: order=net0
bootdisk: sata0
cores: 4
efidisk0: local-lvm:vm-100-disk-0,size=4M
memory: 8192
name: hassosova-4.19
net0: virtio=DE:2D:3F:44:BD:00,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
sata0: local-lvm:vm-100-disk-1,size=100G
scsihw: virtio-scsi-pci
smbios1: uuid=cd928363-f25b-4a29-a12a-8e891a495e15
sockets: 1
vmgenid: f9ba2fbe-8f81-449d-aa45-83a5991962ef
 
I just managed to fix this. Looks like the MBR on my boot disk was corrupt.
I discovered this by running
Code:
gdisk -l /dev/zvol/rpool/data/vm-101-disk-1

Then I ran
Code:
gdisk /dev/zvol/rpool/data/vm-101-disk-1
w
to write the changes. All worked after that.
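If you want to check a disk before rewriting anything, the same gdisk package ships sgdisk, which can verify a partition table non-interactively; a sketch, using the zvol path from above as an example:

```shell
# Read-only listing: prints the table, or warns if the GPT/MBR is damaged
gdisk -l /dev/zvol/rpool/data/vm-101-disk-1

# Non-interactive verification of the same disk
sgdisk --verify /dev/zvol/rpool/data/vm-101-disk-1
```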
 
I am having the same issue as well.
I have faced this issue many times and visited different threads trying to resolve it, with no luck.

I finally concluded that the "partition table" of the virtual disk gets corrupted in the following cases:

-> You have a VM running with HA on a cluster node and you forcefully shut the node down by pressing the power button.
Don't do that.

-> Your VM is stuck in an operation (backup, migration, etc.) and you fire the "qm stop" command.
Restart the host node instead of resetting/rebooting/stopping the VM.

Now I am very diligent about VM backups and avoid the operations above.
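The second point can be sketched as a safer stop sequence (VM ID 100 and the timeout are examples):

```shell
# Ask the guest to shut down cleanly, waiting up to two minutes
qm shutdown 100 --timeout 120

# If the VM is stuck behind a lock (backup, migration, ...), clear the
# lock first instead of killing the VM mid-write
qm unlock 100

# Hard stop only as a last resort
qm stop 100
```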
 
Saved the day :) thanks
 
You, my esteemed sir, are a rock star!! Saved my day.
 
This worked for me as well. It wasn't booting to the install disk. I ran this command on my EFI drive and it got through to booting from the ISO.
 
I'm facing the same issue.
Proxmox crashed during the night and I had to force a restart this morning. One VM was running at the time and won't boot since.
I tried the gdisk operation but had no success.

My disk now reports "a valid GPT with protective MBR", but that isn't helping it boot.

When I roll back to an earlier snapshot it starts fine, but I want to recover the data written between the snapshot and the crash.

Any idea how I could get the VM booting again?

I do not have HA.

VM.conf

Code:
agent: 1
bios: ovmf
boot: order=scsi0;net0
cores: 2
cpu: x86-64-v2-AES
efidisk0: NFS-Storage:101/vm-101-disk-1.qcow2,efitype=4m,pre-enrolled-keys=1,size=528K
memory: 2048
meta: creation-qemu=8.1.2,ctime=1704556203
name: L-TT-1
net0: virtio=BC:24:11:C9:65:1E,bridge=vmbr1,firewall=1
numa: 0
ostype: l26
parent: Before_BootPartition
scsi0: NFS-Storage:101/vm-101-disk-0.qcow2,iothread=1,size=45G
scsihw: virtio-scsi-single
smbios1: uuid=cef5db83-297f-4eff-b6f7-62a3f2b4e2c8
sockets: 1
vmgenid: 732fa0ff-e436-403d-a21a-daac25da4ba8

EDIT 01/22/24
I've succeeded in fixing the boot.
The boot entry in the firmware was somehow corrupted.

When starting the VM, I pressed Escape until I got into the OVMF menu.
Then: Boot Maintenance Manager > Boot Options > Add Boot Option.

Select the disk > EFI > <debian> (my distro) > shimx64.efi, and name it as you like.

I moved it up so it boots first, and it's working again.
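The same boot entry can also be recreated with efibootmgr from inside the guest, once it has been booted some other way (e.g. from a live ISO); the disk, partition number, and loader path below match the Debian example above, but are assumptions to check against lsblk:

```shell
# Recreate the firmware boot entry pointing at Debian's shim loader
efibootmgr -c -d /dev/sda -p 1 -L "debian" -l '\EFI\debian\shimx64.efi'

# List entries verbosely and confirm the new one leads BootOrder
efibootmgr -v
```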
 
