Unable to boot OVMF VM on Proxmox 6

mimino

Member
Nov 18, 2019
I've been unsuccessful in my attempts to boot a HassOS VM created from the VDI image (https://github.com/home-assistant/hassos/releases/download/3.5/hassos_ova-3.5.vdi.gz). I've tried the VMDK as well, with the same result. The images are not corrupt; they run fine in VirtualBox.

Upon boot I get this error:
BdsDxe: failed to load Boot0001 "UEFI QEMU HARDDISK QM00013 from PciRoot(0x0)/Pci(0x1E,0x0)/Pci(0x1,0x0)/Pci(0x7,0x0)/Sata(0x0,0xFFFF,0x0)

Eventually the boot process drops to the UEFI shell, and the mapping table shows only one "BLK0" device. It's as if the SATA0 disk attached to the VM is not seen by the UEFI.
Does anyone know what's going on here?
Thanks in advance.
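For reference, the import into Proxmox went roughly like this (a sketch of the commands; VM ID 109 and the "local" storage match the config below, but the exact invocation may have differed):
Code:
# unpack the downloaded image and import it as a new disk for VM 109
gunzip hassos_ova-3.5.vdi.gz
qm importdisk 109 hassos_ova-3.5.vdi local --format qcow2
# attach the imported disk on the SATA bus
qm set 109 --sata0 local:109/vm-109-disk-1.qcow2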

VM Config:
Code:
acpi: 0
balloon: 0
bios: ovmf
bootdisk: sata0
cores: 1
cpu: qemu64
efidisk0: local:109/vm-109-disk-0.qcow2,size=128K
kvm: 0
machine: q35
memory: 1024
name: hassosova-3.5
net0: virtio=0A:D3:EB:9B:54:E5,bridge=vmbr0
numa: 0
onboot: 1
ostype: other
sata0: local:109/vm-109-disk-1.qcow2,size=6G
scsihw: virtio-scsi-pci
smbios1: uuid=98274f15-8de0-460c-91cb-a6eb13c604dd
sockets: 1
vmgenid: cc5ba15b-6c83-495b-8770-4f0844d39ca7

Package versions:
Code:
proxmox-ve: 6.0-2 (running kernel: 5.0.21-3-pve)
pve-manager: 6.0-12 (running version: 6.0-12/0a603350)
pve-kernel-helper: 6.0-12
pve-kernel-5.0: 6.0-11
pve-kernel-4.15: 5.4-9
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-3-pve: 5.0.21-7
pve-kernel-5.0.21-2-pve: 5.0.21-7
pve-kernel-4.15.18-21-pve: 4.15.18-48
pve-kernel-4.15.18-9-pve: 4.15.18-30
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-3
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-7
libpve-guest-common-perl: 3.0-2
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.0-9
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-8
pve-cluster: 6.0-7
pve-container: 3.0-10
pve-docs: 6.0-8
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-4
pve-ha-manager: 3.0-3
pve-i18n: 2.0-3
pve-qemu-kvm: 4.0.1-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-13
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2
 
Hi all

With the latest updates Proxmox did, I guess they also broke the boot "flag" on the disks.
I tried to import a disk from a .vhdx and it didn't boot after the import.
I did a downgrade on the packages and it worked.
They must have messed something up.
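The downgrade itself is just apt version pinning; a rough sketch (pve-qemu-kvm and pve-edk2-firmware are only guesses at the relevant packages, and an older build must still be available in the repository or apt cache):
Code:
# see which versions apt can still install
apt list --all-versions pve-qemu-kvm pve-edk2-firmware
# pin one package to an older version (placeholder, use a version from the list above)
apt install pve-qemu-kvm=<older-version>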
 
If it was only the boot "flag", then why would this disk be completely invisible in the UEFI? I've tried multiple OVMF images from different vendors with the same result, so it's not that.
My guess is that it has something to do with the kernel/KVM. Apparently pve-kernel-5.3 fixes some of the boot problems (like booting pfSense). Not sure if any of the fixes are UEFI-related; I haven't had a chance to install and test it yet.
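When I get to it, the install should be roughly this (a sketch, assuming the pve-kernel-5.3 meta-package is available in the configured repository):
Code:
apt update
apt install pve-kernel-5.3
reboot
# after rebooting, confirm which kernel is running
uname -r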
 
Just installed 5.3 to give it a shot, no dice. The UEFI still can't see the underlying disks. This is super frustrating, as I have machines that have been hard down for over a week now.
 
I second that. New kernel doesn't solve the issue.
 
If it was only the boot "flag" then why would this disk be completely invisible in the UEFI?
Because OVMF (the UEFI for VMs) doesn't look at disks without this flag at all. This is an optimization to reduce boot time.
There were people with >10 or even >100 disks, where OVMF needed several seconds to minutes to scan and process them all. No bootindex, not visible to OVMF.
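If you want to verify whether a bootindex is being passed for a given disk, one way (a sketch, using VM 109 from the first post) is to dump the QEMU command line Proxmox generates and look for the bootindex property on the -device arguments:
Code:
# print the full kvm command line for the VM
qm showcmd 109
# or just check whether any device gets a bootindex at all
qm showcmd 109 | grep -o 'bootindex=[0-9]*'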
 
Gotcha, this makes sense.
 
So how do I fix this problem? Should I switch to another ISO file?
 
So how do I fix this problem?

If it's the bootindex, you need to ensure that the ISO and the disks you want to boot from are specified in the "boot order" option. That option can be changed in the Options panel of a VM in the Proxmox VE web-interface.
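The same can be done from the CLI; a sketch for the VM config posted above (sata0 as the boot disk, with the order disk, then CD-ROM, then network):
Code:
# mark sata0 as the disk used by the 'c' (disk) boot entry
qm set 109 --bootdisk sata0
# boot order: c = disk, d = CD-ROM, n = network
qm set 109 --boot cdn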
 
No, it's obviously not the index problem. The issue is described in the first post. Still no solution to it, even after the latest upgrade with the 5.3 kernel.
 
99% of the issues described by those symptoms are caused by the bootindex, so I wouldn't call it "non-obvious" to try ;)

Did you try another backing bus, like SCSI (with a VirtIO or non-VirtIO controller), instead of the mentioned SATA?
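Switching the bus is only a config change; a sketch of what it would look like in /etc/pve/qemu-server/109.conf if the disk from the first post were moved from SATA to SCSI (with the VM powered off):
Code:
# before
bootdisk: sata0
sata0: local:109/vm-109-disk-1.qcow2,size=6G
# after
bootdisk: scsi0
scsi0: local:109/vm-109-disk-1.qcow2,size=6G
scsihw: virtio-scsi-pci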
 
I am facing the same issue in my cluster (Proxmox 6.1).

BdsDxe: failed to load Boot0001 "UEFI QEMU HARDDISK QM00013 from PciRoot(0x0)/Pci(0x1E,0x0)/Pci(0x1,0x0)/Pci(0x7,0x0)/Sata(0x0,0xFFFF,0x0)

The VM was working after a manual migration, but since my cluster node failed it drops to the shell prompt after the UEFI boot error.
In my VM config:
An EFI disk is attached.
OVMF is enabled.

How can I resolve this issue and make sure it does not behave like that again?
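To compare the boot-related settings before and after the failover, one can dump them on the node the VM ended up on (a sketch; replace <vmid> with the VM's ID):
Code:
qm config <vmid> | grep -E 'bios|boot|efidisk|sata|scsi|virtio'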
 
Same issue here:

BdsDxe: failed to load Boot0001 "UEFI QEMU HARDDISK QM00013 from PciRoot(0x0)/Pci(0x1E,0x0)/Pci(0x1,0x0)/Pci(0x7,0x0)/Sata(0x0,0xFFFF,0x0)
 
Same problem here. Is it a bug in the OVMF implementation? Are there any updates? The last time a staff member responded was Dec 19, 2019, and this thread is still first on Google search, so it must be quite popular!

Hope someone will figure things out!
 

Did you check this? I.e., in the web-interface under Options:
[Screenshot: VM Options panel in the Proxmox Virtual Environment web-interface]

Ensure that the respective virtual disk is selected as one of the boot devices.

As mentioned multiple times, this is an optimization in OVMF: there is no point in checking disks not marked as boot devices, since some setups have tens or hundreds of disks and booting would be slowed down quite a bit.
 

I have the same problem. I followed the step-by-step guide for Windows 10 GPU passthrough (https://youtu.be/fgx3NMk6F54?t=272), and after changing the BIOS setting to OVMF it fails to boot; if I revert the BIOS setting to SeaBIOS it boots again.
It simply can't see the normally booting scsi0 drive with the OVMF (UEFI) setting.
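For reference, for OVMF to consider scsi0 at all, the boot-related part of the VM config should look roughly like this (a sketch; the storage names and sizes are taken from the config earlier in this thread, yours will differ):
Code:
bios: ovmf
bootdisk: scsi0
efidisk0: local:109/vm-109-disk-0.qcow2,size=128K
scsi0: local:109/vm-109-disk-1.qcow2,size=6G
scsihw: virtio-scsi-pci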
 


Thanks for your reply and the time you take to answer. I did everything step by step from this post and tried some other things around this issue. I already worked with the boot order and removed the CD-ROM from my config to make sure the VM was in the simplest possible configuration. I tried with VirtIO SCSI, VirtIO Block and SATA, and got the same problem every time. Editing the boot order didn't make any difference.
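If the disk does show up in the UEFI shell's mapping table once the boot order is fixed, but the automatic boot entry is still broken, a last-resort sketch (assuming the guest's EFI system partition contains \EFI\BOOT\BOOTX64.EFI) is to boot it by hand and add a persistent entry from the shell:
Code:
Shell> map -r
Shell> fs0:
FS0:\> bcfg boot add 0 fs0:\EFI\BOOT\BOOTX64.EFI "manual boot entry"
FS0:\> reset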
 
