Unable to boot OVMF VM on Proxmox 6

Please post the VM's command line output by using:
qm showcmd VMID --pretty
Just replace VMID with the ID of the problematic VM and paste the output here inside [code]...[/code] tags.
 

Code:
root@main:~# qm showcmd 100 --pretty
/usr/bin/kvm \
  -id 100 \
  -name windows10 \
  -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' \
  -mon 'chardev=qmp,mode=control' \
  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
  -mon 'chardev=qmp-event,mode=control' \
  -pidfile /var/run/qemu-server/100.pid \
  -daemonize \
  -smbios 'type=1,uuid=c055b464-60bc-4987-a230-0f99f00c5343' \
  -drive 'if=pflash,unit=0,format=raw,readonly,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
  -drive 'if=pflash,unit=1,format=raw,id=drive-efidisk0,file=/dev/VM/vm-100-disk-1' \
  -smp '8,sockets=1,cores=8,maxcpus=8' \
  -nodefaults \
  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
  -vnc unix:/var/run/qemu-server/100.vnc,password \
  -no-hpet \
  -cpu 'host,+pcid,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,hv_synic,hv_stimer,hv_ipi,kvm=off' \
  -m 8192 \
  -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg \
  -device 'vmgenid,guid=1fdd7bce-dd85-4707-84f4-ea08690c0adc' \
  -device 'usb-tablet,id=tablet,bus=ehci.0,port=1' \
  -device 'VGA,id=vga,bus=pcie.0,addr=0x1' \
  -chardev 'socket,path=/var/run/qemu-server/100.qga,server,nowait,id=qga0' \
  -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' \
  -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' \
  -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' \
  -iscsi 'initiator-name=iqn.1993-08.org.debian:01:582c4d969a3e' \
  -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
  -drive 'file=/dev/VM/vm-100-disk-0,if=none,id=drive-scsi0,cache=writeback,format=raw,aio=threads,detect-zeroes=on' \
  -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' \
  -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' \
  -device 'virtio-net-pci,mac=5A:CF:AE:F4:AF:1F,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' \
  -rtc 'driftfix=slew,base=localtime' \
  -machine 'type=q35+pve1' \
  -global 'kvm-pit.lost_tick_policy=discard'
 

Guy, you made my day! :cool:

Here is my story, so that other users can benefit from it:
I'm using a Xeon 1230v2, Proxmox 6.2, GPU passthrough with the ACS override patch, and a Radeon 290X Vapor-X Tri-X. It's an old system, I know, but it costs nothing to keep and is up and running. VMs up and running: Windows 10, fully patched, with the latest Radeon driver, and OMV with HDD passthrough.

When I tried to use ffmpeg with hardware acceleration, the host system (!) got a hard reset and crashed, so my OMV was left with the error described here. The EFI fix (post #23) repaired my OMV VM.

Garfield
 
Guy, you made my day! :cool:

I'm glad to hear that :)

If you have time, could you make my day too?

https://forum.proxmox.com/threads/nvidia-gpu-passtrough-issues.69204/

- In Windows 10 I have sound, but the NVIDIA GT 710 is blocked by the OS with a Code 43 error (rumor is that the NVIDIA driver refuses to load when it detects a hypervisor)
- In Linux (Ubuntu, Linux Mint Debian Edition) there is no sound (though under Ubuntu I can see the volume level pulsing), and the discrete GPU is not doing the rendering:
OpenGL vendor string: VMware, Inc.
OpenGL renderer string: llvmpipe (LLVM 9.0.1, 256 bits)

I've ordered an HDMI EDID emulator plug (arriving in 2-3 workdays); will it help? How do you use yours?

For example, here is my Ubuntu config:

Code:
root@pve:~# qm showcmd 104 --pretty
/usr/bin/kvm \
  -id 104 \
  -name Ubuntuvm \
  -chardev 'socket,id=qmp,path=/var/run/qemu-server/104.qmp,server,nowait' \
  -mon 'chardev=qmp,mode=control' \
  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
  -mon 'chardev=qmp-event,mode=control' \
  -pidfile /var/run/qemu-server/104.pid \
  -daemonize \
  -smbios 'type=1,uuid=f2e5a12a-30c2-4c38-87ce-727212b8bba7' \
  -drive 'if=pflash,unit=0,format=raw,readonly,file=/usr/share/pve-edk2-firmware//OVMF_CODE.fd' \
  -drive 'if=pflash,unit=1,format=raw,id=drive-efidisk0,file=/dev/bigdata/vm-104-disk-1' \
  -smp '4,sockets=1,cores=4,maxcpus=4' \
  -nodefaults \
  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
  -vga none \
  -nographic \
  -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,kvm=off' \
  -m 6144 \
  -readconfig /usr/share/qemu-server/pve-q35-4.0.cfg \
  -device 'vmgenid,guid=1a268e7e-c967-4acb-a44a-30fbd12fa7b9' \
  -device 'usb-tablet,id=tablet,bus=ehci.0,port=1' \
  -device 'vfio-pci,host=0000:01:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on' \
  -device 'vfio-pci,host=0000:01:00.1,id=hostpci0.1,bus=ich9-pcie-port-1,addr=0x0.1' \
  -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' \
  -iscsi 'initiator-name=iqn.1993-08.org.debian:01:41e2ea4ea216' \
  -drive 'file=/dev/bigdata/vm-104-disk-0,if=none,id=drive-virtio0,cache=writeback,format=raw,aio=threads,detect-zeroes=on' \
  -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' \
  -netdev 'type=tap,id=net0,ifname=tap104i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' \
  -device 'virtio-net-pci,mac=0E:0E:B5:5F:2C:03,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' \
  -machine 'type=q35+pve0' \
  -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv-vendor-id=nvidia,kvm=off'
 
Code:
root@main:~# qm showcmd 100 --pretty
[... identical to the output quoted earlier in this thread ...]

This could possibly also be fixed by (re-)adding the boot menu entry, see:
https://pve.proxmox.com/wiki/OVMF/UEFI_Boot_Entries
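
(For later readers: from inside a Linux guest, re-adding a lost boot entry can also be done with efibootmgr instead of the OVMF menus. A minimal sketch, assuming the ESP is partition 1 of /dev/sda and a Debian-style loader path; adjust both to your setup:)

Code:
root@guest:~# efibootmgr -c -d /dev/sda -p 1 -L debian -l '\EFI\debian\grubx64.efi'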
 
Exact same issue as described by the OP. The VM went from working to broken with no changes to its config. It's causing chaos, as the VM was Home Assistant and responsible for a whole lot of automations around the house. :(
 
I struggled with this issue for a few days.

As many describe here, I created an OVMF (q35) machine and tried installing Windows 10 on it by attaching a drive with the .iso. To no avail: I could only get to the screen that says:

Code:
BdsDxe: failed to load Boot0006 .....

The problem is with the .iso image: for some reason, although the drive is detected, the firmware can't boot from it. This happened with a couple of Windows images I got my hands on.

My solution was to create a bootable USB thumb drive that I attached to the machine via the Proxmox UI. Then, I entered the BIOS on the next boot and selected efi/boot/bootx64.efi as the file to boot from.
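
(The same USB attachment can also be done from the CLI; a minimal sketch, where VMID 100 and the vendor:device ID are illustrative. Find yours with lsusb:)

Code:
root@pve:~# lsusb
Bus 002 Device 003: ID 0781:5581 SanDisk Corp. Ultra
root@pve:~# qm set 100 --usb0 host=0781:5581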
 
I finally found how to bypass the bug, very weird. The solution that works for me is to follow the steps at https://pve.proxmox.com/wiki/OVMF/UEFI_Boot_Entries, then, after renaming the boot option at "Input the description" > Commit Changes > Esc to return to the menu > Boot Manager.
Here you should have your custom boot option present > press Enter, then press Esc.
Esc should normally return you to the Boot Maintenance Manager, but this time it booted my Win10.
 
I'm hitting this issue now too. I thought I had fixed it by destroying and remaking the EFI disk, but that apparently only works for one boot, and then the problem manifests again. The *only* reliable workaround I've found is to manually change the machine type in the VM's .conf from 'q35' to 'pc-q35-3.1'. When this change is made, a new file called 'NvVars' appears on the OS drive's ESP, and the boot process works as expected, reliably, across multiple shutdowns and reboots. This is not ideal, though, and shouldn't be necessary.

I think the 'OVMF optimisation' mentioned before is preventing all disks from being visible to the guest during early boot, which of course breaks RAID setups using physical disk passthrough. Is there a way to force this optimization off?
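
(A minimal CLI sketch of that machine-type workaround; VMID 100 is illustrative:)

Code:
root@pve:~# qm set 100 --machine pc-q35-3.1
# equivalently, edit /etc/pve/qemu-server/100.conf and set:
#   machine: pc-q35-3.1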
 
I thought I had fixed it by destroying and remaking the EFI disk, but that apparently only works for one boot, and then the problem manifests again.

So, you did remove the EFI disk, added a new one, did a fresh boot of the VM and then followed
https://pve.proxmox.com/wiki/OVMF/UEFI_Boot_Entries ?
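
(For reference, that remove/re-add cycle can also be done from the CLI; a sketch assuming VMID 100 and storage 'local-lvm', adjust both:)

Code:
root@pve:~# qm set 100 --delete efidisk0
root@pve:~# qm set 100 --efidisk0 local-lvm:1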

I think the 'OVMF optimisation' mentioned before is preventing all disks from being visible to the guest during early boot, which of course breaks RAID setups using physical disk passthrough. Is there a way to force this optimization off?

Just set the respective disks in the boot order options? Then they all get a bootindex and show up.
 
So, you did remove the EFI disk, added a new one, did a fresh boot of the VM and then followed
https://pve.proxmox.com/wiki/OVMF/UEFI_Boot_Entries ?
Yes, several times to be sure. With 'q35' as the machine type, the entries created by efibootmgr under Linux apparently weren't 'sticking'. On top of that, only one drive was showing up in early EFI boot, so grub was unable to assemble the mdadm RAID device to read its main grub.cfg, unless I exited back out to the OVMF boot config menu and then hit 'Continue' to retry, which would then succeed and boot. This issue does not manifest when explicitly setting the machine type to pc-q35-3.1 or pc-q35-4.1. I even destroyed the VM and made a fresh one using q35, attaching the passed-through physical disks after the fact, to make sure there wasn't any other non-obvious state interfering with the q35 machine type setting.

Just set the respective disks in the boot order options? Then they get all the bootindex and show up.
The UI seems to only allow one disk to be added to the boot order; adding a second device in any other position causes the first to be removed, and the CD-ROM and network options get shuffled to account for the apparent removal of the first drive.

Edit: I should note that the cluster in question is my home cluster running the up-to-date no-subscription repo ('Virtual Environment 6.2-11'); our production subscribed clusters don't have any VMs with multiple devices as boot disks, but I could simulate this there if it will help with diagnosis.
 
The UI seems to only allow one disk to be added to the boot order; adding a second device in any other position causes the first to be removed, and the CD-ROM and network options get shuffled to account for the apparent removal of the first drive.

Ugh, yeah, that does seem like bad design which I misremembered; it should be adapted.
Can you please open an enhancement request over at https://bugzilla.proxmox.com/ linking this thread?

That the EFI disk change does not persist over a full reboot is still a bit weird; possibly an issue with the combination of disk pass-through.
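
(Note for later readers: newer Proxmox VE releases (6.3+) allow multiple devices in the boot order; a sketch with an illustrative VMID and disk names:)

Code:
root@pve:~# qm set 100 --boot 'order=scsi0;scsi1;net0'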
 
Regarding the EFI boot issue: I can reproduce this reliably on other Proxmox clusters using normal disk images.

Make a new VM: Debian 10 ISO, 32 GB disk, OVMF 'BIOS', and q35 machine type.
Add a second 32 GB disk image.
Boot the VM.
In the Debian installer, select 'manual' partitioning.
Create a 500 MB ESP partition at the beginning of one of the drives.
Create a 500 MB unused partition at the beginning of the other drive.
Create another partition on each drive covering the remainder of the free space; select 'physical volume for RAID' as the use-as option.
Then select 'Configure software RAID' and create a new md device using the two for-RAID partitions created previously.
Finish partitioning and resume the install (select 'no' for 'go back and add swap', for the sake of brevity).
Debian 10 will finish installing; at the end of the install process it will ask to reboot. This boot will succeed.
Once booted into the new environment, run 'efibootmgr -v -u' to see the Debian boot entry. Run 'grub-install' as root and then 'update-grub2'; run 'efibootmgr -v -u' again, and the Debian entry is still present and appears unchanged. Shut down the VM.
Now, when you power it on again, it will fail to boot: it will exit to the grub prompt, and only one drive will be visible to grub, preventing it from assembling the md device to continue loading config from its root fs.

Type 'exit' at the grub prompt, then 'Continue' in the OVMF boot manager menu; grub starts again, but this time it can see both drives and will boot as expected. However, this won't persist across reboots or shutdowns: on the next reboot or shutdown it will drop out at the grub prompt again.
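
(A sketch of the recovery dance described above; device names are illustrative:)

Code:
grub> ls
(hd0) (hd0,gpt1) (hd0,gpt2)    # only one drive visible on the failing boot
grub> exit
# the OVMF boot manager menu appears; select 'Continue'
# grub restarts, now sees both drives, and the system boots normally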
 
This is still an issue in 2022... the Home Assistant VM will suddenly stop booting with this "failed to load Boot001 UEFI QEMU not found" error. It's all related to the EFI disk, and none of the suggestions here help (in particular, the OVMF/UEFI Boot Entries article's "Boot Maintenance Manager" -> "Boot Options" just results in a blank screen).

Edit: finally found a fix in this thread! https://forum.proxmox.com/threads/vm-efi-boot-corrupt.82922/post-401452
 
