SeaBIOS fails to boot with Display=none, and OVMF fails to read data from VirtIO disks?

sherrellbc (New Member)
Jan 11, 2025
I have a VM using GPU passthrough, and it works fine. I can see the GPU inside the VM and interact with it generally. In an effort to use this GPU as my primary display, I need to _disable_ the implicit video device that the Proxmox firmware supplies to the VM, so I changed `Hardware` > `Display` = `none` in the VM instance settings.
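For reference, the CLI equivalent is something like this (a sketch; VM ID 100 is a placeholder):

```
# Disable the emulated display device for the VM
qm set 100 --vga none
```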

At this point the VM no longer boots. It just spins, consuming ~12% CPU and ~100% memory.

[Attachment: Screenshot 2025-01-11 at 3.21.10 PM.png]

This is obviously very bad, so I went through many gyrations of adjusting my GRUB and Linux boot options to try to figure out which (clearly) video-related setting was causing trouble. I'm doing this headless since, recall, there is no display. But adding a virtual serial port and using `qm terminal` yielded nothing, which gave me an idea.
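For anyone repeating this, the serial setup was roughly the following (VM ID 100 is a placeholder; the guest also needs a console on ttyS0, e.g. `console=ttyS0` on its kernel command line, before it will print anything there):

```
# On the Proxmox host: attach a virtual serial port, then connect to it
qm set 100 --serial0 socket
qm terminal 100
```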

On a whim I tried OVMF firmware by changing `Hardware` > `BIOS` = `OVMF` (previously `SeaBIOS`, the default). To my surprise, the _behavior_ was the same as above (~12% CPU and ~100% memory), but the video output on a physical monitor actually worked. The firmware was running, but now it can't read my boot drive?
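The CLI equivalent is roughly this (VM ID and storage name are placeholders; OVMF also wants a small EFI vars disk to persist its settings):

```
# Switch the firmware to OVMF and give it an EFI vars disk
qm set 100 --bios ovmf
qm set 100 --efidisk0 local-lvm:1,efitype=4m
```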

[Attachment: Screenshot 2025-01-11 at 3.27.13 PM.png]

It can clearly read the _partition table_, but according to OVMF the contents of the drive are empty. This doesn't make much sense.
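For reference, this is roughly how I was poking at the disk from the UEFI shell (standard shell commands; an `FS` mapping only appears for filesystems the shell has a driver for):

```
Shell> map -r   # rescan handles and list the BLK/FS mappings
Shell> fs0:     # switch to the first filesystem mapping, if one exists
FS0:\> ls       # list its contents
```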

Can anyone give me insight into what's going on here? SeaBIOS boots only with a valid `Display`; OVMF seems to work with `Display=none` but fails because it can't find any media on the boot disk. Are these known issues? Can OVMF not boot from VirtIO disks?

The VM config itself is fairly vanilla.

[Attachment: Screenshot 2025-01-11 at 3.30.30 PM.png]
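For concreteness, a config of this shape looks roughly like the following (illustrative values only, not my literal output; the VM ID, PCI address, MAC, and storage names are placeholders):

```
# qm config 100
bios: seabios
boot: order=virtio0
cores: 4
hostpci0: 0000:01:00,x-vga=1
memory: 8192
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
ostype: l26
vga: none
virtio0: local-lvm:vm-100-disk-0,size=32G
```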

Recap:

SeaBIOS
- Boots VM fine with valid `Display` setting
- Hangs with high resource usage when `Display=none`
- With `Display=none` during the hang, the GPU outputs are not used (not that surprising in itself), so the screen is just black with no feedback about what's going on

OVMF
- Also hangs during boot with high resource usage (per the summary page)
- Connected GPU outputs are used, showing the UEFI shell as shown above
- The VirtIO boot disk reads back with zero file content, but the UEFI shell seems to properly reflect the partition table layout in its BLK0,1,2,3 `map` output
- Setting `Display=none` versus a valid `Display` setting does not change this boot behavior; it fails the same way in either case

The VM under test is just the latest Debian Bookworm.
 
The memory use is normal. With passthrough, all memory assigned to the VM has to be pre-allocated in case the device does DMA to it. Don't know about the other problem, but i440fx is a really old chipset that doesn't support PCIe. Maybe q35 would work better?
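Switching machine type is a one-liner if you want to test it (VM ID is a placeholder; note the guest may re-enumerate its PCI devices afterwards):

```
# Switch the virtual chipset from the default i440fx to q35
qm set 100 --machine q35
```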

ETA: Also, you can't just switch between BIOS and EFI booting. EFI requires a FAT-formatted EFI System Partition to store its boot loader(s). That won't exist if you installed the VM with BIOS boot.
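You can check from inside the guest whether such a partition exists (a sketch; `/dev/vda` is the usual name for the first VirtIO disk):

```
# Look for a FAT-formatted "EFI System" partition on the boot disk.
# If there isn't one, the install is BIOS/legacy-boot only.
lsblk -o NAME,SIZE,FSTYPE,PARTTYPENAME
fdisk -l /dev/vda
```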
 
I figured as much, regarding the memory. It was common to both cases, so I wasn't too worried about it.

As for q35, I did try that and the disks still read back as empty, as shown. And as far as the EFI and legacy difference goes, I do understand. I was more just looking to debug the hanging SeaBIOS boot in general and stumbled onto the "disks are empty" behavior from OVMF.
 
