Ubuntu Jammy cloud image won’t boot under OVMF in Proxmox (File Explorer empty / no FS0/FS1; SeaBIOS also fails) — seeking insight

ahetman

New Member
Feb 8, 2026
Hi everyone — I’m trying to build a clean Ubuntu 22.04 cloud-init template on a Proxmox cluster, using the official Ubuntu cloud image. I’m running into what seems like a firmware/image/boot-loader mismatch and would appreciate insight from folks who have successfully booted jammy-server-cloudimg-amd64.img under OVMF.

Environment

  • Proxmox version / kernel: Linux pve-ms01 6.17.2-1-pve #1 SMP PREEMPT_DYNAMIC PMX 6.17.2-1 (2025-10-21T11:55Z) x86_64
  • Cluster: 3-node cluster (MS-01 + 2× UM790), quorate
  • Storage:
    • local-lvm (LVM-thin) for VM disks and EFI disk
    • local (Directory /var/lib/vz) for ISOs, etc.
    • additional directory storage cloudinit (Directory) created for cloud-init metadata/snippets, to keep cloud-init seed storage separate (a sketch of how it was added is below this list)
  • Networking: VM has net0 on vmbr0 (LAN) and later net1 on vmbr1 (data VLAN), but the boot issue happens even before network configuration matters.
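
For reference, the cloudinit directory storage was added roughly like this; the path is illustrative rather than the exact one on my nodes:
Bash:
# Directory storage for cloud-init drives: content "images" lets the small
# per-VM cloud-init volume live here; "snippets" allows custom user-data later.
pvesm add dir cloudinit --path /mnt/pve/cloudinit --content images,snippets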

Goal

Create a working Ubuntu 22.04 template using the Ubuntu cloud image and Proxmox cloud-init (no ISO install), then clone 3 k8s nodes from it.

Image

Downloaded from Ubuntu cloud images:
  • jammy-server-cloudimg-amd64.img (Ubuntu 22.04 “Jammy” cloud image)
I downloaded it to the Proxmox host and imported it with qm importdisk.
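
For completeness, the download itself (this is the standard Ubuntu cloud-images URL):
Bash:
wget -P /var/lib/vz/template/iso \
  https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img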

What I did (steps)

1) Create a “shell VM” (VM 9000)

Created VM 9000 with:
  • BIOS: OVMF (UEFI)
  • Machine: q35
  • SCSI controller: VirtIO SCSI single
  • EFI storage: local-lvm
  • No ISO
  • net0: virtio on vmbr0
Then imported the cloud image:
Bash:
qm importdisk 9000 /var/lib/vz/template/iso/jammy-server-cloudimg-amd64.img local-lvm

Attached the imported disk as:
  • scsi0: local-lvm:vm-9000-disk-0 (the imported image)
Removed the CD/DVD device to free up IDE slots, then:
  • added CloudInit Drive on ide2 backed by the cloudinit directory storage
  • enabled QEMU Guest Agent
  • set boot order to scsi0 only
  • converted VM 9000 to a template
  • cloned it to create k8s-node-1
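
For reproducibility, the whole step can be scripted with qm. This is roughly the equivalent sequence (the template name is illustrative, and the imported volume index may differ depending on what is already allocated, so check the qm importdisk output):
Bash:
qm create 9000 --name jammy-cloud-tmpl --bios ovmf --machine q35 \
  --scsihw virtio-scsi-single --net0 virtio,bridge=vmbr0
qm set 9000 --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1
qm importdisk 9000 /var/lib/vz/template/iso/jammy-server-cloudimg-amd64.img local-lvm
qm set 9000 --scsi0 local-lvm:vm-9000-disk-1    # index may differ; see importdisk output
qm set 9000 --ide2 cloudinit:cloudinit          # cloud-init drive on the "cloudinit" storage
qm set 9000 --agent 1 --boot order=scsi0
qm template 9000
qm clone 9000 100 --name k8s-node-1 --full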

2) Verify VM config

Here is the relevant VM config for k8s-node-1 (VMID 100), which appears correct:
Bash:
agent: 1
bios: ovmf
boot: order=scsi0;net0
cpu: host
machine: q35 (from the UI)
efidisk0: local-lvm:vm-100-disk-0,efitype=4m,ms-cert=2023,pre-enrolled-keys=1,size=4M
scsihw: virtio-scsi-single
scsi0: local-lvm:vm-100-disk-1,cache=writeback,discard=on,iothread=1,size=32G
ide2: cloudinit:100/vm-100-cloudinit.qcow2,media=cdrom
net0: virtio=...,bridge=vmbr0
net1: virtio=...,bridge=vmbr1
ciuser: ubuntu
ipconfig0: ip=192.168.50.40/24,gw=192.168.50.1
ipconfig1: ip=10.50.0.30/24
nameserver: 192.168.50.1
sshkeys: ssh-ed25519 ...
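
(For anyone comparing: the dump above can be reproduced with qm config, or read straight from the config file.)
Bash:
qm config 100
# or: cat /etc/pve/qemu-server/100.conf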

The problem occurs before cloud-init / networking becomes relevant.

Boot failure symptoms under OVMF (UEFI)

When starting k8s-node-1, the VM fails with:
  • failed to load Boot0002 "UEFI QEMU QEMU HARDDISK" ... : Not Found
  • BdsDxe: No bootable option or device was found.
If I get the boot menu, I see:

“Please select boot device”
  1. UEFI QEMU QEMU HARDDISK
  2. EFI Firmware Setup
  3. UEFI QEMU DVD-ROM QM00003
Choosing the harddisk loops back into boot failure.

Boot Manager doesn’t expose “Boot from file”

Inside firmware, “Boot Manager” only shows:
  1. UEFI QEMU QEMU HARDDISK
  2. UEFI QEMU DVD-ROM QM00003

No “Boot from file” option appears there.

Boot Maintenance Manager → Add Boot Option → File Explorer is empty

I can reach:
  • EFI Firmware Setup → Boot Maintenance Manager → Boot Options → Add Boot Option
But the File Explorer view is completely empty. It only shows esc=exit at the bottom, with no filesystem entries (no FS0/FS1) and no ability to browse a disk.

I tried the common “TAB then arrows” approach to switch filesystems; it never displayed any FS0/FS1 entries. The explorer remains blank.

So I cannot manually browse to something like EFI/ubuntu/grubx64.efi because no filesystem is visible.
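
For what it's worth, a further check would be the firmware's internal EFI Shell, if the OVMF build exposes it in the boot menu; map is the standard shell command for listing filesystem mappings:
Bash:
# At the EFI Shell prompt (not bash): refresh and list all mappings.
# A disk with a readable ESP would show up as FS0:.
map -r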

Attempted workaround: switch to SeaBIOS

Given the difficulty with OVMF, I tried switching BIOS to SeaBIOS (and i440fx) as a workaround. However, SeaBIOS also does not boot this imported disk (still non-bootable).

This makes me suspect the image is effectively "UEFI-only" in practice, but that OVMF is failing to enumerate the disk/ESP.
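
In case it helps diagnosis, the imported disk can also be inspected from the host to see whether a GPT/ESP is present at all. Taking "pve" as the volume group behind local-lvm (the default; confirm with lvs):
Bash:
lvchange -ay pve/vm-100-disk-1    # activate the thin LV if it is not active
fdisk -l /dev/pve/vm-100-disk-1   # a healthy image shows a GPT with an "EFI System" partition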

Questions for the community

  1. Have others successfully booted jammy-server-cloudimg-amd64.img on Proxmox with OVMF/q35?
    • If yes, what exact VM settings are required (BIOS, machine type, disk bus, EFI disk options, secure boot on/off, etc.)?
  2. Why would OVMF’s File Explorer show no filesystems (no FS0/FS1) when a bootable disk is present as scsi0?
    • Is this indicative of missing EFI System Partition, incompatible partition layout, or something about VirtIO SCSI + q35?
  3. Is there a recommended, Proxmox-native “best practice” for Ubuntu cloud images (Jammy) regarding:
    • VirtIO SCSI vs VirtIO block
    • OVMF vs SeaBIOS
    • whether an explicit EFI disk should be used or avoided
    • any requirement to convert the image format or run qemu-img conversions first? (example of what I mean below the list)
  4. Are there known issues with specific OVMF builds where the file explorer doesn’t enumerate filesystems?
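
(To make the last bullet of question 3 concrete, I mean a pre-conversion along these lines, with illustrative filenames:)
Bash:
qemu-img convert -f qcow2 -O raw jammy-server-cloudimg-amd64.img jammy-server-cloudimg-amd64.raw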
Any pointers to working configurations, known bugs, or recommended alternative images (or steps like ensuring an ESP exists / using a different image variant) would be hugely appreciated.


Thanks in advance!
 
Even though the file is named .img, the disk is actually a qcow2 file. I assume your import command treats it as a raw file, which then gets converted to qcow2. Just rename the file extension to .qcow2 before importing and it should work; at least it did in my quick test.
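
You can verify the real format with qemu-img:
Bash:
qemu-img info jammy-server-cloudimg-amd64.img
# reports "file format: qcow2" despite the .img extension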
 
Thank you so much @fba, this was exactly the issue!

You were spot on that the Ubuntu .img cloud image is actually qcow2, and that importing it as raw caused Proxmox to mis-handle the disk during conversion. Renaming the file to .qcow2 before running qm importdisk completely resolved the problem.
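
For anyone else hitting this, the corrected import sequence was simply:
Bash:
mv /var/lib/vz/template/iso/jammy-server-cloudimg-amd64.img \
   /var/lib/vz/template/iso/jammy-server-cloudimg-amd64.qcow2
qm importdisk 9000 /var/lib/vz/template/iso/jammy-server-cloudimg-amd64.qcow2 local-lvm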

After re-importing the image with the correct extension, the VM booted immediately under OVMF/q35 with no EFI errors, and the filesystem was properly enumerated. This also explains why OVMF's File Explorer was completely empty and why SeaBIOS failed as well: treating the qcow2 file as raw meant the qcow2 container bytes were copied verbatim onto the target volume, so the guest saw qcow2 metadata at sector 0 instead of a partition table, and neither firmware could find an ESP or MBR.

Really appreciate you taking the time to test this and point it out. It saved us a lot of head-scratching and firmware debugging. Hopefully this helps the next person who runs into the same issue.
 