Hi everyone — I’m trying to build a clean Ubuntu 22.04 cloud-init template on a Proxmox cluster, using the official Ubuntu cloud image. I’m running into what seems like a firmware/image/boot-loader mismatch and would appreciate insight from folks who have successfully booted jammy-server-cloudimg-amd64.img under OVMF.
Environment
- Proxmox version / kernel: Linux pve-ms01 6.17.2-1-pve #1 SMP PREEMPT_DYNAMIC PMX 6.17.2-1 (2025-10-21T11:55Z) x86_64
- Cluster: 3-node cluster (MS-01 + 2× UM790), quorate
- Storage:
- local-lvm (LVM-thin) for VM disks and EFI disk
- local (Directory /var/lib/vz) for ISOs, etc.
- additional directory storage cloudinit (Directory) created for cloud-init metadata/snippets, to keep the cloud-init seed storage separate (added roughly as sketched after this list)
- Networking: VM has net0 on vmbr0 (LAN) and later net1 on vmbr1 (data VLAN), but the boot issue happens even before network configuration matters.
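For completeness, the cloudinit directory storage was added roughly like this (a sketch; the mount path is an assumption, and the storage needs the images content type so the cloud-init drive volume can live on it):
Bash:
# Hypothetical mount point; "images" content is required for the cloud-init drive
pvesm add dir cloudinit --path /mnt/pve/cloudinit --content images,snippets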
Goal
Create a working Ubuntu 22.04 template using the Ubuntu cloud image and Proxmox cloud-init (no ISO install), then clone 3 k8s nodes from it.
Image
Downloaded from Ubuntu cloud images:
- jammy-server-cloudimg-amd64.img (Ubuntu 22.04 “Jammy” cloud image)
What I did (steps)
1) Create a “shell VM” (VM 9000)
Created VM 9000 with:
- BIOS: OVMF (UEFI)
- Machine: q35
- SCSI controller: VirtIO SCSI single
- EFI storage: local-lvm
- No ISO
- net0: virtio on vmbr0
Bash:
qm importdisk 9000 /var/lib/vz/template/iso/jammy-server-cloudimg-amd64.img local-lvm
Attached the imported disk as:
- scsi0: local-lvm:vm-9000-disk-0 (the imported image)
- added CloudInit Drive on ide2 backed by the cloudinit directory storage
- enabled QEMU Guest Agent
- set boot order to scsi0 only
- converted VM 9000 to a template
- cloned it to create k8s-node-1 (the full CLI equivalent is sketched below)
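For reference, the whole sequence above (plus the clone) collapses to roughly this CLI. This is a sketch assuming my storage names; the template name is a placeholder, and vm-9000-disk-0 is simply the volume name importdisk printed:
Bash:
# Shell VM: OVMF firmware, q35 machine, VirtIO SCSI single, guest agent on
qm create 9000 --name ubuntu-2204-tmpl --machine q35 --bios ovmf \
  --scsihw virtio-scsi-single --net0 virtio,bridge=vmbr0 --agent enabled=1
# EFI vars disk on local-lvm (4m type, pre-enrolled Secure Boot keys)
qm set 9000 --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1
# Import the cloud image and attach it as scsi0
qm importdisk 9000 /var/lib/vz/template/iso/jammy-server-cloudimg-amd64.img local-lvm
qm set 9000 --scsi0 local-lvm:vm-9000-disk-0
# Cloud-init drive on the dedicated directory storage; boot from scsi0 only
qm set 9000 --ide2 cloudinit:cloudinit
qm set 9000 --boot order=scsi0
qm template 9000
qm clone 9000 100 --name k8s-node-1 --full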
2) Verify VM config
Here is the relevant VM config for k8s-node-1 (VMID 100), which appears correct:
Code:
agent: 1
bios: ovmf
boot: order=scsi0;net0
cpu: host
machine: q35 (from the UI)
efidisk0: local-lvm:vm-100-disk-0,efitype=4m,ms-cert=2023,pre-enrolled-keys=1,size=4M
scsihw: virtio-scsi-single
scsi0: local-lvm:vm-100-disk-1,cache=writeback,discard=on,iothread=1,size=32G
ide2: cloudinit:100/vm-100-cloudinit.qcow2,media=cdrom
net0: virtio=...,bridge=vmbr0
net1: virtio=...,bridge=vmbr1
ciuser: ubuntu
ipconfig0: ip=192.168.50.40/24,gw=192.168.50.1
ipconfig1: ip=10.50.0.30/24
nameserver: 192.168.50.1
sshkeys: ssh-ed25519 ...
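In case it helps with diagnosis, the exact QEMU command line Proxmox generates for this VM can be dumped on the node (100 being the clone’s VMID):
Bash:
qm showcmd 100 --pretty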
The problem occurs before cloud-init / networking becomes relevant.
Boot failure symptoms under OVMF (UEFI)
When starting k8s-node-1, the VM fails with:
- failed to load Boot0002 "UEFI QEMU QEMU HARDDISK" ... : Not Found
- BdsDxe: No bootable option or device was found.
The firmware then drops to the “Please select boot device” menu, which offers only:
- UEFI QEMU QEMU HARDDISK
- EFI Firmware Setup
- UEFI QEMU DVD-ROM QM00003
Boot Manager doesn’t expose “Boot from file”
Inside the firmware, “Boot Manager” only shows:
- UEFI QEMU QEMU HARDDISK
- UEFI QEMU DVD-ROM QM00003
No “Boot from file” option appears there.
Boot Maintenance Manager → Add Boot Option → File Explorer is empty
I can reach:
- EFI Firmware Setup → Boot Maintenance Manager → Boot Options → Add Boot Option
I tried the common “TAB then arrows” approach to switch filesystems; it never displayed any FS0/FS1 entries. The explorer remains blank.
So I cannot manually browse to something like EFI/ubuntu/grubx64.efi because no filesystem is visible.
Attempted workaround: switch to SeaBIOS
Given the difficulty with OVMF, I tried switching the BIOS to SeaBIOS (with machine type i440fx) as a workaround. However, SeaBIOS does not boot the imported disk either (still non-bootable). This makes me suspect the image is effectively UEFI-only, but that OVMF is failing to enumerate the disk/ESP.
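One thing I can still check from the node is whether the image actually carries a GPT with an EFI System Partition. A sketch of what I have in mind (virt-filesystems comes from libguestfs-tools, and the /dev/pve path assumes the default pve volume group behind local-lvm):
Bash:
# Inspect the original download
qemu-img info /var/lib/vz/template/iso/jammy-server-cloudimg-amd64.img
virt-filesystems -a /var/lib/vz/template/iso/jammy-server-cloudimg-amd64.img --all --long
# Inspect the imported LVM-thin volume directly (activate it first if needed)
lvchange -ay pve/vm-100-disk-1
fdisk -l /dev/pve/vm-100-disk-1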
Questions for the community
- Have others successfully booted jammy-server-cloudimg-amd64.img on Proxmox with OVMF/q35?
- If yes, what exact VM settings are required (BIOS, machine type, disk bus, EFI disk options, secure boot on/off, etc.)?
- Why would OVMF’s File Explorer show no filesystems (no FS0/FS1) when a bootable disk is present as scsi0?
- Is this indicative of a missing EFI System Partition, an incompatible partition layout, or something about VirtIO SCSI + q35?
- Is there a recommended, Proxmox-native “best practice” for Ubuntu cloud images (Jammy) regarding:
- VirtIO SCSI vs VirtIO block
- OVMF vs SeaBIOS
- whether an explicit EFI disk should be used or avoided
- any requirement to convert the image format or run qemu-img conversions first? (see the sketch after this list)
- Are there known issues with specific OVMF builds where the file explorer doesn’t enumerate filesystems?
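Regarding the conversion question above: the downloaded .img is already qcow2 despite its extension, so my assumption was that no explicit format conversion is needed before qm importdisk. A sketch of what I mean (the 32G target matches the scsi0 size shown earlier):
Bash:
# Confirm the container format of the download
qemu-img info jammy-server-cloudimg-amd64.img    # reports "file format: qcow2"
# Optionally grow the virtual size before import (or use qm resize afterwards)
qemu-img resize jammy-server-cloudimg-amd64.img 32G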
Thanks in advance!