[SOLVED] Windows VM (OVMF BIOS) stuck at 800x600 resolution


Active Member
Dec 29, 2018
Hi! I'm trying to recreate one of my Windows templates, which uses the OVMF BIOS with the resolution set to 1920x1080. The problem I'm having with this new VM is that after I set the resolution to 1920x1080 from within the BIOS, it resets back to 800x600 on boot. If I check the BIOS, the resolution still says it's set to 1080p. However, if I reset the VM from the BIOS or reboot from within the VM, the resolution then changes from 800x600 to 1920x1080. Any ideas on how I can get the resolution to stick across boots? Thanks!

PVE Versions:
proxmox-ve: 6.1-2 (running kernel: 5.3.18-2-pve)
pve-manager: 6.1-8 (running version: 6.1-8/806edfe1)
pve-kernel-helper: 6.1-7
pve-kernel-5.3: 6.1-5
pve-kernel-4.15: 5.4-14
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-4.15.18-26-pve: 4.15.18-54
pve-kernel-4.15.18-9-pve: 4.15.18-30
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-5
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-22
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-6
pve-ha-manager: 3.0-9
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-5
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-7
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

VM Config:
agent: 1
bios: ovmf
bootdisk: scsi0
cores: 2
cpu: host
efidisk0: local-zfs:vm-112-disk-1,size=1M
machine: q35
memory: 4096
name: TEST2
net0: virtio=86:AC:04:1C:29:C8,bridge=vmbr0
numa: 1
ostype: win10
scsi0: local-zfs:vm-112-disk-0,cache=writeback,discard=on,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=ce85117b-a35e-42cc-8cfb-64352bd843d2
sockets: 2
vmgenid: f9bcf4e8-d529-46fc-bac3-5c14326750ef
There was a bug where EFI disks on non-file-based storages would not be correctly included.
The fix is currently only in git. The workaround is to put the EFI disk on a file-based storage (e.g. as a .raw or .qcow2 file), but not by moving the disk: remove it, recreate it, and set your settings again
(this can include the guest OS EFI loader settings).
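The remove-and-recreate workaround above can be done from the PVE host shell; a minimal sketch, assuming VMID 112 (as in the config above) and a directory storage named "local" (substitute your own storage ID):

```shell
# Back up the VM first: removing the EFI disk discards its contents,
# including NVRAM boot entries and the stored resolution setting.

# Detach the existing EFI disk (it becomes an unusedX entry, which can
# then be removed via the GUI's Hardware panel or another qm set --delete):
qm set 112 --delete efidisk0

# Recreate the EFI disk on a file-based storage ("local" is an assumed
# directory storage here), so it is stored as a raw file, not a zvol:
qm set 112 --efidisk0 local:1,format=raw

# Boot the VM, re-enter the OVMF setup, and set the resolution again.
```

The key point is that the disk is recreated on the new storage rather than moved, so the buggy code path for non-file-based storages is avoided entirely.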
Thanks! I found a bug in the bug tracker which matches my issue, and I found the fix that you made for it. Any ETA for when this will hit the test repo or the stable one? I found somewhere that the fix was coming in version 6.1-12. Also, I used to be able to create 128K EFI disks with ZFS, although now it seems to limit them to a minimum of 1M.
Any ETA for when this will hit the test repo or the stable one?
No ETA, but probably soon.

I used to be able to create 128K EFI disks with ZFS, although now it seems to limit them to a minimum of 1M
Zvols always had a minimum size of 1M, but we did not set it in the config correctly (the "size" parameter in the config is only informational).
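The actual zvol size can be checked directly with ZFS; a minimal sketch, assuming the dataset path rpool/data/vm-112-disk-1 from a default PVE ZFS setup (your pool and dataset names may differ):

```shell
# The "size=1M" in the VM config is informational only; the zvol's
# volsize property is the authoritative size:
zfs get -H -o value volsize rpool/data/vm-112-disk-1
```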
Ah, gotcha. Thanks for the info!

EDIT: For future reference, this has been fixed as of PVE 6.2
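To confirm a host already carries the fix, the installed versions can be compared against the 6.2 release; a hedged check (the thread pins the fix to PVE 6.2, not to a specific package version):

```shell
# Show the manager and qemu-server package versions; both should
# report 6.2 or later on a fixed host:
pveversion -v | grep -E '^(pve-manager|qemu-server):'
```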

