[SOLVED] "Guest has not initialized the display (yet)" on new OVMF VMs after update to 7.0-13

jtbis
New Member
Nov 24, 2020
Code:
# pveversion --verbose
proxmox-ve: 7.0-2 (running kernel: 5.11.22-2-pve)
pve-manager: 7.0-13 (running version: 7.0-13/7aa7e488)
pve-kernel-helper: 7.1-2
pve-kernel-5.11: 7.0-8
pve-kernel-5.4: 6.4-1
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.11.22-2-pve: 5.11.22-4
pve-kernel-5.4.106-1-pve: 5.4.106-1
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve1
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.0-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-10
libpve-guest-common-perl: 4.0-2
libpve-http-server-perl: 4.0-3
libpve-storage-perl: 7.0-12
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.9-4
lxcfs: 4.0.8-pve2
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.0.11-1
proxmox-backup-file-restore: 2.0.11-1
proxmox-mini-journalreader: 1.2-1
proxmox-widget-toolkit: 3.3-6
pve-cluster: 7.0-3
pve-container: 4.1-1
pve-docs: 7.0-5
pve-edk2-firmware: 3.20210831-1
pve-firewall: 4.2-4
pve-firmware: 3.3-2
pve-ha-manager: 3.3-1
pve-i18n: 2.5-1
pve-qemu-kvm: 6.0.0-4
pve-xtermjs: 4.12.0-1
qemu-server: 7.0-16
smartmontools: 7.2-pve2
spiceterm: 3.2-2
vncterm: 1.7-1
zfsutils-linux: 2.0.5-pve

After the recent pve-manager update I am unable to get any video output in new OVMF VMs. VNCProxy appears to start successfully, but the only output is a black screen saying "Guest has not initialized the display (yet)". This happens regardless of guest OS, and even with no ISO loaded I can't get to the OVMF UEFI screen. If I switch to SeaBIOS, I get normal video output.
 

Attachment: pve.PNG
I have the same problem; this message appears after creating a new OVMF VM.

However, an OVMF VM that runs normally on another Proxmox VE host also runs fine after importing it to the new host.
 
What is the machine type of the VMs?
 
old Proxmox VE:
pveversion: 7.0-11 (I'm guessing the exact version, but it's definitely 7.0; upgraded from 6.0 to 7.0.)
VM machine type: i440fx
BIOS: OVMF

new Proxmox VE:
pveversion: 7.0-13
VM machine type: i440fx (just imported from the old host to the new host without changes)
BIOS: OVMF


The imported VM works, but a new OVMF VM created on the new host with the i440fx machine type will not start, matching the situation in this post.
 
Hi,
thanks for the report! There is already a patch on the mailing list to fix this (assuming it's the same issue). Until it finds its way into the packages, it should be possible to switch to machine type q35 to work around the issue.
Please also check the link. It's currently broken for machine type i440fx.
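For reference, the workaround can also be applied from the CLI with `qm set`; the VM ID 100 below is just a placeholder:

```shell
# Switch an existing VM to the q35 machine type as a workaround
qm set 100 --machine q35

# Once the fixed packages are installed, revert to the default (i440fx)
# by removing the explicit machine setting again:
qm set 100 --delete machine
```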
 
Sorry, changing the machine type to q35 does not work for me. I still get the "Guest has not initialized the display (yet)".

Trying to install Windows 11.

Any other workarounds available?

Cheers,
Boris

Code:
root@pve:~# pveversion
pve-manager/7.0-13/7aa7e488 (running kernel: 5.11.22-4-pve)

Code:
root@pve:~# qm config 111
agent: 1
audio0: device=ich9-intel-hda,driver=spice
balloon: 4096
bios: ovmf
boot: order=virtio0;sata0;sata1
cores: 8
efidisk0: local-zfs:vm-111-disk-2,efitype=4m,pre-enrolled-keys=1,size=1M
machine: pc-q35-6.0
memory: 8192
name: vbox11-Win11-Devel
net0: virtio=D2:C1:82:BB:A1:AE,bridge=vmbr0,firewall=1
net1: virtio=62:33:55:5D:0E:70,bridge=vmbr1,firewall=1
numa: 0
ostype: win10
rng0: source=/dev/urandom
sata0: local:iso/Win11_German_x64.iso,media=cdrom,size=5383520K
sata1: local:iso/virtio-win-0.1.204.iso,media=cdrom,size=543272K
scsihw: virtio-scsi-single
smbios1: uuid=8b4ef216-cc0c-4b99-b8cc-8675c57c0d13
sockets: 1
spice_enhancements: foldersharing=1,videostreaming=filter
tpmstate0: local-zfs:vm-111-disk-1,size=4M,version=v2.0
unused0: local-zfs:vm-111-disk-3
usb0: spice,usb3=1
vga: qxl,memory=256
virtio0: local-zfs:vm-111-disk-0,cache=writeback,discard=on,iothread=1,size=128G
vmgenid: 21b7b914-27c1-4b01-a053-f9739f6a1db2
 
While trying this out with another VM for Windows 10, I noticed that it worked. I therefore removed the previously created VM and recreated it from scratch with the same hardware and options. Now it boots. It seems that either the EFI disk or the TPM state held something that prevented the system from booting.

So please disregard my previous post.
 
It's me again.

Yesterday I was able to fully install Windows 11, including the virtio and SPICE drivers, update the Windows 11 installation, and even reboot several times. I left the machine running another update overnight. This morning, the VM does not fire up; it is stuck at "Guest has not initialized the display (yet)" again. There must be more to this than the broken i440fx virtual hardware.

Code:
agent: 1
audio0: device=ich9-intel-hda,driver=spice
balloon: 4096
bios: ovmf
boot: order=virtio0;sata1;sata2
cores: 8
efidisk0: local-zfs:vm-111-disk-1,efitype=4m,pre-enrolled-keys=1,size=1M
machine: pc-q35-6.0
memory: 8192
name: vbox11-Win11-Devel
net0: virtio=FA:58:7E:49:64:8A,bridge=vmbr0,firewall=1
net1: virtio=36:B1:C7:0B:9B:3D,bridge=vmbr1,firewall=1
numa: 0
ostype: win10
rng0: source=/dev/urandom
sata1: local:iso/Win11_German_x64.iso,media=cdrom,size=5383520K
sata2: local:iso/virtio-win-0.1.204.iso,media=cdrom,size=543272K
scsihw: virtio-scsi-single
smbios1: uuid=6852ca1d-d9f6-4df0-98ca-c5a32645ac0e
sockets: 1
tpmstate0: local-zfs:vm-111-disk-2,size=4M,version=v2.0
usb0: spice,usb3=1
vga: qxl,memory=256
virtio0: local-zfs:vm-111-disk-0,cache=writeback,discard=on,iothread=1,size=128G
vmgenid: 6e69590f-b489-4e18-9efc-5d847c26ad44
 
Hi,
sorry for the late response. I think I can reproduce the issue. Could you try using less memory for the display (128 MiB works for me) as a workaround?
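If it helps, the display memory can be lowered from the CLI as well; a sketch assuming VM 111 with the qxl display from the config above:

```shell
# Reduce the QXL display memory from 256 to 128 MiB as a workaround
qm set 111 --vga qxl,memory=128
```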
 
Thanks, Fabian, for the reply. Your reply and my activities crossed.

Yesterday I saw that the fix for the i440fx issue was available, so I did a full upgrade of the Proxmox server. Since my Windows 11 machine still worked with neither q35 nor i440fx, I threw it away and started an i440fx machine from scratch, this time with the virtio-gpu display (256 MB). This worked and has survived updates, driver installation, and several reboots so far. Thus, I can confirm that the original issue of failing installations on i440fx/UEFI VMs is solved.

For further exploration, I switched to the SPICE display (256 MB). That one still results in "Guest has not initialized the display (yet)" on boot. So there is indeed a second issue here.

I returned to the virtio-gpu display (256 MB), which is sufficient for me.
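For completeness, the switch to the virtio-gpu display can also be done on the CLI (again assuming VM 111):

```shell
# Use the virtio display type with 256 MiB of display memory
qm set 111 --vga virtio,memory=256
```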
 
I am still experiencing issues with the default VGA and OVMF. The VM is running OPNsense and refuses to reboot when the reboot is issued from within the VM; stopping the VM still works, and rebooting via Proxmox works fine.
 
I'm running into a similar issue. I was able to identify that if I remove the EFI disk, the boot process continues. If I add it again (to confirm the relation), the issue comes back.
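To make that test reproducible, detaching and re-attaching the EFI disk can be scripted with `qm set`; a sketch assuming VM ID 2011 and the volume name from the config below:

```shell
# Detach the EFI disk; it shows up as an "unused" disk, nothing is deleted
qm set 2011 --delete efidisk0

# Re-attach the same volume to confirm the correlation
qm set 2011 --efidisk0 DualTolrnc_cPool:vm-2011-disk-1,efitype=4m,pre-enrolled-keys=1
```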


Code:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.1-8 (running version: 7.1-8/5b267f33)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.11: 7.0-10
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.13.19-1-pve: 5.13.19-3
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.4.27-1-pve: 5.4.27-1
ceph: 16.2.7
ceph-fuse: 16.2.7
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: not correctly installed
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-network-perl: 0.6.2
libpve-storage-perl: 7.0-15
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-1
openvswitch-switch: not correctly installed
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-3
pve-container: 4.1-3
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-4
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3

Code:
agent: 1
balloon: 2048
bios: ovmf
boot: order=sata0;scsi0;sata1;net0
cores: 2
efidisk0: DualTolrnc_cPool:vm-2011-disk-1,efitype=4m,pre-enrolled-keys=1,size=1M
hotplug: disk,network,usb,memory,cpu
machine: pc-q35-6.1
memory: 16384
meta: creation-qemu=6.1.0,ctime=1642005277
name: GUYNPR1-MSWDC-NAF1
net0: virtio=AE:43:F8:CA:48:78,bridge=vmbr101,tag=11
numa: 1
onboot: 1
ostype: win11
sata0: Storage_fsPool:iso/Windows2022Server_Eval.iso,media=cdrom,size=5420734K
sata1: Storage_fsPool:iso/virtio-win_0.1.208.iso,media=cdrom,size=543390K
scsi0: DualTolrnc_cPool:vm-2011-disk-0,discard=on,iothread=1,size=32G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=ddf6f21b-2dc5-4b19-96be-5c884a72165a
sockets: 2
tpmstate0: DualTolrnc_cPool:vm-2011-disk-2,size=4M,version=v2.0
vmgenid: 72308f60-ca96-45ab-ac55-9e831667487c

Also, I'm confused why this is marked as solved. Should we open a new thread?
 
While reviewing the UEFI docs (https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_virtual_machines_settings), Sec. 10.2.10, I found something interesting:
When using OVMF with a virtual display (without VGA passthrough), you need to set the client resolution in the OVMF menu (which you can reach with a press of the ESC button during boot), or you have to choose SPICE as the display type.
I did that, started the VM, and it worked. After that I changed it back to default and the VM continued to reboot properly. As a result, I was not able to test whether pressing the ESC button during boot to set a client resolution would fix the issue (based on the behavior I don't think it would, but...).

Maybe someone could assist with that test...

Reviewing my procedure, I identified an additional change that reproducibly causes the issue: I had enabled Memory and CPU under Options/Hotplug, and those (or one of them) cause the issue. After I removed the check mark, the VM booted properly.

Really sorry for making multiple changes at once! I will try later to test step by step to pinpoint the issue; maybe it is one of those already mentioned, or the combination...
 
Hi,
I suppose because the thread author marked it as such. And the original problem should be solved since quite a bit. Opening a new thread is preferred if you're not sure it's the same issue.

yes, I can reproduce the issue when memory hotplug and NUMA (which is required for it) are enabled. NUMA alone works. This time, changing machine type to i440fx (rather than the other way around) seems to work around the issue. I created a bug report for it.
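To toggle this from the CLI for testing (assuming VM ID 2011 as in the config above):

```shell
# Drop memory and cpu from the hotplug list, keeping the other categories
qm set 2011 --hotplug disk,network,usb

# Verify the resulting settings
qm config 2011 | grep -E '^(hotplug|numa|machine):'
```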
 
Thank you very much!
 

I just tried the various workarounds mentioned in this thread:

- Switched machine types back and forth (i440fx / q35).
- Changed the hotplug settings.
- Changed the graphics device and graphics memory.
- Enabled/disabled NUMA.
- Removed the EFI disk (only detached it, didn't delete it).

But nothing worked. The machine always gets stuck on a reboot issued from within the VM.
Proxmox commands work fine.
 