Cannot boot a newly created VM with OVMF (UEFI) under PVE 6.3-6

alex.tls

Member
Oct 17, 2020
Currently, my PVE is at the version below:

Code:
pve-qemu-kvm   5.2.0-5      amd64        Full virtualization on x86 hardware
pve-manager/6.3-6/2184247e (running kernel: 5.4.106-1-pve)

I tried to install Windows 10 v20H2 on a newly created VM configured with OVMF (UEFI).
But it fails to boot: in the VNC console, the cursor freezes in the top-left corner of the screen (see the screenshot) and it never boots into the OS.

0405.PNG

Below is some info from the syslog, but these messages just repeat in a loop.

Code:
Apr 05 17:17:23 pve1 pveproxy[14040]: worker exit
Apr 05 17:17:23 pve1 pveproxy[3457]: worker 14040 finished
Apr 05 17:17:23 pve1 pveproxy[3457]: starting 1 worker(s)
Apr 05 17:17:23 pve1 pveproxy[3457]: worker 17945 started
Apr 05 17:17:52 pve1 pvedaemon[3449]: worker exit
Apr 05 17:17:52 pve1 pvedaemon[3447]: worker 3449 finished
Apr 05 17:17:52 pve1 pvedaemon[3447]: starting 1 worker(s)
Apr 05 17:17:52 pve1 pvedaemon[3447]: worker 18027 started

Is this a bug?
 
Post the output of:

> pveversion -v
> qm config yourVMID
 
Please see below:

> pveversion -v

Code:
root@pve1:~# pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.106-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-8
pve-kernel-helper: 6.3-8
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.78-1-pve: 5.4.78-1
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.8
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-5
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.3-1
proxmox-backup-client: 1.0.12-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-9
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-5
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-10
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1

> qm config yourVMID
Code:
root@pve1:~# qm config 402
balloon: 0
bios: ovmf
boot: order=ide2;virtio0;net0
cores: 2
efidisk0: wpool:vm-402-disk-0,size=1M
hotplug: disk,network,usb,memory,cpu
ide0: local:iso/virtio-win-0.1.189.iso,media=cdrom,size=488766K
ide2: local:iso/Windows20H2.iso,media=cdrom
machine: pc-i440fx-5.2
memory: 4096
name: WIN01
net0: virtio=A2:B4:28:E3:B8:EE,bridge=vmbr2,firewall=1
numa: 1
ostype: win10
scsihw: virtio-scsi-pci
smbios1: uuid=4a77770b-58e5-4ef7-a6ed-f6b43a5f1298
sockets: 2
virtio0: local-zfs:vm-402-disk-0,size=60G
vmgenid: 9a1d0d7a-3fe8-42f8-9208-d80fd1dc606b
 
I found it. Under the current version:

In OVMF (UEFI) mode, if I enable hotplug for CPU and memory, then booting fails.
In SeaBIOS mode, if I enable hotplug for CPU and memory, then booting succeeds.
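As a workaround, CPU and memory can be dropped from the hotplug list with `qm set` (a sketch, assuming VM ID 402 from the config posted above):

```shell
# Remove "memory" and "cpu" from the hotplug list so the OVMF VM can boot.
# Assumes VM ID 402 from the config above.
qm set 402 --hotplug disk,network,usb

# Verify the change took effect:
qm config 402 | grep ^hotplug
```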
 
Not the same issue, but it should go under the same thread:

PVE doesn't show me the correct RAM usage: the real usage is 1.4 GB, but PVE shows 3.19 GB.
I already have qemu-ga running inside my VM and have enabled "Use QEMU Guest Agent" in the VM configuration. See the screenshot below.

0405003.PNG
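For reference, the guest agent option can also be enabled and checked from the host CLI (a sketch, assuming VM ID 402 from the config above):

```shell
# Enable the QEMU Guest Agent option for the VM (assumes VM ID 402):
qm set 402 --agent enabled=1

# Ping the agent inside the guest; exits 0 with no output if qemu-ga responds:
qm agent 402 ping
```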
 
That simply looks like your balloon service isn't running...
 
Try restarting it. Or perhaps you are using an outdated virtio ISO? I'm using 0.190 here and it works fine; 0.185 should work fine too.
But for the memory sync, only this ballooning service is responsible.
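One way to see what balloon value the host side reports is the QEMU monitor (a sketch; assumes VM ID 402, and the `blnsvr.exe` path depends on the virtio-win ISO version you mounted):

```shell
# On the PVE host: open the QEMU monitor for the VM (assumes VM ID 402)
qm monitor 402
# then, at the "qm>" prompt, type:
#   info balloon
# which prints the current balloon value in MiB, e.g. "balloon: actual=4096".

# Inside the Windows guest (elevated prompt), the balloon service ships on the
# virtio-win ISO and is installed with (path varies by ISO version):
#   D:\Balloon\w10\amd64\blnsvr.exe -i
```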
 
Well, I don't know how the ballooning driver communicates with the host; probably through the serial interface too. But your serial interface looks like it's working, because PVE shows the IP of the VM.
So yeah, I can't help any further. Maybe someone else can...
 
