VM not starting with 512MB RAM

SergeyMorozov

Member
Sep 21, 2020
After one of the recent updates, a Windows VM stopped working when the memory size is 512 MB and the host CPU type is selected.
Screenshot 2023-03-14 222759.png
Screenshot 2023-03-14 222843.png
If I set the CPU to kvm64 or give the VM 1 GB of RAM, it starts normally.
Host CPU: Ryzen 5 3600
 

Hi,
please share the output of pveversion -v and qm config 504. In /var/log/apt/history.log, you can see which packages got updated. Please try to downgrade pve-qemu-kvm to the version before the upgrade with apt install pve-qemu-kvm=w.x.y-z and see if it works with that. If it is a regression, I'll see if I can reproduce and debug it further.
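The previous version can be read out of apt's history log. A small sketch (the helper name is mine; the "Upgrade:" line format is the one Debian's apt writes, i.e. "Upgrade: pkg:arch (old-version, new-version), ..."):

```shell
# Extract the pre-upgrade pve-qemu-kvm version from an apt history log.
# Assumes amd64 and the standard apt history.log "Upgrade:" format.
previous_version() {
  grep -o 'pve-qemu-kvm:amd64 ([^)]*)' "$1" | tail -n 1 |
    sed 's/.*(\([^,]*\),.*/\1/'
}
# Usage on the host:
#   previous_version /var/log/apt/history.log
#   apt install pve-qemu-kvm=<printed version>
```

Rotated logs (history.log.1, history.log.*.gz) may hold the relevant entry if the upgrade was a while ago.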
 
pveversion -v
Code:
proxmox-ve: 7.3-1 (running kernel: 5.15.85-1-pve)
pve-manager: 7.3-6 (running version: 7.3-6/723bb6ec)
pve-kernel-helper: 7.3-7
pve-kernel-5.15: 7.3-2
pve-kernel-5.13: 7.1-9
pve-kernel-5.15.85-1-pve: 5.15.85-1
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.3-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-2
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-6
libpve-storage-perl: 7.3-2
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
openvswitch-switch: 2.15.0+ds1-2+deb11u2.1
proxmox-backup-client: 2.3.3-1
proxmox-backup-file-restore: 2.3.3-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.5.5
pve-cluster: 7.3-2
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20221111-1
pve-firewall: 4.2-7
pve-firmware: 3.6-4
pve-ha-manager: 3.5.1
pve-i18n: 2.8-3
pve-qemu-kvm: 7.2.0-7
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1
qm config 504
Code:
bios: ovmf
boot: order=ide0;ide2;net0
cores: 1
cpu: host
efidisk0: local-zfs:vm-504-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
ide0: local-zfs:vm-504-disk-1,discard=on,size=32G
ide2: local:iso/en_windows_server_version_20h2_updated_jan_2021_x64_dvd_33fa1034.iso,media=cdrom,size=3594270K
machine: pc-q35-7.2
memory: 512
meta: creation-qemu=7.2.0,ctime=1678867641
name: test
net0: e1000=36:12:9D:FC:62:1C,bridge=vmbr0,firewall=1
numa: 0
ostype: win11
scsihw: virtio-scsi-single
smbios1: uuid=3fada92f-648a-44d3-8f10-044f29975789
sockets: 1
tpmstate0: local-zfs:vm-504-disk-2,size=4M,version=v2.0
vmgenid: 35c92dc9-9cc9-4dd3-8461-7969e3962e9e
The problem is gone after downgrading pve-edk2-firmware to version 3.20220526-1.
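For reference, the downgrade plus a hold so a routine upgrade does not pull the regressed version back in (the version is the one reported above; the hold step is my addition and optional):

```shell
# Downgrade the firmware package to the last known-good version,
# then hold it until a fixed build is available.
apt install pve-edk2-firmware=3.20220526-1
apt-mark hold pve-edk2-firmware
# Later, to resume normal updates:
#   apt-mark unhold pve-edk2-firmware && apt upgrade
```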
 
  • Like
Reactions: fiona and leesteken
I can reproduce the issue (i.e. stuck at Guest has not initialized the display (yet)) with an Ubuntu VM with 512 MiB RAM and CPU type host (boots fine with kvm64). Will try and find out what changed in pve-edk2-firmware. Maybe it's a bug or maybe they simply require more memory now for some, hopefully good, reason.
 
@leesteken Are you also using CPU type host?
Two VMs (with PCIe passthrough) use host (Ryzen 5950X), and they only fail with "out of memory" when memory hotplug is enabled. Changing host to kvm64 makes memory hotplug work again. Using EPYC or EPYC-IBPB also works, but EPYC-Rome does not (and my CPU does not support EPYC-Milan).
One VM (with PCIe passthrough) uses EPYC-Rome and does not use memory hotplug. Changing to kvm64 (or EPYC) does not help here; it just needs a little more memory.

Maybe the issue of needing slightly more memory is different from the hotplug problem on EPYC-Rome and above? The out-of-memory error is always fixed by downgrading pve-edk2-firmware.

EDIT: A nested Proxmox with memory hotplug enabled (without PCIe passthrough) does work fine with EPYC-Rome.
 
Last edited:
Bisecting yielded the following commit as the first bad one: https://github.com/tianocore/edk2/commit/bbda386d25e5316445a9bd67c45b47ce248eeb25
AFAIU, the old method just guessed the PhysMemAddressWidth from the amount of assigned memory. Now that there is better detection, a larger value is used. And that seems to lead to requiring more memory. I sent a mail to upstream asking if that is expected and if there is a workaround.
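As a back-of-the-envelope illustration (my arithmetic, not edk2's actual allocator) of why a larger PhysMemAddressWidth costs memory: OVMF identity-maps the guest-physical address space, so the page-table footprint grows with 2^width. Assuming 1 GiB pages:

```shell
# Rough lower bound on the page-table memory needed to identity-map
# 2^width bytes with 1 GiB pages (4 KiB per table, 512 entries each).
# Illustrative only -- OVMF's real allocations differ.
tables_kib() {
  width=$1
  gib_mappings=$(( 1 << (width - 30) ))   # number of 1 GiB entries
  pdpt=$(( (gib_mappings + 511) / 512 ))  # PDPT pages holding them
  pml4=1                                  # one top-level page
  echo $(( (pdpt + pml4) * 4 ))           # KiB of page-table pages
}
tables_kib 36   # a width guessed from a small VM's RAM -> 8 KiB
tables_kib 48   # a typical detected host width -> 2052 KiB (~2 MiB)
```

This ignores 2 MiB/4 KiB page splits and any DXE bookkeeping, but it shows the direction: moving from a width guessed off 512 MiB of assigned memory to the host's full reported width multiplies the mapping cost, which hurts exactly the VMs with the least memory.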
 
Now that there is better detection, a larger value is used. And that seems to lead to requiring more memory. I sent a mail to upstream asking if that is expected and if there is a workaround.
Thank you for chasing this down. It makes sense that it takes memory to manage (potentially a lot of) memory. It would be nice to be able to set a fixed (not hotplugged) minimum and a maximum (hotpluggable) memory size :cool:.
EDIT: This might explain the dependency on the virtual CPU type as well: EPYC-Rome might support more memory than EPYC-IBPB. But weirdly only when using PCIe passthrough?
 
Last edited:
Thank you for chasing this down. It makes sense that it takes memory to manage (potentially a lot of) memory. It would be nice to be able to set a fixed (not hotplugged) minimum and a maximum (hotpluggable) memory size :cool:.
The issue happens even without hotplug, so that unfortunately doesn't help. OVMF would need to know about that limit and respect the information for the page tables.
EDIT: but it's unlikely that will happen, I mean, they switched away from using (assigned) memory to guess the address width ;)
Regardless, limiting the maximum hotplug memory is a planned feature (patch on mailing list).

We'll have to wait on what upstream has to say, but it might be necessary to assign more memory to such VMs in the future (or not use OVMF BIOS or use kvm64 CPU type). Link to discussion: https://edk2.groups.io/g/devel/topic/94113631#101449
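The interim workarounds above, expressed as qm commands (using VMID 504 from this thread; pick whichever change suits the guest):

```shell
# Either give the VM more memory ...
qm set 504 --memory 1024
# ... or switch to a CPU type with a smaller reported address width:
qm set 504 --cpu kvm64
```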
 
Last edited:
The issue happens even without hotplug, so that unfortunately doesn't help. OVMF would need to know about that limit and respect the information for the page tables.
In my case (VMs with PCIe passthrough, otherwise I would not need memory hotplug) using EPYC(-IBPB) or disabling hotplug works as a work-around. Giving them more (than 12GB of) memory does not appear to help. A little more memory does help for VMs that had minimal memory assigned.
We'll have to wait on what upstream has to say, but it might be necessary to assign more memory to such VMs in the future (or not use OVMF BIOS or use kvm64 CPU type). Link to discussion: https://edk2.groups.io/g/devel/topic/94113631#101449
Thanks. I'll wait and see. Only the combination of OVMF with PCIe passthrough and memory hotplug and EPYC-Rome (or above/max/host) does not work for me. Changing any of those four things is a workaround currently.

EDIT: With the latest update of pve-edk2-firmware to 3.20230228-1 today, EPYC-Rome works again but host still gives out of memory.
 
Last edited:
