Lower RAM usage than expected even though VMs have ballooning disabled

Fantu

Hi, on the latest Proxmox server, updated to the latest version, I added 2 VMs today and noticed that even though ballooning is disabled, which should assign the full amount of RAM set in the VM, the usage is less than expected.
There are 2 VMs running, one with 8 GB of RAM and one with 12 GB, ballooning disabled on both, but on the host I see only 13 GB used; I also checked with top over SSH.
Is this another regression, or is there something I don't know?

Here are the details of the Proxmox software versions and the VM configurations:
Code:
proxmox-ve: 8.3.0 (running kernel: 6.8.12-5-pve)
pve-manager: 8.3.1 (running version: 8.3.1/fb48e850ef9dde27)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-5
proxmox-kernel-6.8.12-5-pve-signed: 6.8.12-5
proxmox-kernel-6.8.12-4-pve-signed: 6.8.12-4
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.2.0
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.10
libpve-cluster-perl: 8.0.10
libpve-common-perl: 8.2.9
libpve-guest-common-perl: 5.1.6
libpve-http-server-perl: 5.1.2
libpve-network-perl: 0.10.0
libpve-rs-perl: 0.9.1
libpve-storage-perl: 8.3.0
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.5.0-1
proxmox-backup-client: 3.3.2-1
proxmox-backup-file-restore: 3.3.2-2
proxmox-firewall: 0.6.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.3
pve-cluster: 8.0.10
pve-container: 5.2.2
pve-docs: 8.3.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.1.0
pve-firmware: 3.14-2
pve-ha-manager: 4.0.6
pve-i18n: 3.3.2
pve-qemu-kvm: 9.0.2-4
pve-xtermjs: 5.3.0-3
qemu-server: 8.3.2
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.6-pve1

Code:
qm config 101
agent: 1
balloon: 0
boot: order=scsi0
cores: 6
cpu: host
ide2: none,media=cdrom
memory: 8192
meta: creation-qemu=9.0.2,ctime=1733399411
name: PORTALE-FRONT
net0: virtio=BC:24:11:45:CD:C3,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
parent: pre-update
scsi0: nvme-disks:vm-101-disk-0,backup=0,cache=writeback,discard=on,iothread=1,size=150G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=b30e1113-3f18-4d9d-bd71-0ba9e3fd8fd2
sockets: 1
startup: order=2,up=30
vmgenid: 07ae9d12-8f05-480e-8c2f-973cef8430fd

Code:
qm config 102
agent: 1
balloon: 0
boot: order=scsi0
cores: 6
cpu: host
ide2: none,media=cdrom
memory: 12288
meta: creation-qemu=9.0.2,ctime=1733399532
name: PORTALE-BACK
net0: virtio=BC:24:11:F3:26:5A,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: nvme-disks:vm-102-disk-0,backup=0,cache=writeback,discard=on,iothread=1,size=50000M,ssd=1
scsi1: nvme-disks:vm-102-disk-1,backup=0,cache=writeback,discard=on,iothread=1,size=300G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=8e9d744e-841d-4029-9960-05e32754a694
sockets: 1
startup: order=1,up=30
vmgenid: 50176ee7-a20c-48fc-86b9-89f6beed74cc
 
Hi, I assume these are both Linux VMs? With Linux VMs, it can happen that the QEMU process only takes up the full amount of configured memory once the guest OS actually accesses it [1]. You could try running stress-ng --vm 1 --vm-bytes N, with N being e.g. 10G, and you should see the QEMU process on the host take up more memory. With ballooning disabled, the QEMU process will hold onto that memory even after stress-ng in the guest has exited.

[1] https://lwn.net/Articles/808807/
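
A quick way to observe this (a sketch, assuming VM 101 and the standard Proxmox PID file location; adjust the size to your VM):
Code:
# inside the guest: allocate and hold ~6 GiB for one minute
stress-ng --vm 1 --vm-bytes 6G --vm-keep --timeout 60s

# on the host: watch the resident set size of the VM's QEMU process grow
watch -n 1 'ps -o rss=,cmd= -p "$(cat /var/run/qemu-server/101.pid)"'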
 
Thanks for the reply.
The VMs are both Linux.
Anyway, first of all I don't understand how to configure Proxmox correctly so that it does not use ballooning and assigns all the memory to the VM at start; I thought that disabling the ballooning option, which leaves a single memory field instead of separate minimum and maximum ones, would already do that.
If ballooning were effectively disabled, that would be visible immediately, without the need for any RAM usage test.
From what you wrote, it seems that it uses ballooning even when it is disabled in the options.
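
One way to confirm whether the balloon device is really absent (a sketch, assuming VM 101): check the KVM command line Proxmox generates.
Code:
# print the generated KVM command line; with balloon: 0 there should be
# no virtio-balloon device in it
qm showcmd 101 --pretty | grep -i balloon || echo "no balloon device found"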

EDIT:
I did more tests: even with ballooning enabled (which exposes the minimum RAM field) and the minimum set equal to the maximum (so that no part of the RAM is managed dynamically by ballooning), it still does not seem to reserve the maximum RAM at VM start.
I can't tell whether it's me who can't find the correct way, or whether dynamic RAM management simply can't be disabled in Proxmox.
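
For reference, the two configurations I tried would look like this with qm set (a sketch, assuming VM 101):
Code:
# ballooning fully disabled: no balloon device, fixed memory size
qm set 101 --memory 8192 --balloon 0

# ballooning enabled, but minimum equal to maximum (no dynamic range)
qm set 101 --memory 8192 --balloon 8192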
 
I did more tests, but there is no way to allocate the RAM at start (it only happens on Windows VMs, which touch most of their RAM at boot for internal reasons).

Looking at .../pve-docs/chapter-qm.html#qm_memory, it seems that the memory set (or the minimum, in case of ballooning) is always available to the VM, but from the tests done it does not appear to be truly reserved, and in both cases it is possible to start a VM with an amount of RAM beyond the host's maximum, even after accounting for the other active VMs.
So I suppose there are no protections that prevent a VM from starting (or at least show a warning) in case of serious RAM overcommitment (assuming there are no bugs), or am I wrong?
In that case it all depends on the users, who have to size the RAM and start the VMs with this in mind to avoid problems: that is, it is essential that the total minimum RAM of the active VMs, plus the minimum the host needs to function (depending on what it runs), never exceeds the total RAM of the host.

EDIT:
I think it would be useful to have at least a warning when starting a VM whose "minimum RAM" is greater than the host's available memory (free + buffers/cache).
It would warn at least some users that they could run into RAM-related problems (with a possible OOM on the host or in the VMs).
Secondarily, there could be a further warning when the host RAM is less than the sum of the minimum RAM of all active VMs plus 1 GB for the host, although I suppose this has a lower probability of leading to problems from excessive RAM usage.
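
A rough sketch of such a check (assuming the usual qm list and qm config output formats, and using the configured memory as the minimum since ballooning is disabled):
Code:
#!/bin/sh
# sum the configured RAM of all running VMs (MiB)
total=0
for vmid in $(qm list | awk '$3 == "running" {print $1}'); do
    mem=$(qm config "$vmid" | awk '/^memory:/ {print $2}')
    total=$((total + mem))
done
# host memory currently available (MiB)
avail=$(awk '/^MemAvailable:/ {print int($2/1024)}' /proc/meminfo)
echo "running VMs: ${total} MiB configured, host available: ${avail} MiB"
[ "$total" -gt "$avail" ] && echo "WARNING: configured RAM exceeds available host memory"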
 
