Proxmox not allocating set memory to VMs

Analius
Feb 19, 2021
I have this weird issue where none of the VMs I run get the allocated amount of memory.
When rebooting a VM, the correct amount of memory is in fact available in the VM, but after a couple of hours the maximum available memory in the VM is way less. Adjusting the amount of memory in Proxmox and rebooting the VM through Proxmox resolves the issue, but only temporarily. After a while, the issue appears again.

For example, a VM which I've set to have 16000 MiB of memory will have that at first, but after a few hours the total memory reported by free will be around 2.7 GB. This happens on all of my VMs, though all of them run the same OS, Ubuntu Server 20.04.3.

Bash:
root@pve:~# pveversion
pve-manager/7.1-10/6ddebafe (running kernel: 5.13.19-3-pve)

What is causing this?
 
can you post the vm config and the output of 'free' from the vm?
 
Do you maybe use ballooning? Ballooning will slowly lower the RAM of a VM as soon as your Host RAM usage is above 80%.
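If you want a VM to keep a fixed amount of RAM instead, you can effectively turn ballooning off, either by disabling the balloon device or by raising the minimum to match the maximum. A sketch using the CLI (<vmid> is a placeholder; the same options are in the GUI under Hardware > Memory):

Bash:
# give the VM a fixed 16000 MiB, no ballooning range
qm set <vmid> --memory 16000 --balloon 0
# or keep the balloon device but raise the floor to the maximum
qm set <vmid> --memory 16000 --balloon 16000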
 
can you post the vm config and the output of 'free' from the vm?

VM config:
Bash:
agent: 1
balloon: 3000
bios: ovmf
boot:
cores: 8
cpu: host
efidisk0: NVMe-Striped:vm-132-disk-0,efitype=4m,pre-enrolled-keys=1,size=1M
machine: q35
memory: 14000
meta: creation-qemu=6.1.0,ctime=1639849428
name: Ceres-02
net0: virtio=B2:7F:D3:43:DD:D0,bridge=vmbr0
net1: virtio=A2:CA:FC:66:78:F5,bridge=vmbr1
numa: 0
ostype: l26
scsi0: NVMe-Striped:vm-132-disk-1,aio=native,discard=on,size=50G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=a0d57b8d-92cc-48b1-a0c9-942bf5fbc65b
sockets: 1
tablet: 0
vmgenid: 5aac0a6a-b3d3-47c1-92d1-3ab6d04e66f9

Bash:
administrator@ceres-02:~$ free
              total        used        free      shared  buff/cache   available
Mem:        2686620     1565516      236176       12616      884928      828944
Swap:       4194300       19456     4174844
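The host side tells the same story; the balloon's current size can be read from the QEMU monitor (a sketch, assuming VMID 132 from the config above; the actual output will vary):

Bash:
root@pve:~# qm monitor 132
qm> info balloon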

Do you maybe use ballooning? Ballooning will slowly lower the RAM of a VM as soon as your Host RAM usage is above 80%.
Hmm, I do indeed use ballooning; I did not know that this was a "feature". Is there a way to disable this? I'm allowing ZFS to use 95% of my memory, since ZFS should free memory when other services need it.
This, combined with ballooning, allows a large ZFS cache while also allowing the VMs to use large amounts of memory when they need it, which is maybe 1% of the time. As most of the time the VMs don't actually need more than 2 GB of memory, it seems like a waste to disable ballooning.
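In case it helps others: the ZFS ARC size can also be capped explicitly so it competes less with the VMs. A sketch (the 8 GiB value is just an example, pick one that fits your host):

Bash:
# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=8589934592
# then rebuild the initramfs and reboot
update-initramfs -u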
 
By the looks of things, this is a classic case of RTFM... I misunderstood the "minimum" setting for memory with ballooning enabled; it seems that increasing it should fix my issue.
 
Yes, the minimum RAM for ballooning shouldn't be lower than what your VM might need at any time, because the host will remove RAM from the VM whether that RAM is in use or not. So if you give the VM 14 GB of RAM with a minimum of 3 GB, and the VM is using 6 GB for processes and 8 GB for caching, then once ballooning kicks in it will slowly reduce the VM's RAM from 14 GB down to 3 GB. First the VM will be forced to drop its caches, but ballooning won't stop at 6 GB; it will keep reducing the RAM down to 3 GB, so the VM has to kill 3 GB worth of processes.
So overprovisioning the RAM isn't really possible.
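The arithmetic above can be sketched as a toy model. To be clear, this is just an illustration of the failure mode with the numbers from this thread, not Proxmox's actual ballooning algorithm:

```python
# Illustrative sketch (NOT Proxmox's real algorithm) of what happens
# when the balloon inflates all the way down to the configured minimum,
# regardless of what the guest is actually using.

def balloon_squeeze(max_mib, min_mib, used_by_processes, used_by_cache):
    """Return (cache_dropped, processes_killed) in MiB once the balloon
    has deflated the guest down to min_mib."""
    target = min_mib                                  # ballooning aims at the minimum
    in_use = used_by_processes + used_by_cache
    if in_use <= target:
        return (0, 0)                                 # everything still fits
    deficit = in_use - target
    cache_dropped = min(deficit, used_by_cache)       # caches are dropped first
    processes_killed = deficit - cache_dropped        # then the OOM killer steps in
    return (cache_dropped, processes_killed)

# The numbers from the post: 14336 MiB max, 3072 MiB min,
# 6144 MiB in processes, 8192 MiB in caches.
print(balloon_squeeze(14336, 3072, 6144, 8192))       # (8192, 3072)
```

That is: all 8 GB of cache is dropped and 3 GB of processes still have to die, exactly as described above.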
 