Memory missing from VM

masgo

I have an Ubuntu 18.04 LTS VM running on PVE 7 to which I assigned 32 GB of RAM and enabled ballooning. The VM does batch processing and uses > 20 GB of RAM for a few minutes at a time, after which it drops back to almost none (less than 500 MB); that's why ballooning is enabled.

The strange part is that RAM sometimes goes missing. I mean that the total amount of RAM the VM has drops well below 32 GB while the VM is running. The lowest I have seen was 14 GB: `htop` then shows only 14 GB total, `free -h` shows only 14 GB total, and `/proc/meminfo` shows only 14 GB `MemTotal`.

My first guess was that the ramdisk I use might be the cause, but limiting the ramdisk to 1 GB did not improve things.
The RAM also stays gone even if I reboot from within the VM. Only if I shut the VM down from Proxmox and restart it does the full RAM come back.

Any idea what might cause this?
 
Can you post your VM config?
 
Code:
agent: 1
balloon: 1024
boot: cdn
bootdisk: scsi1
cores: 2
cpu: host
ide2: none,media=cdrom
memory: 32768
name: testvm
net0: virtio=52:54:00:19:f0:94,bridge=vmbr0,firewall=1,tag=19
numa: 1
onboot: 1
ostype: l26
protection: 1
scsi1: local-zfs:vm-403-disk-0,discard=on,size=18G
scsihw: virtio-scsi-pci
smbios1: uuid=f3f14df0-ea96-48db-9ccf-2ccc222c25ab
sockets: 1
tablet: 0
vmgenid: 742530e1-b048-456d-8878-da2e5af5f8e6
 
That's how ballooning works. You set it to 1 GB minimum and 32 GB maximum RAM. As soon as your host exceeds 80 % RAM usage, PVE will start ballooning and slowly remove the guest's RAM, starting from 32 GB, until either the 1 GB minimum is reached or the host's RAM usage is no longer above 80 %. It forces the guest to free up the RAM by slowly taking it away, so it is normal that your guest only has something between 1 GB and 32 GB of total memory. Ballooning won't care whether the guest actually needs that RAM or not: it will just remove it, and if the guest OS has already dropped all caches and still needs more RAM, it will OOM-kill the running processes. So if your processes sometimes need 20+ GB of RAM, you shouldn't set the minimum RAM lower than that, so your batch-processing process won't get killed.
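To watch this from the host side, the QEMU monitor reports the current balloon size; a quick check, using VMID 403 from the config above:

Code:
# on the PVE host: attach to the VM's QEMU monitor
qm monitor 403
# at the qm> prompt, print the current balloon size in MiB
info balloon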
 
Oh, I did not know about that 80 % limit; there is also no hint of it in the documentation. But it all makes sense now. After rebooting, the full 32 GB are available, and then it gets shrunk down to about 14 GB. At that point the host is at exactly 80 % RAM usage.

How can I influence this? The server has 128 GB of RAM, so the 80 % threshold means it keeps ~25 GB free. Since I use ZFS, whose very aggressive caching may take up 50 % of the RAM, this leaves only a tiny fraction for the VMs.

At the moment I have 7 VMs running on this server, all with ballooning enabled and maximum settings of 4× 4 GB, 16 GB, and 32 GB. If all VMs used their maximum amount of RAM, that would come to 64 GB. As I see it, the remaining 64 GB should be comfortably enough for ZFS and the PVE host. KSM sharing is also in use and usually gives back ~16 GB, because 5 of the VMs are very similar Windows VMs and the other two are similar Ubuntu VMs.
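For reference, the host-side KSM savings can be approximated from sysfs; a rough check, assuming 4 KiB pages:

Code:
# approximate memory saved by KSM on the PVE host, assuming 4 KiB pages
awk '{printf "%.1f GiB saved by KSM\n", $1 * 4096 / 1024^3}' /sys/kernel/mm/ksm/pages_sharing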

So, how can I change this 80 % value for the ballooning?

The two large VMs don't need the RAM all the time. In fact, one of them only needs RAM during working hours, while the other runs its batch processing outside of working hours. That seems like the perfect application for ballooning.
 
 
masgo said:
Oh, I did not know about that 80 % limit; there is also no hint of it in the documentation. But it all makes sense now. After rebooting, the full 32 GB are available, and then it gets shrunk down to about 14 GB. At that point the host is at exactly 80 % RAM usage.
There is an article in the Wiki explaining this:

Memory

For each VM you have the option to set a fixed amount of memory, or to ask Proxmox VE to dynamically allocate memory based on the current RAM usage of the host.
[Screenshot: gui-create-vm-memory.png]

Fixed Memory Allocation

When setting memory and minimum memory to the same amount, Proxmox VE will simply allocate what you specify to your VM.
Even when using a fixed memory size, the ballooning device gets added to the VM, because it delivers useful information such as how much memory the guest really uses. In general, you should leave ballooning enabled, but if you want to disable it (e.g. for debugging purposes), simply uncheck Ballooning Device or set
balloon: 0
in the configuration.
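The same can be done on the CLI with `qm set`; for example, for the VM from this thread:

Code:
# disable the ballooning device for VM 403 (writes balloon: 0 to the config)
qm set 403 --balloon 0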

Automatic Memory Allocation

When setting the minimum memory lower than memory, Proxmox VE will make sure that the minimum amount you specified is always available to the VM, and if RAM usage on the host is below 80%, will dynamically add memory to the guest up to the maximum memory specified.
When the host is running low on RAM, the VM will then release some memory back to the host, swapping out running processes if needed and starting the OOM killer as a last resort. The passing around of memory between host and guest is done via a special balloon kernel driver running inside the guest, which will grab or release memory pages from the host. [10]
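Applied to the VM from this thread, that means keeping the 32 GB maximum but raising the guaranteed minimum above what the batch job needs; the 24 GiB minimum below is only an illustration:

Code:
# VM 403: 32 GiB maximum, never balloon below 24 GiB (values in MiB)
qm set 403 --memory 32768 --balloon 24576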
When multiple VMs use the autoallocate facility, it is possible to set a Shares coefficient which indicates the relative amount of the free host memory that each VM should take. Suppose for instance you have four VMs, three of them running an HTTP server and the last one is a database server. To cache more database blocks in the database server RAM, you would like to prioritize the database VM when spare RAM is available. For this you assign a Shares property of 3000 to the database VM, leaving the other VMs to the Shares default setting of 1000. The host server has 32GB of RAM, and is currently using 16GB, leaving 32 * 80/100 - 16 = 9GB RAM to be allocated to the VMs. The database VM will get 9 * 3000 / (3000 + 1000 + 1000 + 1000) = 4.5 GB extra RAM and each HTTP server will get 1.5 GB.
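The arithmetic of that example can be replayed in the shell, and the Shares weight itself is set per VM with `qm set` (the VMID below is hypothetical):

Code:
# spare pool from the example: 9 GiB, in MiB
pool=$(( 9 * 1024 ))
# the database VM holds 3000 of 6000 total shares -> 4608 MiB (= 4.5 GiB) extra
echo $(( pool * 3000 / (3000 + 1000 + 1000 + 1000) ))
# give the database VM (hypothetical VMID 101) the higher weight:
qm set 101 --shares 3000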
All Linux distributions released after 2010 have the balloon kernel driver included. For Windows OSes, the balloon driver needs to be added manually and can incur a slowdown of the guest, so we don’t recommend using it on critical systems.
When allocating RAM to your VMs, a good rule of thumb is always to leave 1GB of RAM available to the host.

masgo said:
How can I influence this? The server has 128 GB of RAM, so the 80 % threshold means it keeps ~25 GB free. Since I use ZFS, whose very aggressive caching may take up 50 % of the RAM, this leaves only a tiny fraction for the VMs.
If you think ZFS doesn't need 50 % of your RAM for caching, you can limit the ARC size to whatever you want. This is also described in the wiki:

Limit ZFS Memory Usage

ZFS uses 50 % of the host memory for the Adaptive Replacement Cache (ARC) by default. Allocating enough memory for the ARC is crucial for IO performance, so reduce it with caution. As a general rule of thumb, allocate at least 2 GiB Base + 1 GiB/TiB-Storage. For example, if you have a pool with 8 TiB of available storage space then you should use 10 GiB of memory for the ARC.
You can change the ARC usage limit for the current boot (a reboot resets this change again) by writing to the zfs_arc_max module parameter directly:
echo "$[10 * 1024*1024*1024]" >/sys/module/zfs/parameters/zfs_arc_max
To permanently change the ARC limits, add the following line to /etc/modprobe.d/zfs.conf:
options zfs zfs_arc_max=8589934592
This example setting limits the usage to 8 GiB (8 * 2^30 bytes).
In case your desired zfs_arc_max value is lower than or equal to zfs_arc_min (which defaults to 1/32 of the system memory), zfs_arc_max will be ignored unless you also set zfs_arc_min to at most zfs_arc_max - 1.
echo "$[8 * 1024*1024*1024 - 1]" >/sys/module/zfs/parameters/zfs_arc_min
echo "$[8 * 1024*1024*1024]" >/sys/module/zfs/parameters/zfs_arc_max
This example setting (temporarily) limits the usage to 8 GiB (8 * 2^30 bytes) on systems with more than 256 GiB of total memory, where simply setting zfs_arc_max alone would not work.
If your root file system is ZFS, you must update your initramfs every time this value changes:
# update-initramfs -u
You must reboot to activate these changes.
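To check that the limit is actually in effect, the live ARC numbers can be read from the kernel stats; one way to do so:

Code:
# current ARC size ("size") and configured ceiling ("c_max"), in bytes
awk '/^size|^c_max/ {printf "%s: %.1f GiB\n", $1, $3/1024^3}' /proc/spl/kstat/zfs/arcstats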


masgo said:
So, how can I change this 80 % value for the ballooning?
Not sure if you can change that limit for ballooning; I've never seen an option for that. KSM also has such a limit (80 % too, by default), but the KSM limit can be changed by editing `KSM_THRES_COEF=20` in `/etc/ksmtuned.conf`.
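A minimal sketch of that change (KSM_THRES_COEF is the percentage of free host RAM below which ksmtuned starts merging, so raising it from 20 makes KSM kick in earlier):

Code:
# in /etc/ksmtuned.conf, set e.g.:
#   KSM_THRES_COEF=30
# then restart the daemon so it picks up the new threshold:
systemctl restart ksmtuned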
 
