[TUTORIAL] FABC: Why is ProxmoxVE using all my RAM and why can't I see the real RAM usage of the VMs in the dashboard?

Johannes S

Renowned Member
Sep 7, 2024
This article is about a question frequently answered by community members. It was inspired by UdoB's great FabU posts ( https://forum.proxmox.com/search/7994094/?q=FabU&c[title_only]=1&c[users]=UdoB ) and aims to give an in-depth explanation of an often asked question so it can be referenced in future threads. It will be changed as needed, any feedback/suggestions welcome :)

My dashboard shows that most of my RAM is used although my VMs and LXCs only have a small fraction allocated. Is this a memory leak bug?

Probably not. First you need to know that ProxmoxVE is based on Debian Linux with an Ubuntu-based kernel. The Linux kernel is quite good at using otherwise unused memory for disk caches.
This is a good thing since it speeds things up, but it often worries users who switched from a different operating system or hypervisor (those do similar things, their monitoring just reports them differently).
In fact it would be more worrisome if you had, say, 128 GB RAM but only 32 GB of it were used. Unused RAM is basically wasted. Some more information on this topic can be found on https://www.linuxatemyram.com/
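If you want to check this yourself, free -m on the host shows how much of the "used" RAM is actually sitting in caches (the numbers below are made up for illustration):

free -m
#                total        used        free      shared  buff/cache   available
# Mem:          128000       45000        2000         150       81000       80000
# A small "free" value is normal here: most of the RAM sits in "buff/cache" and is
# handed back to applications on demand, so "available" is the number to look at.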

I used free -m to determine the size of my caches, but there is still used RAM in the dashboard which is neither used by the system caches nor by my VMs and LXCs.

Do you happen to use ZFS? ZFS uses part of the host memory for its Adaptive Replacement Cache (ARC). Its usage is not reflected in the free -m output. By default ZFS will use up to around 50% of the system RAM, but you can reconfigure this
in /etc/modprobe.d/zfs.conf. Beginning with ProxmoxVE 8.1, this file is generated during installation and limits the ARC to 10% of the installed physical memory, up to a maximum of 16 GiB.
Read https://pve.proxmox.com/wiki/ZFS_on_Linux#sysadmin_zfs_limit_memory_usage to change this limit or to create the file if your system doesn't have it.
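As a rough sketch (the 8 GiB value is only an example, pick whatever fits your workload), limiting the ARC as described in the wiki boils down to:

# Limit the ZFS ARC to 8 GiB (the value is in bytes):
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
# If your root filesystem is on ZFS, also refresh the initramfs so the limit applies at boot:
update-initramfs -u -k all
# You can check the current ARC size (in bytes) at any time with:
awk '$1 == "size" {print $3}' /proc/spl/kstat/zfs/arcstats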


I looked up the RAM usage inside the VM; it actually uses a lot less RAM than I configured in the VM settings. Shouldn't the dashboard of ProxmoxVE reflect the real values?

The question is: which value is "real"? Inside the VM only part of the RAM is in use, but the VM as a whole still occupies all the RAM configured for it, which other VMs/containers etc. can't use. So to actually fine-tune your VM's memory usage (i.e. configure your VM's memory setting so it has exactly the RAM it needs, not more, not less) you will need a monitoring tool running inside your VM. Selecting the right software for this would be a post of its own, and I don't know your skill set and use cases. There are also different types of monitoring software depending on what you actually want to achieve.
For more information on the different types read this blog article by Kristian Koehntopp:
https://blog.koehntopp.info/2017/08/09/monitoring-the-data-you-have-and-the-data-you-want.html
And if you understand German (or have access to a good (machine) translator ;) ) Marianne Spiller did a great piece on monitoring basics and on analysing requirements, designing and implementing a monitoring system:
https://www.unixe.de/monitoring-basics/

In our context we are talking about software Köhntopp calls a type one monitoring system. It's used for observing system parameters and alerting if something goes wrong (e.g. a full disk), but can also be used to observe memory usage and CPU/system load.
Popular open source examples of that kind of monitoring software are Zabbix, Prometheus, Icinga2, Nagios or Checkmk. But there are also commercial offerings (like PRTG) and less widely used open source tools. Choose one of them (maybe after experimenting with each of them in a small LXC or VM just for playing around), observe the RAM usage of your containers and VMs and adapt the VM settings to your needs.
Please also see https://forum.proxmox.com/threads/p...this-one-thing-the-memory-usage-graph.149473/ for a larger thread on memory usage monitoring in ProxmoxVE and the current implementation of the graphs.
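If you just want a quick manual check without setting up any monitoring software, you can compare the different "views" of a VM's memory like this (VM ID 100 is only a placeholder):

qm config 100 | grep -E '^(memory|balloon)'   # what you assigned to the VM
qm status 100 --verbose | grep -i mem         # what PVE/QEMU reports for it on the host
# ...and inside the guest itself:
free -m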


Other hypervisors have options where RAM not used by one VM is dynamically allocated to other VMs, how can I achieve this with ProxmoxVE?

In theory there are two options for dynamic memory management:
- Sharing memory between different Linux VMs with Kernel Samepage Merging (KSM). This needs a lot of VMs running similar applications and kernels to work, see https://pve.proxmox.com/wiki/Kernel_Samepage_Merging_(KSM) for more information.
- With a ballooning driver in your VMs, the VMs can give back unused memory during runtime (memory ballooning). This driver is already included in modern Linux kernels; on Windows you first need to install it. In both cases you need to actually activate this feature in the VM settings, see https://pve.proxmox.com/wiki/Dynamic_Memory_Management for more information (a small example follows below).

But don't expect too much: first the RAM needs to get almost fully used before ballooning is even tried, and you can never have too much RAM.
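As a small sketch of the ballooning setup (VM ID 100 and the sizes are only examples): give a VM up to 8 GiB but allow ballooning to shrink it down to 2 GiB when the host runs low on memory, and check on the host how many pages KSM currently merges:

qm set 100 --memory 8192 --balloon 2048
# KSM is handled by the ksmtuned service on the host, this shows how many pages it currently shares:
cat /sys/kernel/mm/ksm/pages_sharing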

I still have unanswered questions.

Feel free to ask here or in the original thread where this posting was referenced.

There is something wrong in this article / I have a suggestion for something you should add

I'm happy about any feedback, just drop me a PM or answer in this thread.
Version 0.1: First post
Version 0.2: Some typos, added information about the balloon driver, thanks to @Impact for the suggestions
 
Are you sure ballooning requires the guest agent?
It depends, to quote from the linked wiki page: https://pve.proxmox.com/wiki/Dynamic_Memory_Management

Requirements for Linux VM

Modern Linux kernels include the balloon driver by default. It works out of the box, and you only need to set the VM to "Automatically allocate memory within this range".

I didn't include it since I want people to actually read the documentation; the FABC is more like a point of reference, I don't want to duplicate the complete documentation ;)
 
Maybe I'm a bit too pedantic here but I think the guest agent is separate from the ballooning driver/service and one can be used/installed without the other present.
 
Maybe I'm a bit too pedantic here but I think the guest agent is separate from the ballooning driver/service and one can be used/installed without the other present.
That's a good point, I updated my post. Thanks for your contribution :)
 
Other hypervisors have options where RAM not used by one VM is dynamically allocated to other VMs, how can I achieve this with ProxmoxVE?
Memory is not reserved at VM start (unless you define static hugepages in the VM config directly), so it can be dynamically allocated to a different VM.

Then, once a VM reserves a memory page, it's reserved. Note that Windows zeroes all memory pages at boot (so they are all reserved), whereas a Linux VM only allocates the memory it needs at boot.

If the guest OS frees a memory page, the balloon driver can inform the Proxmox host that the page is free (something similar to discard for disks).
https://lwn.net/Articles/808807/
(So it's really important not to disable the balloon option, even if you don't use the ballooning min_size feature.)
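For example (VM ID 100 as a placeholder), you can check that the balloon device is not disabled in the VM config; a line "balloon: 0" would mean it is turned off:

qm config 100 | grep balloon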


KSM is simply memory deduplication (including the zeroed pages from Windows) at the host level once the host reaches 80% memory usage.

Ballooning can dynamically decrease the guest memory down to the defined "min memory" when the host reaches 80% usage.

The guest agent is not used at all for memory management.

The balloon driver is also used by Proxmox to retrieve guest memory stats (if you disable it, you only see the memory usage of the whole QEMU process at host level).
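For example (again with VM ID 100 as a placeholder), the QEMU monitor shows what the balloon driver currently reports back to the host:

qm monitor 100
# then type "info balloon" at the monitor prompt and "quit" to leave it again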
 
Memory is not reserved at VM start (unless you define static hugepages in the VM config directly), so it can be dynamically allocated to a different VM.
Except in the case that PCI passthrough is configured: then the memory must be fully allocated ("active") from the start because of the chance that DMA is used.

Disclaimer: as-far-as-I-know = never been there