Proxmox Memory

Netcetera-Chris

Quick question.

I know I can overcommit memory, but why does the host not show the true RAM usage on the summary?

For example

Host has 256GB RAM

and I run 4x VMs with 96GB RAM assigned each, with ballooning enabled - each VM is only actually using 4GB - so why does the host not reflect this and only show the RAM actually being used?
 
Windows uses "unused" RAM for caching. That memory is not returned to the host, so it is "in use" even though Windows lies and tells you it is free (it is kind of a "white lie", because Windows can make some of it available instantly when needed). The "true RAM" is the amount you allocated. The display on the dashboard is mainly for you to evaluate whether you gave the VM the right amount.

Overcommitting memory is a very bad idea unless you enjoy having your VMs killed by out-of-memory errors.
 
Thank you.

I understand overcommitting is a bad idea; it was just for testing and debugging. Is there any setting within Windows that can release unused RAM back to the host?
 
Currently, Windows allocates all memory at boot (filling all memory pages with zeros).

The only way to release it is to use ballooning manually (set a minimum memory + shares=0).
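As a rough sketch of what that looks like on the CLI (VM ID 101 and the sizes here are made up, memory values are in MiB):

qm set 101 --memory 98304 --balloon 4096 --shares 0
# --memory = maximum, --balloon = minimum, shares=0 disables auto-ballooning
qm monitor 101
# then, inside the monitor:
balloon 4096     # ask the guest to shrink towards 4 GiB
info balloon     # show the current balloon size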

Note that KSM also works fine: when you reach 80% memory usage, it'll deduplicate all the zeroed memory pages, but that takes some time (minutes to hours).
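You can watch KSM do its work on the host. The 80% trigger is the ksmtuned default (KSM_THRES_COEF=20 in /etc/ksmtuned.conf), and the counters below are standard kernel sysfs files:

grep -H . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing
# pages_sharing = how many pages currently resolve to a shared (deduplicated) page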

QEMU has recently added support for Hyper-V memory ballooning; once it's implemented in Proxmox, I think it'll be possible to retrieve unused Windows memory.
 
I have a Debian VM running on Hyper-V under Windows 10 that I use for embedded development. I have to say that the dynamic memory thing is not really all that great. While it might be useful for some use-cases, for me it was just too slow to allocate more RAM when needed. It was on the other hand very eager to take some away. I ended up turning it off.

ETA: I think one issue was that compilers benefit from having a large cache but Hyper-V memory management favored keeping the cache small.

Even if it works well for a given use-case, you still have the problem of potential OOM situations. If a bunch of VMs decide they need an amount of RAM that exceeds what the host has, something has to give: either the host swaps and kills performance, or it kills one or more VMs, or the VMs themselves kill important processes. All bad.

Nothing good comes from trying to overcommit memory. Sounds good in theory, works poorly in practice.
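If you suspect that has already happened on a host, the kernel log records it; a quick check with standard tools (nothing Proxmox-specific):

journalctl -k | grep -iE 'out of memory|oom-kill'
# a killed VM shows up as an OOM kill of its kvm process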
 
and I run 4x VMs with 96GB RAM assigned each, with ballooning enabled - each VM is only actually using 4GB - so why does the host not reflect this and only show the RAM actually being used?
I am not sure why people are mentioning Windows, unless I missed something? To the OP, the answer is likely ZFS caching. Proxmox will use a big chunk of available memory for ARC caching:

Limit ZFS Memory Usage

ZFS uses 50 % of the host memory for the Adaptive Replacement Cache (ARC) by default. For new installations starting with Proxmox VE 8.1, the ARC usage limit will be set to 10 % of the installed physical memory, clamped to a maximum of 16 GiB. This value is written to /etc/modprobe.d/zfs.conf.

You can change the size of the memory cache if you want: https://pve.proxmox.com/wiki/ZFS_on_Linux
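For example, to cap the ARC at 16 GiB, add this line to /etc/modprobe.d/zfs.conf (the value is in bytes):

options zfs zfs_arc_max=17179869184

update-initramfs -u -k all   # rebuild the initramfs, then reboot
# or apply immediately at runtime:
echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max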
 
I am new to Proxmox and have some 8.1-2 installations: the ARC cache isn't limited to 10% or 16GiB. On our host systems (with local ZFS storage), the ARC has occupied ~50% of the memory. Example of a running system:
- Installed memory: 256GB
- ZFS ARC: 128GB in use (50%)
- Free: 26GB (~10%)
- VMs/system: the rest
So memory is nearly full. Do I have to limit the ARC value when adding more VMs, or is ARC "clever" enough to release it when needed? In this case it would be perfect to see in the GUI how much memory is occupied and needed by VMs and how much is used as "buffer/cache".
 
I am new to Proxmox and have some 8.1-2 installations: the ARC cache isn't limited to 10% or 16GiB. On our host systems (with local ZFS storage), the ARC has occupied ~50% of the memory.
That was only changed from 50% to 10% a few weeks ago; older installations will still use the 50% default.
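You can verify what a given host is actually doing; the files below are standard OpenZFS paths, and a zfs_arc_max of 0 means the built-in default (50% of RAM on older installations):

awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats
cat /sys/module/zfs/parameters/zfs_arc_max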


Do I have to limit the ARC value when adding more VMs, or is ARC "clever" enough to release it when needed?
Depends. Yes, ARC will shrink when RAM is needed by something else, but it won't do that as responsively as the Linux page cache. So there might be situations where a lot of RAM is needed very fast, the ARC is too slow to release it, and you hit an OOM situation where a guest gets killed.

In this case it would be perfect to see in the GUI how much memory is occupied and needed by VMs and how much is used as "buffer/cache".
We've got lots of feature requests for this.
 
and I run 4x VMs with 96GB RAM assigned each, with ballooning enabled - each VM is only actually using 4GB - so why does the host not reflect this and only show the RAM actually being used?
Also keep in mind that ballooning does not work the way most people think: it's not "start minimal and expand if needed", it's the other way around. Each VM will allocate its maximum, and it may volunteer memory back when it is needed elsewhere, or it may not, and something will be killed due to an Out-of-Memory (OOM) condition.
As others have already said, it's best to just not use ballooning.
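If you follow that advice, a fixed allocation looks like this (VM ID 101 is just an example; balloon=0 disables the balloon device entirely):

qm set 101 --memory 4096 --balloon 0   # fixed 4 GiB, no ballooning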
 
