RAM thresholds?

Dunuin

Hi,

I've got 64GB of RAM and allocated 74GB to my VMs plus an 8GB ARC cache, which basically works fine; I'm at around 70% RAM usage.
What are you doing to optimize RAM usage?

I set KSM to start at 60% RAM usage so it begins deduplicating before any swapping happens. This saves around 11GB of RAM.
I set swappiness to 1 so swapping only kicks in when RAM gets really full.
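
For reference, this is roughly what those two changes look like on the host (the ksmtuned config path may differ between PVE versions):
Code:
# /etc/ksmtuned.conf -- KSM_THRES_COEF is the percentage of FREE memory
# below which ksmd starts merging, so 40 means "start at ~60% RAM usage"
KSM_THRES_COEF=40

# apply the new threshold
systemctl restart ksmtuned

# swap as late as possible; the echo line makes it persistent across reboots
sysctl vm.swappiness=1
echo "vm.swappiness = 1" >> /etc/sysctl.conf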

What I don't understand is the threshold for ballooning. I saw some VMs swapping, looked at the VMs' RAM usage in the Proxmox GUI, and there was enough RAM free. Then I ran "top" inside one of the VMs and saw that the VM only got 4GB of RAM while Proxmox reported 8GB. So ballooning had reduced the VM's RAM from the 8GB maximum to the 4GB minimum, which wasn't enough for a VM that had been using 7GB of RAM before.
When does ballooning start to limit the VMs' RAM? Is there a fixed threshold?
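
One way to see what the balloon driver has actually done to a guest is to ask QEMU directly (VMID 100 is just a placeholder here):
Code:
# open the QEMU monitor of the VM (VMID 100 is a placeholder)
qm monitor 100
# then, at the "qm>" prompt, query the balloon device;
# "actual" is the RAM currently assigned to the guest, in MiB
info balloon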

And I wasn't able to limit the caching inside the VMs. I'm running 13 Debian VMs and all of them try to use up to 90-95% of their available RAM for caching (using KVM cache mode=none). Some VMs need a lot of RAM, but only for short times, and afterwards they don't free the cached data so it would be available again for the other VMs.
That behavior is really annoying if you try to overprovision RAM. How are you handling this?
 
Some VMs need a lot of RAM, but only for short times, and afterwards they don't free the cached data so it would be available again for the other VMs.

That is normal OS behaviour and has been for decades.

That behavior is really annoying if you try to overprovision RAM. How are you handling this?

That's why you don't overprovision RAM: it has unforeseeable consequences in corner cases, and I at least want to have working machines all the time. RAM is cheap and is the simple solution to this problem.

If you really want the RAM to show up as free, just flush the caches regularly. You will have a lot of free RAM, and also slower machines.
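
A minimal sketch of what "flush regularly" could look like inside a guest; note that this throws away the whole page cache plus dentries and inodes, so expect slower reads right afterwards:
Code:
# write out dirty pages first, then drop page cache, dentries and inodes
sync
echo 3 > /proc/sys/vm/drop_caches

# e.g. once per hour from root's crontab:
# 0 * * * * sync && echo 3 > /proc/sys/vm/drop_caches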
 
I populated all 4 RAM slots to get those 64GB. More RAM (I think up to 512GB) would be possible, but that would really exceed my budget; it's just a private home server.
I don't want to flush the complete cache; I'm fine with it using RAM to cache the most-used files. I just would like the freeing of the cache to be more aggressive, so that data that is hours old and never touched again gets dropped from the cache.

BTW, how useful is Linux's caching inside the VMs if the virtual hard disks are backed by the host's ARC? Files are effectively cached three times, right? In the physical drive's own cache, in RAM in the host's ARC, and in RAM in the guests. At least for databases and the like, caching inside the guest should still make sense, because repeated reads then don't have to go through the virtio SCSI virtualization layer at all.
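
For reference, the cache mode is a per-disk option; a sketch with placeholder VMID/storage/volume names (cache=none skips the host page cache, but data on a ZFS zvol is still cached by the ARC):
Code:
# re-attach the existing disk of VM 100 with cache=none
qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=none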

Edit:
I made sure that all VMs work fine under load when operating with their minimum RAM, and all minimum RAM combined + ARC is less than 64GB. I just want ballooning so VMs can share the free RAM for additional but not mandatory stuff like caching.

For example, I've got a VM running emby, which uses the CPU to encode and stream videos. Most of the time the VM is just idling and wouldn't need more than 2GB of RAM. I set a minimum RAM of 4GB, which is enough to operate under all conditions, but I set the maximum RAM to 8GB so it can use some more if additional free RAM is available on the host. With more RAM available, emby keeps more of the temporary stream data in RAM and writes fewer temporary files to the SSD. So it will operate fine if only the minimum RAM is available, but it's silly for it to be forced to write GBs of temporary data to the SSD just because some other VMs won't free up Linux page cache that hasn't been used for hours or days, simply because Linux grabs all the RAM it can get.
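
That min/max split is just the memory (maximum) and balloon (minimum) options of the VM; a sketch assuming a placeholder VMID of 100:
Code:
# maximum 8GiB, balloon driver may shrink the guest down to 4GiB
qm set 100 --memory 8192 --balloon 4096

# balloon=0 would disable ballooning for this VM entirely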
 
I suggest you do not use ZFS in a hyperconverged solution. The first problem is that your point is correct: on a Proxmox system, if you don't use a virtual disk cache, ZFS is of little use, and if you do use it, then the host Linux, the guest operating system and the ZFS ARC all keep caches, and buffers will also be used when your disks can't write all the data fast enough. Also, using ZFS directly creates a lot of I/O load on the CPU, which means your CPU "wa" (iowait) will go up.

You may see more free RAM in the Proxmox UI, but it isn't actually free (watch that memory with dstat and you will understand what I mean). If your RAM fills up, your disk write speed drops and ZFS becomes useless. For a hyperconverged solution on Proxmox, the best option in my view is a thin LUN.

For memory ballooning you should install the guest agent. Also, older Windows operating systems like Windows 2012/R2 are very stubborn about giving unused memory back to Proxmox. Try copying a file from anywhere into Windows and then copying that file again inside Windows: Windows will use a crazy amount of buffer and cache.
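
A minimal sketch of enabling the guest agent (VMID 100 is a placeholder; the qemu-guest-agent package or the Windows guest tools also have to be installed inside the guest):
Code:
# expose the guest agent virtio device to VM 100
qm set 100 --agent enabled=1

# inside a Debian-based guest:
apt install qemu-guest-agent
systemctl enable --now qemu-guest-agent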

My disk layout is shown in the attached picture. I also have a very big swap area on a 120GB SSD, because if ballooning or KSM can't help me, I need a big swap area. If you need compression and deduplication on your system in the future, you can use VDO (it doesn't cope with big data sets yet).

[attached screenshot: disk.PNG]
 
I couldn't disagree more, @ertanerbek.
ZFS is great. Once you are aware of how it operates and how to deal with it, it's all fine.

Limiting the ARC makes sense, because otherwise you can end up in trouble, and that is exactly what the kernel module parameters are for.
I'd limit the ARC to 8GB and see how it goes. In my experience this is just fine with 4TB of usable storage on ZFS.
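
A sketch of how the 8GB ARC cap is usually set (value in bytes; rebuilding the initramfs is only needed when the root filesystem is on ZFS):
Code:
# /etc/modprobe.d/zfs.conf -- cap the ARC at 8 GiB
options zfs zfs_arc_max=8589934592

# pick the module option up at the next boot ...
update-initramfs -u

# ... or apply it immediately without a reboot
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max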

Aside from that: memory overcommit is dangerous (as all overcommits are); try to avoid it and size the VM resources reasonably.
Ballooning also puts some pressure on the host. It's less bad than swapping to disk, but far from optimal. Don't balloon for the sake of ballooning!
 
we all have our own experiences :)


Also, please don't forget that "free is not free": on computer systems too, if you gain something somewhere, you are losing something somewhere else :)
 
I limited the ARC to 8GB. That's totally fine (99.509% ARC hit rate).

This is my RAM utilization. KSM also works fine after I lowered the threshold so it kicks in at 60% instead of 80% RAM usage. With a swappiness of 1 there is nearly no swapping happening below 90% RAM utilization on the host:
[attached screenshot: ram.png]

The guests are currently 14 Debian VMs, all with qemu-guest-agent, a swappiness of 1, a 2GB swap partition and cache mode=none.


RAM and utilization for the guests:
The values for total, used, free and buff/cache were collected inside the guests with the "free -h" command.
Usage GiB and Usage % are taken from the Proxmox GUI.
Code:
+-----------+---------+---------+-----------+-----------+---------+-------+--------+--------+------------+
| Guest Nr. | Min RAM | Max RAM | Balloon   | Usage GiB | Usage % | total | used   | free   | buff/cache |
+-----------+---------+---------+-----------+-----------+---------+-------+--------+--------+------------+
| 1         | 8GB     | 8GB     | no        | 7.24GiB   | 90.5%   | 7.8G  | 5.7G   | 0.13G  | 2.0G       |
+-----------+---------+---------+-----------+-----------+---------+-------+--------+--------+------------+
| 2         | 4GB     | 4GB     | no        | 2.66GiB   | 66.6%   | 3.9G  | 1.3G   | 0.9G   | 2.3G       |
+-----------+---------+---------+-----------+-----------+---------+-------+--------+--------+------------+
| 3         | 4GB     | 8GB     | yes       | 2.87GiB   | 35.9%   | 7.8G  | 1.8G   | 4.9G   | 1.1G       |
+-----------+---------+---------+-----------+-----------+---------+-------+--------+--------+------------+
| 4         | 4GB     | 4GB     | no        | 3.48GiB   | 87%     | 3.9G  | 1.3G   | 2.1G   | 0.5G       |
+-----------+---------+---------+-----------+-----------+---------+-------+--------+--------+------------+
| 5         | 2GB     | 4GB     | yes       | 0.6GiB    | 14.6%   | 3.9G  | 0.21G  | 3.3G   | 0.4G       |
+-----------+---------+---------+-----------+-----------+---------+-------+--------+--------+------------+
| 6         | 1GB     | 2GB     | yes       | 0.47GiB   | 23%     | 1.9G  | 0.12G  | 1.5G   | 0.36G      |
+-----------+---------+---------+-----------+-----------+---------+-------+--------+--------+------------+
| 7         | 6GB     | 6GB     | no        | 5.33GiB   | 88.9%   | 5.8G  | 1.8G   | 1.7G   | 2.3G       |
+-----------+---------+---------+-----------+-----------+---------+-------+--------+--------+------------+
| 8         | 8GB     | 10GB    | yes       | 4.39GiB   | 43.9%   | 9.8G  | 3.8G   | 5.4G   | 0.66G      |
+-----------+---------+---------+-----------+-----------+---------+-------+--------+--------+------------+
| 9         | 4GB     | 4GB     | no        | 1.13GiB   | 28.3%   | 3.9G  | 0.56G  | 2.8G   | 0.47G      |
+-----------+---------+---------+-----------+-----------+---------+-------+--------+--------+------------+
| 10        | 2GB     | 4GB     | yes       | 0.5GiB    | 12.2%   | 3.9G  | 0.15G  | 3.4G   | 0.35G      |
+-----------+---------+---------+-----------+-----------+---------+-------+--------+--------+------------+
| 11        | 2GB     | 4GB     | yes       | 0.45GiB   | 10.9%   | 3.9G  | 0.1G   | 3.4G   | 0.35G      |
+-----------+---------+---------+-----------+-----------+---------+-------+--------+--------+------------+
| 12        | 2GB     | 4GB     | yes       | 0.73GiB   | 17.7%   | 3.9G  | 0.32G  | 3.1G   | 0.41G      |
+-----------+---------+---------+-----------+-----------+---------+-------+--------+--------+------------+
| 13        | 1GB     | 2GB     | yes       | 0.5GiB    | 24.6%   | 1.9G  | 0.12G  | 1.5G   | 0.39G      |
+-----------+---------+---------+-----------+-----------+---------+-------+--------+--------+------------+
| 14        | 2GB     | 4GB     | yes       | 0.62GiB   | 15.2%   | 3.9G  | 0.23G  | 3.2G   | 0.4G       |
+-----------+---------+---------+-----------+-----------+---------+-------+--------+--------+------------+
|           |         |         |           |           |         |       |        |        |            |
+-----------+---------+---------+-----------+-----------+---------+-------+--------+--------+------------+
| Sum       | 50GB    | 68GB    |           | 30.97GiB  | 39.95%  | 66.2G | 17.51G | 37.33G | 11.99G     |
+-----------+---------+---------+-----------+-----------+---------+-------+--------+--------+------------+

"free -h" on proxmox host:
Code:
 free -h
              total        used        free      shared  buff/cache   available
Mem:           62Gi        43Gi        13Gi        62Mi       6.1Gi        18Gi
Swap:          61Gi       1.0Mi        61Gi

Does anyone know why the Proxmox GUI is showing VM Nr. 4's RAM usage as "3.48GiB and 87%" when "free -h" is showing "used=1.3G, free=2.1G, buff/cache=0.5G"? Some calculation difficulty because of KSM running?
 
That is caused by KSM and memory ballooning; on that system you are really using nearly 56GB of RAM. Also, if there is no data transfer at that moment, the ARC won't fill the full 8GB of RAM: options zfs zfs_arc_max means that if the ARC needs memory, it may use up to that maximum value.

Don't fear swap if it is SSD/NVMe based. For a swap area the most important point is IO, and SSD/NVMe IO performance is enough for swap.
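
To see whether swapping is actually hurting, watching the si/so columns (pages swapped in and out per second) is usually enough:
Code:
# print memory and swap statistics every second;
# constant non-zero si/so values mean the system is actively thrashing
vmstat 1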
 
