Memory question

CasparL

Member
May 26, 2022
Hi all,

I have been trying to find information about this, but so far I have not been able to find an answer that explains my question. I will try to explain it in as much detail as possible.

At the moment I have Proxmox running on a machine with 64 GB of RAM; the total RAM assigned to VMs is 52 GB.

When I start machines with dedicated memory (no ballooning allowed), for example 8 GB, they use more than 1.5 times that memory. In this case, a VM assigned 8 GB is using 12.8 GB.

If I look at the VMs that do have ballooning and compare what is used to what is assigned, it is also close to 1.5 times the current memory.

Now my main question is: is it correct that this is close to 1.5 times the assigned memory (50% overhead seems like quite a lot to me), and where does it come from? Because if it is, then running all my machines at once would take around 78 GB of memory, my entire plan for all my VMs would be out of the window, and I would need a lot more memory than I expected.

I understand that the host system also needs some memory, but I had assumed that I would be able to run about 50-55 GB of RAM worth of VMs without needing more. Is there something I need to do to make sure I won't need that much overhead, or what should I look out for?

Relevant information:
- 1 TB NVMe
- 16 thread processor
- 64 GB RAM
- ZFS (recommended) installation

Thanks in advance!

Edit:
I did some adjusting based on this thread: https://forum.proxmox.com/threads/proxmox-6-x-consumes-more-memory-than-assigned-using-zfs.79520/
What I did was disable KSM, and it looks like this is freeing up a lot of memory. Might this be the reason, or am I looking at things wrong? I have been using about 20 GB less RAM now than I did before.

If KSM is causing such an overhead, would that not actually be worse and defeat its own purpose?
 
I see between 7% (of 32G) and 50% (of 2.5G) larger virtual address space, but the actual resident memory is only 4% extra. How are you measuring the memory usage?

KSM should do the exact opposite of what you are saying. With KSM, you can have more in memory than the actual memory because some of it is shared. Maybe this is the effect you are seeing: with KSM, everything separately can use more memory, and the total of those usages is larger than the actual memory in use (and possibly larger than the physical memory) because of the sharing.
When you disable KSM, there is less memory available (and caches get dropped) and everything starts to use less memory, so the total of those usages is lower (with the same actual memory usage), and performance might also suffer because less memory can be used for caches. It depends on which number you look at (the sum of all memory allocations/usages, or the actual memory/pages in use).
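
In case it helps to see what KSM is actually merging, here is a quick sketch using the standard Linux sysfs counters (nothing Proxmox-specific, and the savings formula is only an approximation):

    # 1 = KSM is running, 0 = stopped
    cat /sys/kernel/mm/ksm/run
    # pages_shared = deduplicated pages kept, pages_sharing = pages mapped onto them
    grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing
    # approximate RAM saved = (pages_sharing - pages_shared) * page size (usually 4 KiB)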
 
I looked using free -m and it showed 7218 MB available memory; after disabling KSM it now shows 22281 MB available, even though I have since started one more machine. So it uses about 13 GB less memory according to free -m, which I find kind of strange given what you're saying.
 
Don't forget that KSM will by default only work when the host's RAM is over 80% utilization. Let's say you are at 75% (so 48GB of 64GB) and KSM isn't working, so KSM shows 0GB shared. Then you start an 8GB VM and your RAM goes up to 88% (so 56GB of 64GB). Now KSM will kick in, as the host's RAM is above 80%. It will go through the whole 64GB of RAM and deduplicate pages that are identical. Let's say this frees up 16GB of RAM and KSM shows 8GB shared. Now RAM usage is down to 40GB, or 63%, and KSM will stop working as RAM utilization is below 80%. This shared RAM will never be freed up, even if you stop the VMs that used the shared memory, because KSM isn't analyzing that RAM anymore. To actually free up the no longer needed parts of the shared RAM, you will need to bring the host's RAM utilization above 80% again so that KSM continues working.
Using KSM you sacrifice CPU and RAM performance for some saved space in RAM. So the idea is that KSM only works when you are running out of RAM, so that you are not wasting CPU/RAM performance when it isn't actually needed.
But I personally got annoyed by KSM always switching on and off, so I changed the threshold by editing the ksmtuned config file. Now my KSM is working as long as the host's RAM utilization is above 60%, and because it is always at least 70%, KSM is basically always on.
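
For reference, that threshold lives in /etc/ksmtuned.conf on a default Debian/Proxmox install (the value below is just an example): ksmtuned starts KSM once free memory drops below KSM_THRES_COEF percent, so the default of 20 means "above 80% used" and 40 means "above 60% used".

    # /etc/ksmtuned.conf
    KSM_THRES_COEF=40    # start KSM when less than 40% of RAM is free (i.e. >60% used)

    # then restart the tuning daemon so it picks up the change
    systemctl restart ksmtuned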
 
Thanks Dunuin. My "issue" is that I have been between 80-90% utilization all the time, even though (with KSM enabled) I had only about 52 GB assigned out of 64 (in total, including machines that were offline).

With the machines that were powered on, I would only use about 32 GB of RAM *maximum* without ballooning. With ballooning, I guess I would use about ~24 GB, and with KSM enabled I think I should have used about 20 GB or even less, and not the 55+ GB that was actually used in the end?

This is the part that I would like an answer to: why would this happen, and what is the cause of it? I mainly want to understand why this is happening, since with Hyper-V and VMware there might be a tiny bit of memory overhead, but definitely not 1.5 times or sometimes even more.

I am also wondering: there is no swap file on the Proxmox server. Is it recommended to leave it like this, or would you recommend a swap file of x size (based on memory, or ...)?
 
You are using ZFS. Keep in mind that ZFS will use up to 50% (so 32GB) of your host's total RAM for caching, and the ARC doesn't count as "buffer/cache" as far as Linux or free -m is concerned. So if you didn't limit the ARC size, it could be that all that RAM is simply being used by ZFS.
 
So, would you recommend limiting the ZFS memory usage, or is there a different option? I'm "kind of new" to this, but as far as I have noticed, ZFS is recommended for my system; if it uses this much RAM, though, I kind of feel like I shouldn't have gone with ZFS.
 
It uses UP TO 50%. But because the ARC isn't part of the normal Linux page cache, Linux can't drop it quickly when other processes need the memory. If a process needs a lot more RAM in a short time, it can happen that the ARC can't be freed up in time and a process gets killed because you are running out of RAM. In such a case it would be a good idea to limit the ARC size.
You can see how much RAM your ARC is currently using, as well as its minimum and maximum, by running arc_summary.
I just mention it so you don't wonder where all your RAM goes: RAM used by the ARC won't be listed as cache by free -m and won't show up as used by any process when viewing processes with top or htop.
How much RAM ZFS really needs depends on a lot of factors: for example, how fast you want your storage to be, whether you are using HDDs or SSDs, and how much raw storage all your disks have.
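
As a rough sketch of checking and limiting the ARC on a standard Proxmox/OpenZFS install (the 6 GiB limit is only an example, matching the values mentioned later in this thread):

    # show current, minimum and maximum ARC size
    arc_summary

    # /etc/modprobe.d/zfs.conf - limit the ARC to 6 GiB (values are in bytes)
    options zfs zfs_arc_max=6442450944
    options zfs zfs_arc_min=6442450943

    # rebuild the initramfs so the limit applies at boot, then reboot
    update-initramfs -u -k all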
 
Hi Dunuin,

For now I used the guides to limit it to 6 GB of RAM.
I treat it mainly as "low priority" since it's home/lab use where speed doesn't matter that much.
The configuration is 2x 1 TB NVMe drives in a ZFS RAID configuration.
min: 6442450943
max: 6442450944
 
Why the increased minimum?
From what I read, the minimum can sometimes end up above the maximum, so it was recommended to set the minimum to the maximum minus 1.
If there is a better value for this, please let me know.
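
If anyone wants to double-check what the module actually picked up, one way (just a sketch, assuming the usual OpenZFS module parameters) is to read the values back at runtime; zfs_arc_max can also be changed on the fly this way, although that change is lost on reboot:

    # values currently in effect, in bytes (0 means the built-in default)
    cat /sys/module/zfs/parameters/zfs_arc_min
    cat /sys/module/zfs/parameters/zfs_arc_max

    # change the limit at runtime (takes effect immediately, not persistent)
    echo 6442450944 > /sys/module/zfs/parameters/zfs_arc_max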
 
