Newbie question on RAM overprovisioning (8 VMs, 64 GB RAM)

Do you happen to have a monitoring tool like PRTG, Prometheus, Zabbix or Icinga installed on the VMs? If not, it might be worth checking how much RAM each of the VMs actually uses over a period of time. If it turns out that most of the time a VM doesn't actually use that much RAM, it might (just an idea, I haven't tested it) be worth setting the amount to a small default value and raising and lowering it around the daily task (given you can narrow down its timeframe). As far as I know, raising and lowering RAM should be possible without powering down the VMs. I might be wrong on that, though.
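
For what it's worth, on the Proxmox side that could look roughly like this (a sketch only: VM ID 101 and the sizes are made up, and it assumes ballooning is enabled for the VM and the virtio balloon driver is loaded in the guest):

    # Lower the ballooning floor while the VM idles; the host may then
    # reclaim guest RAM down to this value (the configured max stays unchanged).
    qm set 101 --balloon 2048
    # Raise the floor again before the daily calculation window.
    qm set 101 --balloon 16384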
Zabbix is on the current machines. What you see on the screenshot (VMs) is a clean Windows install with no programs running, only idling. The root cause of all this is that the software is of poor quality; if there is any option to reasonably 'command' it, I will use something similar to what you are mentioning. In Zabbix I can clearly see that the machines usually sit around 2 GB (lowest recorded 1.7 GB, highest 3.7 GB); the calculation always needs 14 to 16 GB, usually for about 5 minutes, but it can be anywhere from 5 seconds to 20 minutes.
 

Is there a way to predict when the software will run the memory-hungry tasks, or to run more than one instance of the software on one VM? Another possibility might be to set up another Proxmox server to split the load, but since you mentioned your trouble getting more RAM, this is probably not a feasible option.

Did you already try to create swap files in the VM? I'm not sure what it's called on Windows (I only know the German name "physikalische Auslagerungsdatei", i.e. the page file), but this should work, at the cost of performance. On the other hand, I remember I already wrote this in an earlier reply, so never mind ;)
 
Can someone experienced with Proxmox please let me know whether allocating memory based on configuration (NOT on real usage) is the intended behavior of Proxmox, or whether I have misconfigured my installation?
Proxmox allocates the memory based on the (maximum) memory configuration of the VM. So you'd better have enough swap space if that is more than the physical memory.
Whether all memory needs to be paged into actual RAM depends on the OS inside the VM and whether it accesses all memory during boot.
It also depends on how soon the balloon driver gets loaded and whether ballooning is activated (due to the Proxmox host being over the configurable ballooning threshold).
It also depends on whether KSM is active (the Proxmox host being over the configurable memory threshold) and how quickly it can merge memory pages (like the ones with only zeros that Windows tends to create early on).

Make sure you have enough virtual memory/swap space. Set the KSM threshold low (and maybe increase its frequency) and start the VMs one at a time to see if KSM helps. I think you just have to try this and tweak things as you go along.
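
On a Proxmox host those knobs live in /etc/ksmtuned.conf. A minimal sketch, with illustrative values rather than recommendations:

    # /etc/ksmtuned.conf (illustrative values)
    KSM_MONITOR_INTERVAL=20   # re-evaluate memory pressure every 20 s instead of 60 s
    KSM_SLEEP_MSEC=25         # shorter sleep between scan batches = more aggressive merging
    KSM_THRES_COEF=50         # start merging earlier, once free RAM drops below 50 %

    systemctl restart ksmtuned   # apply the changes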
 
@Johannes S There is no way to predict it; the software does not create any notifications, it can only be seen from the high memory usage. Swap files in the VM would result in a situation where EACH calculation runs from swap. That is an improvement over the whole VM running from swap 24/7, but still significantly worse than independent hardware.

@leesteken For my use case this is not an ideal solution, but thanks anyway for the helpful explanation. It is still hard for me to understand that memory is so 'wasted': unused memory stays allocated while needed memory is on swap, turning the VMs into slideshows. I modified KSM_MONITOR_INTERVAL to 20 (was 60) and KSM_SLEEP_MSEC to 25 (was 100); KSM started picking up some pages, but we are talking about ~100 to 200 MB... better than 0, but it cannot change the overall situation.
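
(For reference, the KSM counters can be read directly from sysfs on the host; this assumes the usual 4 KiB page size:)

    # Pages currently deduplicated by KSM
    cat /sys/kernel/mm/ksm/pages_sharing
    # Approximate savings in MiB, assuming 4 KiB pages
    echo $(( $(cat /sys/kernel/mm/ksm/pages_sharing) * 4 / 1024 )) MiB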

Basically it seems that for the current situation sticking to physical hardware is the best option.

Personally I'm somewhat surprised that other resources (CPU, HDD) are assigned on demand (e.g. you can create a 100 GB HDD, but when you store 10 GB, only 10 GB is really used on the host), while RAM is assigned on configuration regardless of real usage. It is really hard for me to understand why the same on-demand logic is not used for RAM. In my eyes RAM is the most valuable resource, because non-server boards allow only small amounts of RAM to be installed, compared to, for example, H(S)DDs, where you can have almost unlimited storage even on a small office PC; and even if the PC does not allow more, storage can be attached via network, USB...

Was (or will) a feature like 'RAM allocation on demand' ever (be) considered? Is there a possibility that it will be added to the Proxmox feature set?

For my solution I will not use Proxmox for now; currently the best option for me is to stick to the physical hardware and separate nodes. Maybe I will try ESXi on one new system, but I don't have big hopes for it, because it does not like non-server hardware.
 

Understood. Did you try zramswap to get more out of your existing RAM? https://pve.proxmox.com/wiki/Zram#Alternative_Setup_using_zram-tools

It's not optimal, but it should at least help you utilize the existing resources better. Personally I would try to get the budget for a RAM hardware upgrade, but I understand that this isn't easy for you at the moment.
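
Roughly, the zram-tools variant from the linked wiki page looks like this (a sketch; the ALGO and PERCENT values are illustrative):

    apt install zram-tools
    # /etc/default/zramswap:
    #   ALGO=zstd      # compression algorithm for the zram device
    #   PERCENT=25     # size the compressed swap device at 25 % of RAM
    systemctl restart zramswap
    swapon --show      # the zram swap device should now be listed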

Personally I'm somewhat surprised that other resources (CPU, HDD) are assigned on demand (e.g. you can create a 100 GB HDD, but when you store 10 GB, only 10 GB is really used on the host), while RAM is assigned on configuration regardless of real usage. It is really hard for me to understand why the same on-demand logic is not used for RAM. In my eyes RAM is the most valuable resource, because non-server boards allow only small amounts of RAM to be installed, compared to, for example, H(S)DDs, where you can have almost unlimited storage even on a small office PC; and even if the PC does not allow more, storage can be attached via network, USB...
I'm not quite sure, so I used Google with site:forum.proxmox.com to look for some insights. In the following German thread @Dunuin made some interesting points: https://forum.proxmox.com/threads/ramverbrauch-bleibt-konstant.107279/post-461391

I used DeepL to translate it. Please note that I'm not a KVM expert at all, so I don't know whether everything is correct, but his explanation fits what I know about the way the Linux kernel and QEMU handle memory management:
You can use ballooning, in which case the host retrieves RAM from the VM. However, ballooning does not care whether a VM urgently needs the RAM or not. If you have set 8 GB max RAM and 4 GB min RAM for a VM, and the VM is currently using 5 GB for system/user processes and 3 GB for caching, then ballooning will slowly take RAM away until the VM is down to 4 GB. First the VM will empty the 3 GB of cache, but after that is emptied ballooning will not stop and will take another GB of RAM. The VM then needs 1 GB more than it has, and as it no longer has any caches that could be discarded, it has to kill processes (because of OOM) until 1 GB has become free.

Conclusion: RAM overprovisioning does not really work. You should not allocate more RAM to the guests than the host actually has available. And so that the guests do not waste RAM unnecessarily on caching, it is best to allocate only as much RAM to the guests as they need to run.
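
In Proxmox terms, the 8 GB max / 4 GB min example above maps to the VM's memory and balloon settings; a sketch (VM ID 100 is illustrative):

    # 8 GiB maximum, host may balloon the guest down to 4 GiB
    qm set 100 --memory 8192 --balloon 4096
    # Alternative: ballooning disabled, fixed 8 GiB allocation
    qm set 100 --memory 8192 --balloon 0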



And another posting from Dunuin:

VMs run via KVM and the KVM process starts with a small memory footprint which corresponds to the guest system. However, KVM never seems to release RAM. The KVM process can therefore only grow when the guest uses more RAM, but never shrink again when the guest no longer needs the RAM.

If you don't give your guests more RAM than your server actually has, then it doesn't matter if the RAM is always full after some time. This only bothers you if you want RAM overprovisioning. I would not allocate more than 56 GB or 52 GiB RAM to the guests in total with your hardware (possibly 5-10GB more, depending on how much RAM KSM deduplicates for you). If you give your guests more RAM in total, then you are overprovisioning and run the risk of the OOM killer killing VMs.

And if a VM does not use all of its RAM for system/user processes, then the RAM is not directly wasted. Then the VM can still use the RAM for caching, which then increases performance.
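
A quick way to sanity-check this on one's own node is to compare the summed memory/balloon settings of all VMs against the physical RAM; a sketch:

    # Configured memory/balloon values of every VM on this node
    for id in $(qm list | awk 'NR>1 {print $1}'); do
        echo "VM $id:"; qm config "$id" | grep -E '^(memory|balloon):'
    done
    free -g   # compare the total against the host's physical RAM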


Please note that the OP in that thread had 64 GB RAM available (so you know where Dunuin's numbers come from). So in the end the issue is basically that the Linux kernel Out-Of-Memory killer (OOM killer) will start to kill processes that use a lot of RAM once no RAM is available any more (and on a Proxmox VE host, VMs with large RAM allocations are obviously prime candidates for the kill list). You can stretch this a little with zramswap, KSM, swap, etc., but in the end nothing beats RAM except more RAM.
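
Whether the OOM killer has already struck can be checked in the kernel log, for example:

    # On the host: look for OOM-killer activity
    journalctl -k | grep -i 'out of memory'
    dmesg | grep -iE 'oom-killer|killed process'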

Was (or will) a feature like 'RAM allocation on demand' ever (be) considered? Is there a possibility that it will be added to the Proxmox feature set?

Your best bet would be to create a feature request at https://bugzilla.proxmox.com, although I fear that due to the way Linux memory management works there isn't much the Proxmox team can do about this. I might be wrong though (as said, I'm just a sysadmin; I have no skills in kernel or systems programming). At least it's more likely you will get feedback from a staff member there ;)
 
I also found another thread in the English forum with a statement from a former staff member:
 
@Johannes S Thank you! It is a really helpful source of information. The part about zramswap is interesting and I will investigate it. The other parts basically confirm what I have seen on my test install. The good news is that everything works as designed; the bad news is that it is not suitable for my use case :)
 