"... swap out the reserved RAM that's not being used by the virtual machines."
"Is this a bad idea?"

Yes, very much.

"If so why."

Virtual machine memory isn't really swappable; in practice it will just result in the OOM killer taking down your VMs.
"From the tests I have run, I can see that if I allocate a virtual machine 24GB min/max RAM, once it uses the 24GB the total RSS memory will remain at 24GB until it is powered down. The only way around this is when swap starts kicking in and it returns some unused RAM. Ballooning is enabled, but setting the min RAM lower is not an option, as it is broken in my experience: free -m shows less than the 24GB and applications run OOM inside the virtual machines."

I expect terrible performance, so much so that I have not tried something like this, so I cannot say for certain.
If the memory is not used by the VMs, why overcommit? If the VMs don't all need all their allocated memory at the same time, make sure ballooning is turned on (by setting the minimum memory 2-3X lower). With ballooning the VMs can let Proxmox know when they don't need the memory and when they do (and you might still need swap to get through peaks). If you have many very similar VMs, maybe KSM can work for you to reduce memory pressure?
Do you have a specific use-case in mind? Maybe someone can give advice based on your actual plans?
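For reference, the max/min split for ballooning lives in the VM's config file; a sketch (the VMID path and the 24G/12G values are just examples, not a recommendation):

```
# excerpt from /etc/pve/qemu-server/<vmid>.conf (sizes are examples)
# maximum RAM in MiB:
memory: 24576
# ballooning minimum in MiB:
balloon: 12288
```

The same can be set from the CLI with something like `qm set <vmid> --memory 24576 --balloon 12288`, or via the GUI's memory settings for the VM.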
I have been advised to have at least 200GB of swap to achieve this, but I am still reading up on how swap fully works.
"As in, if I was to run free -m inside the VM it would show 5GB usage and 19GB free, and 5/24GB on the Proxmox GUI graph. Would it not know that by ballooning being enabled? The guide says"

What do you mean with real RAM usage? For the hypervisor it doesn't matter whether the guest fills its RAM with actual data or uses it as cache.
As pointed out, VM RAM is not really swappable. How should the hypervisor know what can safely be swapped and what would better not be swapped?
Three people, including one staff member, advised you to skip that idea. Now it's up to you to decide.
If you do not care about the computational overhead, then you could also look into using the zram module. I made some good experience with it on very limited HW, but I only used it as a last resort there, so do not take that as a suggestion that it is production-ready/recommended.

I did take a quick look into zram but I don't understand it. Will continue to read.
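For context, one low-friction way to experiment with zram is the zram-generator package (an assumption: it is not part of a default install and has to be added); it turns a small config file into a compressed swap device at boot:

```
# /etc/systemd/zram-generator.conf (assumes the zram-generator package is installed;
# values are illustrative)
[zram0]
# compressed swap device sized at half of RAM, capped at 8 GiB (units are MiB)
zram-size = min(ram / 2, 8192)
compression-algorithm = zstd
```

The trade-off is the one mentioned above: CPU time spent compressing pages in exchange for effectively more RAM-backed swap.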
What OS/Distro runs in the guest?
In general, avw's answers, namely ballooning and KSM (Kernel Same-Page Merging), are often of help in such cases.
A key point is that not all memory is swappable. Besides that, a more general article about swap which I personally found quite OK is: https://chrisdown.name/2018/01/02/in-defence-of-swap.html
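If you do end up relying on swap on the host, note that the kernel's eagerness to swap is tunable via sysctl; a sketch (the value 10 is only an illustration, not a recommendation):

```
# /etc/sysctl.d/99-swappiness.conf (example value)
vm.swappiness = 10
```

This is applied at boot, or immediately with `sysctl --system`.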
Hmm, too bad ballooning is not working quickly enough. Have you tried adding 24GB of swap inside each VM and using a 24G/12G setting? The OS inside the VM can make better decisions about what to swap (or free, in the case of file cache) than the Proxmox host. And it would give the VM a way to already allocate memory before ballooning releases memory to it. This might prevent OOM inside the VMs (and just make them slow, hopefully temporarily as ballooning rebalances). Even then, 20 may be too much. It really depends on whether your applications inside the VMs can work with swap instead of real memory, and whether they all need it at the same time and/or are erratic in memory allocation.

I have tried giving it a small amount of swap, but the swap inside the virtual machine never gets used, for the reasons above (usage is low, drop cache).
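The "swap inside each VM" suggestion can be sketched as follows (a hedged example: the path and 64M size are placeholders to keep it harmless; a real guest would use something like a 24G /swapfile, and the commented-out steps require root inside the guest):

```shell
# Sketch: creating a swapfile inside a guest (path and size are examples).
dd if=/dev/zero of=/tmp/swapfile bs=1M count=64   # allocate the file
chmod 600 /tmp/swapfile                           # swap files must not be world-readable
mkswap /tmp/swapfile                              # write the swap signature
# Activating and persisting it needs root:
# swapon /tmp/swapfile
# echo '/swapfile none swap sw 0 0' >> /etc/fstab
```

Note that a swapfile only helps here if the guest's workload actually has cold pages to evict; as discussed above, if guest usage is genuinely low, it may simply never be used.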
"Memory ballooning (KVM only) allows you to have your guest dynamically change its memory usage by evicting unused memory during run time. It reduces the impact your guest can have on memory usage of your host by giving up unused memory back to the host."

You cited it yourself. Where do you read "the hypervisor knows about the guest's RAM layout"?
"The only time I see RAM being returned is if min is set lower than max or if swap kicks in on the node."

Works as intended, I would say.
I would rather attach only half the RAM to the guests and grant them some swap than try to swap at the hypervisor level.
Sounds like you are running Linux in those VMs; why not use containers? They use a little less memory, and the host is fully aware of memory usage and can make better choices about dropping caches or swapping. If swapping inside the VM does not even work, then swapping on the host is going to be even worse: the host does not know the purpose of the memory it is swapping, and the VM has the mistaken conviction that it has all the real memory in the world. Once you need to go dropping caches all the time, I expect that you will never have happy VM users.

I do not want to use LXC; I know it would be better, but for me it's not an option. But doesn't this happen already once swap kicks in? I am just doing it more frequently.