Does swap have much of an impact on performance for virtual machines?

harmonyp

Member
Nov 26, 2020
I am looking to overcommit RAM by adding lots of swap to my node (2-3X the total amount of RAM) to swap out the reserved RAM that's not being used by the virtual machines.

Is this a bad idea? If so, why?
 
I expect terrible performance, so much so that I have not tried anything like this, so I cannot say for certain.
... swap out the reserved RAM that's not being used by the virtual machines.
If the memory is not used by the VMs, why overcommit? If the VMs don't all need all their allocated memory at the same time, make sure ballooning is turned on (by setting the minimum memory 2-3X lower). With ballooning the VMs can let Proxmox know when they don't need the memory and when they do (and you might still need swap to get through peaks). If you have many very similar VMs, maybe KSM can reduce memory pressure for you?
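As a sketch, that min/max ballooning setup can be done with Proxmox's `qm` CLI; VMID 100 and the sizes here are placeholders, not values from this thread:

```shell
# Hypothetical example: the guest may use up to 24 GB, but ballooning
# can reclaim memory down to a floor of 8 GB when the host is under
# pressure. Sizes are in MiB; VMID 100 is a placeholder.
qm set 100 --memory 24576 --balloon 8192
```

The guest also needs the balloon driver (virtio_balloon) loaded for this to have any effect.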

Do you have a specific use-case in mind? Maybe someone can give advice based on your actual plans?
 
Is this a bad idea?
Yes, very much.

If so, why?
* a virtual machine's memory isn't really swappable; in practice it will just result in the OOM killer taking down your VMs
* memory overcommitment means there's no memory left over for the page cache, so IO will get slower
* even if it did work, you're guaranteed to cause a lot of IO as pages are constantly swapped in and out, and that will make everything slow, back-to-the-80s slow


What's your actual use case? I.e., what services do you want to provide in the VMs and why do you want to overcommit that badly?
 
I expect terrible performance, so much so that I have not tried anything like this, so I cannot say for certain.

If the memory is not used by the VMs, why overcommit? If the VMs don't all need all their allocated memory at the same time, make sure ballooning is turned on (by setting the minimum memory 2-3X lower). With ballooning the VMs can let Proxmox know when they don't need the memory and when they do (and you might still need swap to get through peaks). If you have many very similar VMs, maybe KSM can reduce memory pressure for you?

Do you have a specific use-case in mind? Maybe someone can give advice based on your actual plans?
From the tests I have run, I can see that if I allocate a virtual machine 24GB min/max RAM, once it uses the 24GB the total RSS will remain at 24GB until it is powered down. The only way around it is when swap starts kicking in and some unused RAM is returned. Ballooning is enabled, but setting the min RAM lower is not an option as it's broken in my experience: free -m shows less than the 24GB and applications run OOM inside the virtual machines.

As an example, the plan is to run 20+ virtual machines, each allocated 24GB of RAM, on a node that has only 256GB of RAM. These virtual machines will on average probably use less than 5GB of RAM, but they like to reserve (RSS) the full 24GB. Lowering the allocated RAM is not something I am looking for; the node should never reach real OOM, as real RAM usage would average only around 100GB/256GB.

I have been advised to have at least 200GB of swap to achieve this but I am still reading how swap fully works.
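For the numbers in this plan, the worst-case shortfall that swap would have to absorb is simple arithmetic (figures taken from the post above):

```shell
# 20 VMs x 24 GB allocated on a 256 GB host: if every guest touched its
# full allocation at once, this much memory would have to live in swap.
vms=20
per_vm_gb=24
host_gb=256
allocated_gb=$((vms * per_vm_gb))
shortfall_gb=$((allocated_gb - host_gb))
echo "allocated=${allocated_gb}GB worst-case swap need=${shortfall_gb}GB"
```

That gives a 224GB worst case, which is roughly where the 200GB advice lands; whether the host could actually service that much swap IO is the separate question the replies above raise.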
 
What do you mean by real RAM usage? For the hypervisor it doesn't matter if the guest fills its RAM with actual data or uses it as cache.
As pointed out, VM RAM is not really swappable. How should the hypervisor know what can safely be swapped and what had better not be?
Three people, including one staff member, advised skipping that idea. Now it's up to you to decide.
 
If you do not mind the computational overhead, you could also look into the zram module. I have had good experience with it on very limited hardware, but I only used it there as a last resort, so do not take that as a suggestion that it is production-ready/recommended.
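A minimal zram-as-swap sketch, assuming root, the in-kernel zram module and the util-linux `zramctl` tool; the 8G size and zstd algorithm are illustrative, and the device name may differ:

```shell
# Compressed swap backed by RAM: trades CPU time for less swap IO.
modprobe zram
dev=$(zramctl --find --size 8G --algorithm zstd)  # allocates e.g. /dev/zram0
mkswap "$dev"
swapon --priority 100 "$dev"   # prefer zram over any disk-backed swap
```

With a higher priority than disk swap, the kernel fills the compressed device first and only falls back to disk once it is full.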

What OS/Distro runs in the guest?

In general, the answers of avw, namely ballooning and KSM (Kernel Same-page Merging), are often of help in such cases.

I have been advised to have at least 200GB of swap to achieve this but I am still reading how swap fully works.
A key point is that not all memory is swappable. Beyond that, a more general article about swap that I personally found quite good is: https://chrisdown.name/2018/01/02/in-defence-of-swap.html
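To get a feel for what actually ended up being swappable, one quick sketch is to read the per-process `VmSwap` field that the Linux kernel exposes in /proc:

```shell
# Per-process swap usage, largest consumers first. VmSwap is in kB;
# processes may exit mid-scan, hence the stderr redirect.
grep VmSwap /proc/[0-9]*/status 2>/dev/null | sort -k2 -n -r | head
```

On a host that has never swapped, every line will just show 0 kB.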
 
Hmm, too bad ballooning is not working quickly enough. Have you tried adding 24GB of swap inside each VM and using a 24G/12G setting? The OS inside the VM can make better decisions about what to swap (or free, in the case of file cache) than the Proxmox host. And it would give the VM a way to already allocate memory before ballooning releases memory to it. This might prevent OOM inside the VMs (and just make them slow, hopefully temporarily as ballooning rebalances). Even then, 20 may be too many. It really depends on whether your applications inside the VMs can work with swap instead of real memory, and whether they all need it at the same time and/or are erratic in memory allocation.
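A sketch of adding that swap inside a guest, assuming a Linux VM with root access; the path and size are illustrative, and fallocate works on most but not all filesystems:

```shell
# Create and enable a 24 GB swap file inside the VM.
fallocate -l 24G /swapfile
chmod 600 /swapfile            # swap files must not be world-readable
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots
```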
 
What do you mean by real RAM usage? For the hypervisor it doesn't matter if the guest fills its RAM with actual data or uses it as cache.
As pointed out, VM RAM is not really swappable. How should the hypervisor know what can safely be swapped and what had better not be?
Three people, including one staff member, advised skipping that idea. Now it's up to you to decide.
As in, if I were to run free -m inside the VM it would show 5GB used and 19GB free, and 5/24GB on the Proxmox GUI graph. Would it not know that with ballooning enabled? The guide says

"Memory ballooning (KVM only) allows you to have your guest dynamically change it’s memory usage by evicting unused memory during run time. It reduces the impact your guest can have on memory usage of your host by giving up unused memory back to the host."

The only time I see RAM being returned is if min is set lower than max or if swap kicks in on the node.

If you do not mind the computational overhead, you could also look into the zram module. I have had good experience with it on very limited hardware, but I only used it there as a last resort, so do not take that as a suggestion that it is production-ready/recommended.

What OS/Distro runs in the guest?

In general, the answers of avw, namely ballooning and KSM (Kernel Same-page Merging), are often of help in such cases.


A key point is that not all memory is swappable. Beyond that, a more general article about swap that I personally found quite good is: https://chrisdown.name/2018/01/02/in-defence-of-swap.html
I did take a quick look into zram but I don't understand it. Will continue to read.

I have KSM enabled already but it's not enough.

The virtual machines are Linux based

I know not all the memory can be swapped, but I do run echo 3 > /proc/sys/vm/drop_caches frequently inside the virtual machines.

Hmm, too bad ballooning is not working quickly enough. Have you tried adding 24GB of swap inside each VM and using a 24G/12G setting? The OS inside the VM can make better decisions about what to swap (or free, in the case of file cache) than the Proxmox host. And it would give the VM a way to already allocate memory before ballooning releases memory to it. This might prevent OOM inside the VMs (and just make them slow, hopefully temporarily as ballooning rebalances). Even then, 20 may be too many. It really depends on whether your applications inside the VMs can work with swap instead of real memory, and whether they all need it at the same time and/or are erratic in memory allocation.
I have tried giving it a small amount of swap, but the swap inside the virtual machine never gets used, for the reasons above (usage is low, caches are dropped).
 
Sounds like you are running Linux in those VMs; why not use containers? They use a little less memory, and the host is fully aware of their memory usage and can make better choices about dropping caches or swapping. If swapping inside the VM does not even work, then swapping on the host is going to be even worse: the host does not know the purpose of the memory it is swapping, and the VM is under the mistaken conviction that it has all the real memory in the world. Once you need to drop caches all the time, I expect you will never have happy VM users.
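For comparison, a hypothetical container via Proxmox's `pct` CLI, where the memory limit is a cgroup limit the host itself enforces; the CTID, template name and sizes are all placeholders:

```shell
# Unlike a VM's opaque allocation, container memory and swap limits are
# controlled directly by the host kernel. All identifiers are examples.
pct create 200 local:vztmpl/debian-11-standard_11.0-1_amd64.tar.gz \
    --memory 5120 --swap 2048 --hostname example-ct
```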
 
As in, if I were to run free -m inside the VM it would show 5GB used and 19GB free, and 5/24GB on the Proxmox GUI graph. Would it not know that with ballooning enabled? The guide says

"Memory ballooning (KVM only) allows you to have your guest dynamically change it’s memory usage by evicting unused memory during run time. It reduces the impact your guest can have on memory usage of your host by giving up unused memory back to the host."
You cited it yourself. Where do you read "the hypervisor knows about the guest's RAM layout"?

The only time I see RAM being returned is if min is set lower than max or if swap kicks in on the node.
Works as intended, I would say.

I have tried giving it a small amount of swap, but the swap inside the virtual machine never gets used, for the reasons above (usage is low, caches are dropped).
I would rather attach only half the RAM to the guests and grant them some swap than trying to swap on hypervisor level.
 
I will give an example; maybe I have it all wrong. Let's say I create 10 virtual machines and, for a few brief moments, each uses its full 24GB of RAM with some sort of benchmark tool, scoring 15GB/s. Now that the full 24GB has been used, the node's total RAM usage would remain at 240/256GB until those machines are fully turned off and on again. This is not acceptable, as now that they have finished running their tests they are each using less than 1GB. Swap will kick in and reduce the node's total RSS.

Now, if swappiness was set really high, for example vm.swappiness=90, I should end up with about 215GB in swap. If in that scenario I ran the benchmark tests again on 2-3 of the virtual machines, would I get a much worse result, as the memory that was committed to them would now be swapped out?
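Setting swappiness as described might look like this (a sketch; note that swappiness tunes how willing the kernel is to swap, not how much swap exists, and 90 is aggressive compared to the default of 60):

```shell
# Apply at runtime and persist across reboots (requires root).
sysctl vm.swappiness=90
echo 'vm.swappiness = 90' > /etc/sysctl.d/99-swappiness.conf
```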

Sounds like you are running Linux in those VMs; why not use containers? They use a little less memory, and the host is fully aware of their memory usage and can make better choices about dropping caches or swapping. If swapping inside the VM does not even work, then swapping on the host is going to be even worse: the host does not know the purpose of the memory it is swapping, and the VM is under the mistaken conviction that it has all the real memory in the world. Once you need to drop caches all the time, I expect you will never have happy VM users.
I do not want to use LXC; I know it would be better, but for me it's not an option. But doesn't this already happen once swap kicks in? I am just doing it more frequently.
 
