> I use 93% in ram and use 10 vm

Hi,
can you also show your RAM usage and maybe give some more details, such as how many VMs/CTs are running and how much memory they use? Also, do you use ZFS?

Generally you could try to reduce "swappiness". This will make the kernel use swap space less aggressively. You can do this with the

sysctl -w vm.swappiness=10

command. Some more information about the "swappiness" values and their significance is available in the manual [1].

[1]: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#zfs_swap
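To make the setting survive a reboot, a sysctl drop-in file can be used; the commands below are a sketch, and the drop-in file name is just an example:

```shell
# Show the current value (the kernel default is 60)
sysctl vm.swappiness

# Change it at runtime only
sysctl -w vm.swappiness=10

# Persist it across reboots via a sysctl drop-in file
echo 'vm.swappiness = 10' > /etc/sysctl.d/85-swappiness.conf
sysctl --system   # reload all sysctl configuration files
```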
> Is there a solution to this?

This depends on what you want:
- buy more RAM
- configure more RAM
- run fewer VMs, or run them with less RAM each.
> can't buy more ram I have 128 GB

Why not? Is this an entry-level Xeon with a max of 128 GB? Then go and buy mid-tier server hardware; there the maximum amount of RAM is 4-8 TB.
> Someone told me to enable zram. Is this right?

That will help, of course. But in the end, your machine is not big enough for running your workload. Increasing swap will not increase performance.
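For reference, a minimal manual zram swap setup looks roughly like the sketch below; the 8G size and the zstd algorithm are just examples, and Debian also packages this as zram-tools:

```shell
# Load the zram module and configure the first compressed RAM-backed device
modprobe zram
echo zstd > /sys/block/zram0/comp_algorithm   # must be set before disksize
echo 8G   > /sys/block/zram0/disksize         # uncompressed size (example)

# Use it as high-priority swap so it is preferred over disk-backed swap
mkswap /dev/zram0
swapon -p 100 /dev/zram0
```

Note that zram trades CPU time (compression) for RAM, so it only buys headroom; it does not add real memory.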
> Would it help to enable KSM sharing ?
> It will take a hit on the CPU, but also it will "free" some RAM + Swap.

I think this will only buy the author of this thread some time.
> All my nodes swap, even with tons of RAM available.

I usually set the swappiness to 0. Then it won't swap at all, except when it would otherwise need to kill guests. So: no swap performance penalty and no additional SSD wear, but still OOM protection.
> Would it help to enable KSM sharing ?
> It will take a hit on the CPU, but also it will "free" some RAM + Swap.

That also really depends on the use case. When you can't trust the guests, it's recommended to disable KSM, as KSM will weaken the isolation. And how much KSM will save also really depends on the running guests. Without ZFS (as the host's ARC and the guests' page caches often cache the same data in RAM), or with a lot of VMs running totally different services or OSs, the RAM savings might not be that high.
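Whether KSM is actually saving anything can be checked via sysfs (on Proxmox a ksmtuned-style daemon typically manages KSM, so these counters just report what it is doing); a rough sketch, assuming 4 KiB pages:

```shell
# KSM statistics live in /sys/kernel/mm/ksm/
# pages_sharing / pages_shared ~ how many duplicates each shared page replaces
grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing

# Rough estimate of RAM saved (pages_sharing * 4 KiB page size, in MiB)
echo "$(( $(cat /sys/kernel/mm/ksm/pages_sharing) * 4096 / 1024 / 1024 )) MiB saved"
```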
> I usually set the swappiness to 0. Then it won't swap at all, except when it would otherwise need to kill guests. So: no swap performance penalty and no additional SSD wear, but still OOM protection.

I set it to 10 on my nodes.
> I set it to 10 on my nodes.

Are you using ZFS?
> Are you using ZFS?

I am, and have it capped to 16G max on ARC.

The ARC usage does not show up in the UI, hence you might have less RAM available than is obvious.
> The ARC usage does not show up in the UI, hence you might have less RAM available than is obvious.

I always thought that the ARC was unfortunately counted as used memory, in contrast to the normal Linux filesystem cache.
> I am, and have it capped to 16G max on ARC.

That can easily be checked with

arc_summary

in which case I don't have an answer for your observation, except if the limit is not correctly honored (had this once...).

arc_summary:

ARC size (current):      < 0.1 %   17.5 KiB
Target size (adaptive):   73.8 %   11.8 GiB
Min size (hard limit):    50.0 %    8.0 GiB
Max size (high water):     2:1     16.0 GiB

About 2% of your memory is in swap and IO delay is low, so maybe there is no problem? Some memory has been swapped out, but your system does not appear to be heavily reading from swap. Therefore it looks like swap just gave you 8 more GB to use for other stuff. Are you experiencing problems (besides the red color of the 100% swap usage)?
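Since capping the ARC came up: on current OpenZFS the limit can be set at runtime and persisted via a module option. A sketch using 16 GiB to match the post above; if the root filesystem is on ZFS, the initramfs usually needs to be refreshed as well:

```shell
# Runtime: cap the ARC at 16 GiB (the parameter is in bytes)
echo $((16 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max

# Persistent: module option, applied on every boot
echo 'options zfs zfs_arc_max=17179869184' > /etc/modprobe.d/zfs.conf
update-initramfs -u   # needed when the root filesystem is on ZFS

# Check the current ARC size (the "size" row, in bytes)
awk '/^size/ {print $3}' /proc/spl/kstat/zfs/arcstats
```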
> No problems, yet, just can't explain why it is happening when it shouldn't be.

How much it swaps is a complicated interaction between the processes and VMs that are running, their memory allocations, file I/O, and probably other factors (some of which can be tuned).
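To tell "cold pages parked in swap" apart from active thrashing, the pswpin/pswpout counters in /proc/vmstat can be sampled over a short window; sustained growth means the host is really reading from and writing to swap:

```shell
#!/bin/sh
# Sample the kernel's swap activity counters over a 5-second window.
# pswpin/pswpout in /proc/vmstat count pages swapped in/out since boot.
before_in=$(awk '/^pswpin/ {print $2}' /proc/vmstat)
before_out=$(awk '/^pswpout/ {print $2}' /proc/vmstat)
sleep 5
after_in=$(awk '/^pswpin/ {print $2}' /proc/vmstat)
after_out=$(awk '/^pswpout/ {print $2}' /proc/vmstat)
echo "pages swapped in:  $((after_in - before_in))"
echo "pages swapped out: $((after_out - before_out))"
```

If both deltas stay near zero, the swapped-out memory is idle and the 100% swap usage is cosmetic rather than a performance problem.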