We have the same situation after upgrading our 3-node cluster from 6.4 to 7.4 and adding 128GB of RAM to node 1. Is there any solution? We changed vm.swappiness to 10 via sysctl.
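For completeness, this is roughly how we applied it (assuming the standard sysctl.d mechanism on Debian-based PVE; the filename is our own):
Code:
# /etc/sysctl.d/99-swappiness.conf
vm.swappiness = 10

# apply without rebooting
sysctl -p /etc/sysctl.d/99-swappiness.conf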
It seems to me like there is an excessive amount of swap configured. With anything above ~16GB of RAM at the host level, you don't want your server to swap much at all.
Here's my /etc/rc.local on a 16GB-RAM host with zram enabled plus 2GB of LVM swap:
Code:
# Swap only when absolutely necessary
echo 1 > /proc/sys/vm/swappiness
# Try to keep at least 100MB of free RAM at all times
echo 100000 > /proc/sys/vm/min_free_kbytes
# Default is 100 - try more aggressively to reclaim inodes, etc. from cache
echo 160 > /proc/sys/vm/vfs_cache_pressure
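To check the values actually took effect after boot, a plain sysctl read works (the outputs shown as comments reflect the settings above):
Code:
sysctl vm.swappiness vm.min_free_kbytes vm.vfs_cache_pressure
# vm.swappiness = 1
# vm.min_free_kbytes = 100000
# vm.vfs_cache_pressure = 160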
I'm also limiting ZFS ARC usage to 1573741824 bytes (~1.5GB), and have LVM + ext4 root.
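For anyone wanting to replicate the ARC cap: on PVE this is normally done with a ZFS module option (the value below matches mine; the runtime path is shown as an alternative to rebooting):
Code:
# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=1573741824

# rebuild the initramfs so the option applies at boot
update-initramfs -u
# or apply at runtime without rebooting:
echo 1573741824 > /sys/module/zfs/parameters/zfs_arc_max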
Right now I have 5 VMs and 2 containers running, and only 54MB of the 4GB of swap is in use:
Code:
swapon -s; free -h
Filename        Type        Size     Used    Priority
/dev/dm-0       partition   2097148  0       -2
/dev/zram0      partition   2097148  55296   10

               total        used        free      shared  buff/cache   available
Mem:            15Gi        11Gi       2.4Gi       48Mi       1.7Gi       3.7Gi
Swap:          4.0Gi        54Mi       3.9Gi
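The priority column explains the ordering: the kernel fills the higher-priority zram device (10) before touching the LVM swap partition (-2). If anyone wants to reproduce the zram part, a minimal sketch on Debian/PVE would be zram-tools; the config keys below are from memory, so double-check /etc/default/zramswap on your system:
Code:
apt install zram-tools

# /etc/default/zramswap
ALGO=lz4
SIZE=2048        # MiB
PRIORITY=10

systemctl restart zramswap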