[SOLVED] Proxmox - RAM usage

listhor

New Member
Nov 14, 2023
How much RAM does PVE 8 in theory require for its own usage? It runs on a 1 TB SSD with ZFS. If I remember correctly, the rule of thumb says 1 TB of storage equals 1 GB of RAM usage for a ZFS-based system...
On a server with 32 GB of RAM and the drive mentioned above, I have Proxmox 8 installed with 3x Ubuntu VMs and PBS:
- VM1 with 14GB RAM
- VM2 with 6GB RAM
- VM3 with 7.5GB RAM
- PBS with 2GB RAM

KSM does a good job and shares around 6 to 8 GB. But looking at the total amount of RAM, I need to adjust these settings, and I would like to know how much RAM should be left for PVE itself before the OOM killer kicks in.
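(The sharing figure can be read from sysfs; a rough conversion to GiB, assuming 4 KiB pages:)
Code:
awk '{ printf "KSM sharing: %.1f GiB\n", $1 * 4096 / 2^30 }' /sys/kernel/mm/ksm/pages_sharing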
Or maybe I should set the following via sysctl (?):
Code:
vm.overcommit_memory = 0
vm.overcommit_ratio = 80
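(I assume applying and persisting it would look something like this - just a sketch, and the file name is only an example:)
Code:
# apply immediately
sysctl -w vm.overcommit_memory=0
sysctl -w vm.overcommit_ratio=80
# persist across reboots (file name is arbitrary)
cat > /etc/sysctl.d/90-overcommit.conf <<'EOF'
vm.overcommit_memory = 0
vm.overcommit_ratio = 80
EOF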

Current memory usage:
Code:
cat /proc/meminfo
MemTotal:       32616592 kB
MemFree:          637952 kB
MemAvailable:    1214360 kB
Buffers:           11296 kB
Cached:          1028992 kB
SwapCached:            0 kB
Active:         18116208 kB
Inactive:        3911916 kB
Active(anon):   17991804 kB
Inactive(anon):  3047360 kB
Active(file):     124404 kB
Inactive(file):   864556 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Zswap:                 0 kB
Zswapped:              0 kB
Dirty:                92 kB
Writeback:             0 kB
AnonPages:      20988140 kB
Mapped:           109472 kB
Shmem:             51276 kB
KReclaimable:      57796 kB
Slab:            1581628 kB
SReclaimable:      57796 kB
SUnreclaim:      1523832 kB
KernelStack:       11232 kB
PageTables:        70472 kB
SecPageTables:     46676 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    16308296 kB
Committed_AS:   34020768 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      462348 kB
VmallocChunk:          0 kB
Percpu:            16064 kB
HardwareCorrupted:     0 kB
AnonHugePages:    808960 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
FileHugePages:         0 kB
FilePmdMapped:         0 kB
Unaccepted:            0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB
DirectMap4k:      371688 kB
DirectMap2M:    13021184 kB
DirectMap1G:    19922944 kB

Any recommendations?
 
I would tend to solve the problem sustainably rather than just fight the symptoms. In short, "a lot helps a lot": add more RAM and the issue is gone.

I would always allow 4 GB of RAM for the OS; 2 GB would certainly do the trick as well, but I run something like this commercially, so I don't want to risk a failure of the production environment over 2 GB of RAM.

Ultimately, 1 GB of RAM per TB of storage is just a guideline; how closely you stick to it is up to you. For commercial use, I would definitely take it as a given and build on it.

The other alternative would be to move services to LXC containers or take some RAM away from the VMs. Another optimization might be ballooning.
But be careful with KSM: if any strange error occurs, it can cause KSM to stop working and send the node straight into OOM.
 
All VMs run with ballooning enabled. PVE runs on an OVH bare-metal server and I can't change the specs, only migrate to a higher tier, I guess.
So for now I would like to try some optimization. Last night (for the first time) the 14 GB VM was "killed", and maybe there was something wrong with KSM - I can't find any logs. Where should I look for them?
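(I guess the OOM killer logs to the kernel ring buffer, so something like this should turn up the event:)
Code:
journalctl -k | grep -iE 'out of memory|oom-killer|killed process'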
I don't think I would gain much by turning one of the VMs into an LXC container - and anyway, two of the three VMs are used as Docker hosts.
As CommitLimit is 16 GB, it would be nice to know :) how the system heuristically (vm.overcommit_memory = 0) calculates overcommitment. Will I make it less stable by setting vm.overcommit_ratio = 80 (default 50)?
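(For reference: as far as I can tell from proc(5), CommitLimit = SwapTotal + MemTotal * overcommit_ratio / 100, and it is only actually enforced with vm.overcommit_memory = 2 - in mode 0 the heuristic just rejects obviously excessive single allocations. The 16 GB above checks out:)
Code:
# 0 kB swap + 32616592 kB * 50 / 100 = 16308296 kB, the CommitLimit from /proc/meminfo
awk '/^(MemTotal|SwapTotal)/ { v[$1] = $2 } END { print v["MemTotal:"] * 50 / 100 + v["SwapTotal:"], "kB" }' /proc/meminfo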
 
All VMs run with ballooning enabled.
So all VMs are started with their maximum amount of RAM, which is only reduced if necessary and if allowed by the guest. Have you checked the RAM usage inside the guests?
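For example (VM ID 100 purely for illustration), something like:
Code:
qm set 100 --memory 8192 --balloon 4096
gives the VM 8 GiB at start while allowing the balloon driver to reclaim memory down to 4 GiB when the host comes under pressure.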

Have you limited the ZFS ARC? If not, it can grow up to half of your 32 GB, i.e. 16 GB, and may result in OOM kills of VMs if the ARC memory is not released quickly enough (which is often the case).
 
Have you limited the ZFS ARC?
No, but that seems like a good idea. Currently it is (8.5 GB):
Code:
cat /proc/spl/kstat/zfs/arcstats

c                               4    9053419318
c_min                           4    1043730944
c_max                           4    16699695104
size                            4    9005726208
where:
  • c is the target size of the ARC in bytes
  • c_max is the maximum size of the ARC in bytes
  • size is the current size of the ARC in bytes
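(Those values are buried in the full arcstats output; a filter like this pulls out just these fields, converted to GiB:)
Code:
awk '$1 ~ /^(c|c_min|c_max|size)$/ { printf "%-6s %6.2f GiB\n", $1, $3 / 2^30 }' /proc/spl/kstat/zfs/arcstats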
Is the following the correct way of changing it (?):
- to place
Code:
# Set Max ARC size => 2GB == 2147483648 Bytes
options zfs zfs_arc_max=2147483648
# Set Min ARC size => 1GB == 1073741824 Bytes
options zfs zfs_arc_min=1073741824
in
/etc/modprobe.d/zfs.conf
I found it at https://www.cyberciti.biz/faq/how-to-set-up-zfs-arc-size-on-ubuntu-debian-linux/
 
Thanks! I had missed that somehow.
So I've ended up with limits of 2 GB (min, to be sure it stays there) and 3 GB (max).
Before:
Code:
awk '/^size/ { print $1 " " $3 / 1048576 }' < /proc/spl/kstat/zfs/arcstats
size 8593.04
For current session:
Code:
echo "$[3 * 1024*1024*1024]" >/sys/module/zfs/parameters/zfs_arc_max
echo "$[2 * 1024*1024*1024]" >/sys/module/zfs/parameters/zfs_arc_min
Now:
Code:
awk '/^size/ { print $1 " " $3 / 1048576 }' < /proc/spl/kstat/zfs/arcstats
size 3056.41
And to stay persistent I placed in /etc/modprobe.d/zfs.conf:
Code:
options zfs zfs_arc_max=3221225472
options zfs zfs_arc_min=2147483648
followed by: update-initramfs -u -k all
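After a reboot, the applied limits can be double-checked from sysfs:
Code:
cat /sys/module/zfs/parameters/zfs_arc_min /sys/module/zfs/parameters/zfs_arc_max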

And now I need to keep an eye on I/O performance/delay. But I hope it won't be affected much.
Thanks!
 
