RAM full?!

openaspace

Hello, I use Proxmox with ZFS on a 32 GB RAM server at Hetzner.

I don't understand why, even though the virtual machines only run with 14 GB of RAM, the server reports over 80% usage of the 32 GB available.

Today I added a TurnKey WordPress instance with only 512 MB of RAM and started a load test for a website that will receive around 2000 visitors within an hour, to verify that it will cope... and the system was going out of memory.

Received From: px3->/var/log/syslog
Rule: 5108 fired (level 12) -> "System running out of memory. Availability of the system is in risk."
Portion of the log(s):

Sep 17 11:48:34 px3 kernel: [658351.964844] Memory cgroup out of memory: Killed process 12131 (mysqld) total-vm:2789416kB, anon-rss:73236kB, file-rss:9352kB, shmem-rss:0kB, UID:100104 pgtables:528kB oom_score_adj:0

px3-Proxmox-Virtual-Environment (1).png
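For reference, the "Memory cgroup out of memory" part of that message means the OOM killer fired inside the container's own 512 MB memory limit, not because the whole host was out of RAM. The configured limit of a container can be checked from the host, for example (the CT ID 107 below is only a placeholder):

Bash:
# CT ID 107 is only an example; use the ID of the WordPress container.
pct config 107 | grep -Ei '^(memory|swap):'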
 
AFAIK ZFS by default pulls up to 50% of system memory for the ARC - so I would look in that direction, imho.
You can limit the RAM usage with a kernel module parameter. I've used 8 GB on 64 GB RAM and 16 GB on 128 GB RAM. Not using dedup, though...
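For what it's worth, the cap can also be changed at runtime through the module parameter (value in bytes); the persistent /etc/modprobe.d setting is shown further down in this thread. An already-grown ARC may take a while to shrink back under a lower cap.

Bash:
# Cap the ARC at 4 GiB right away, without a reboot (value in bytes).
echo $((4 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max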
 
The more RAM you give it, the faster your server will be. I would suggest testing 6 or 8 GB. You can look at the ZFS ARC cache hit ratio: if your ARC is too small, the ratio will go down. I'm using 8 GB and got a hit ratio of around 99%.

Look here for how to limit the ARC size.
 
If a system is swapping, it means the memory sizing is inappropriate or something has gone totally wrong. A hypervisor should never ever be experiencing this...

The size of the ARC really depends on what you do with it and also on the cache-to-storage ratio: the more storage, the more cache. That's my rule of thumb.

Applying my own calculation, I would start with 4 GB of ARC on 32 GB of RAM. Try it out and monitor things so you can adjust.
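As a purely illustrative sketch of that starting point (not an official formula): take the host's total RAM and start the ARC at roughly one eighth of it, i.e. about 4 GB on a 32 GB box, then tune from there.

Bash:
# Illustrative only: suggest an initial zfs_arc_max of ~1/8 of total RAM.
total_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
arc_bytes=$(( total_kib * 1024 / 8 ))
echo "suggested initial zfs_arc_max: ${arc_bytes} bytes (~$(( arc_bytes / 1073741824 )) GiB)"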
 
I do have a bit of a problem with that statement ;)

Sorry, I was not precise enough: continuous swapping in and out is bad, not swapping or having swap in general. A full swap often indicates a lot of swap in/out - at least in the scenarios I have encountered.

If a system is swapping, it means the memory sizing is inappropriate or something has gone totally wrong. A hypervisor should never ever be experiencing this...

The problem is human error: if you size a container inappropriately, you will have swapping, and there is nothing the hypervisor can do about it besides a human correcting the memory allocation.
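On Proxmox that correction is a one-liner from the host; for example, to give an undersized container more memory and swap (the CT ID and sizes below are only placeholders):

Bash:
# CT 107 and the values are placeholders; pick sizes that fit the workload.
pct set 107 --memory 1024 --swap 512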
 
So...
Right now I'm using exactly 13536 MB of RAM for the VPSs on a host with 32 GB of RAM... and the host logs tell me it is out of memory...
This is not normal...
px3-Proxmox-Virtual-Environment (2).png
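To see where the rest of the host's RAM actually goes, it helps to put the guests' usage next to the ARC size, for example (arc_summary ships with the ZFS userland tools; its output format varies by version):

Bash:
free -h                    # overall host memory, including buff/cache
arc_summary | head -n 30   # current ARC size, target and min/max
# or read the raw counters directly:
grep -E '^(size|c_min|c_max) ' /proc/spl/kstat/zfs/arcstats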
 
I'm using 8 GB and got a hit ratio of around 99%.
Bash:
zfs get all | grep compressratio
rpool                                     compressratio         1.03x                          -
rpool                                     refcompressratio      1.00x                          -
rpool/ROOT                                compressratio         1.00x                          -
rpool/ROOT                                refcompressratio      1.00x                          -
rpool/ROOT/pve-1                          compressratio         1.00x                          -
rpool/ROOT/pve-1                          refcompressratio      1.00x                          -
rpool/data                                compressratio         1.04x                          -
rpool/data                                refcompressratio      1.00x                          -
rpool/data/base-106-disk-0                compressratio         1.27x                          -
rpool/data/base-106-disk-0                refcompressratio      1.27x                          -
rpool/data/base-106-disk-0@__base__       compressratio         1.27x                          -
rpool/data/base-106-disk-0@__base__       refcompressratio      1.27x                          -
rpool/data/subvol-101-disk-0              compressratio         1.09x                          -
rpool/data/subvol-101-disk-0              refcompressratio      1.09x                          -
rpool/data/subvol-104-disk-0              compressratio         1.88x                          -
rpool/data/subvol-104-disk-0              refcompressratio      1.88x                          -
rpool/data/subvol-107-disk-0              compressratio         1.00x                          -
rpool/data/subvol-107-disk-0              refcompressratio      1.00x                          -
rpool/data/subvol-111-disk-0              compressratio         1.84x                          -
rpool/data/subvol-111-disk-0              refcompressratio      1.84x                          -
rpool/data/subvol-112-disk-0              compressratio         1.70x                          -
rpool/data/subvol-112-disk-0              refcompressratio      1.70x                          -
rpool/data/subvol-113-disk-0              compressratio         1.97x                          -
rpool/data/subvol-113-disk-0              refcompressratio      1.97x                          -
rpool/data/subvol-113-disk-0@temi_test    compressratio         1.90x                          -
rpool/data/subvol-113-disk-0@temi_test    refcompressratio      1.90x                          -
rpool/data/subvol-114-disk-0              compressratio         1.86x                          -
rpool/data/subvol-114-disk-0              refcompressratio      1.84x                          -
rpool/data/subvol-114-disk-0@ok_no_geo    compressratio         1.85x                          -
rpool/data/subvol-114-disk-0@ok_no_geo    refcompressratio      1.85x                          -
rpool/data/vm-100-disk-0                  compressratio         1.35x                          -
rpool/data/vm-100-disk-0                  refcompressratio      1.35x                          -
rpool/data/vm-100-disk-0@https_ok         compressratio         1.36x                          -
rpool/data/vm-100-disk-0@https_ok         refcompressratio      1.36x                          -
rpool/data/vm-100-state-https_ok          compressratio         1.38x                          -
rpool/data/vm-100-state-https_ok          refcompressratio      1.38x                          -
rpool/data/vm-102-disk-0                  compressratio         1.00x                          -
rpool/data/vm-102-disk-0                  refcompressratio      1.00x                          -
rpool/data/vm-103-disk-0                  compressratio         1.57x                          -
rpool/data/vm-103-disk-0                  refcompressratio      1.60x                          -
rpool/data/vm-103-disk-0@pre_upgrade_2    compressratio         1.63x                          -
rpool/data/vm-103-disk-0@pre_upgrade_2    refcompressratio      1.63x                          -
rpool/data/vm-103-disk-0@crush_update_ok  compressratio         1.61x                          -
rpool/data/vm-103-disk-0@crush_update_ok  refcompressratio      1.61x                          -
rpool/data/vm-103-disk-2                  compressratio         1.00x                          -
rpool/data/vm-103-disk-2                  refcompressratio      1.00x                          -
rpool/data/vm-103-disk-2@crush_update_ok  compressratio         1.00x                          -
rpool/data/vm-103-disk-2@crush_update_ok  refcompressratio      1.00x                          -
rpool/data/vm-103-state-crush_update_ok   compressratio         1.11x                          -
rpool/data/vm-103-state-crush_update_ok   refcompressratio      1.11x                          -
rpool/data/vm-105-disk-0                  compressratio         1.08x                          -
rpool/data/vm-105-disk-0                  refcompressratio      1.08x                          -
rpool/data/vm-108-disk-0                  compressratio         1.45x                          -
rpool/data/vm-108-disk-0                  refcompressratio      1.39x                          -
rpool/data/vm-108-disk-0@jitsi_ok         compressratio         1.40x                          -
rpool/data/vm-108-disk-0@jitsi_ok         refcompressratio      1.40x                          -
rpool/data/vm-108-state-jitsi_ok          compressratio         1.43x                          -
rpool/data/vm-108-state-jitsi_ok          refcompressratio      1.43x                          -
rpool/data/vm-109-disk-0                  compressratio         1.62x                          -
rpool/data/vm-109-disk-0                  refcompressratio      1.59x                          -
rpool/data/vm-109-disk-0@pre_upgrade      compressratio         1.76x                          -
rpool/data/vm-109-disk-0@pre_upgrade      refcompressratio      1.76x                          -
rpool/data/vm-109-disk-0@working_ok       compressratio         1.58x                          -
rpool/data/vm-109-disk-0@working_ok       refcompressratio      1.58x                          -
rpool/data/vm-109-state-pre_upgrade       compressratio         1.76x                          -
rpool/data/vm-109-state-pre_upgrade       refcompressratio      1.76x                          -
rpool/data/vm-109-state-working_ok        compressratio         1.70x                          -
rpool/data/vm-109-state-working_ok        refcompressratio      1.70x                          -
rpool/data/vm-110-disk-0                  compressratio         1.08x                          -
rpool/data/vm-110-disk-0                  refcompressratio      1.08x                          -
rpool/data/vm-200-disk-0                  compressratio         1.16x                          -
rpool/data/vm-200-disk-0                  refcompressratio      1.15x                          -
rpool/data/vm-200-disk-0@ok               compressratio         1.16x                          -
rpool/data/vm-200-disk-0@ok               refcompressratio      1.16x                          -
rpool/data/vm-200-state-ok                compressratio         1.33x                          -
rpool/data/vm-200-state-ok                refcompressratio      1.33x
 
On my system /etc/modprobe.d/zfs.conf doesn't exist... is that normal?!

Look here for how to limit the ARC size.

Limit ZFS Memory Usage
It is good to use at most 50 percent (which is the default) of the system memory for ZFS ARC to prevent performance shortage of the host. Use your preferred editor to change the configuration in /etc/modprobe.d/zfs.conf and insert:
options zfs zfs_arc_max=8589934592
This example setting limits the usage to 8GB.
If your root file system is ZFS you must update your initramfs every time this value changes:
# update-initramfs -u
 
If it's not there, create it and use the settings as described.
It's been a while since I initially set my system up, so I'm not sure how it looked for me.
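A minimal way to do that, assuming the same 8 GiB limit as in the quoted wiki text:

Bash:
# Create the file with an 8 GiB ARC cap (8589934592 bytes); adjust to taste.
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
# With root on ZFS, rebuild the initramfs so the cap is applied at boot:
update-initramfs -u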
 
I think I needed to create it first.

Dunuin said:
I'm using 8 GB and got a hit ratio of around 99%.
How do I verify the ratio?
Try "cat /proc/spl/kstat/zfs/arcstats"

This way you get the ARC "hits" and "misses" and you can calculate the hit ratio from them.
You also get the minimum ARC size (c_min), the maximum ARC size (c_max) and the actual size of your ARC (size).
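For example, a quick way to turn those cumulative counters into a percentage (the field names have been stable for a long time, but double-check on your ZFS version):

Bash:
# Overall ARC hit ratio since boot, from the cumulative hits/misses counters.
awk '$1 == "hits" || $1 == "misses" {v[$1]=$3}
     END {printf "ARC hit ratio: %.2f%%\n", 100*v["hits"]/(v["hits"]+v["misses"])}' \
    /proc/spl/kstat/zfs/arcstats
# Current, minimum and maximum ARC size in bytes:
grep -E '^(size|c_min|c_max) ' /proc/spl/kstat/zfs/arcstats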
 
Do not use ZFS on the Proxmox hypervisor... Use a thin LUN and give the block device to your guest without any cache; after that, if you want compression or dedup, use it inside your guest... If you need RAID on the hypervisor, then build that RAID with LVM...
 
Do not use ZFS on the Proxmox hypervisor...
Sorry, but what a bold statement.
ZFS has its advantages and its disadvantages, like anything else.
I wonder how you would implement checksum protection on LVM, as an example?
I happily give up my RAM for the assurance that my data is safely stored and protected against bitrot etc.
 
Sorry, but what a bold statement.
ZFS has its advantages and its disadvantages, like anything else.
I wonder how you would implement checksum protection on LVM, as an example?
I happily give up my RAM for the assurance that my data is safely stored and protected against bitrot etc.

I don't want to get into an argument; that is your choice and that is all...

But to answer your question... LVM just provides a block device, so whatever you want you can set up inside your guest operating system; that way it never affects the quality of your overall services...

With ZFS or any other cache/buffer-based disk system you cannot reliably calculate your system's service level. So if anyone has concerns about their data, the basic approach is: use ZFS inside the guest operating system, give the guest a hypervisor disk without any buffer/cache layer, and let the guest's own RAM do the caching/buffering...
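For completeness: on Proxmox the per-disk cache mode can be set from the host, so a no-host-cache setup would look something like this (the VMID and volume name are placeholders; check the real ones with qm config first):

Bash:
# VMID 102 and the volume name are placeholders; see "qm config 102".
qm set 102 --scsi0 local-lvm:vm-102-disk-0,cache=none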
 
The more RAM you give it, the faster your server will be. I would suggest testing 6 or 8 GB. You can look at the ZFS ARC cache hit ratio: if your ARC is too small, the ratio will go down. I'm using 8 GB and got a hit ratio of around 99%.

Look here for how to limit the ARC size.
But when the RAM for ZFS is limited, isn't the RAM performance of the LXC containers affected?
 
I don't see how limiting the ARC size should influence RAM performance at all. The ARC uses RAM to improve HDD/SSD speed and latency. The smaller your ARC is, the less RAM it uses, so other things like LXCs that use the RAM get more capacity/bandwidth, because the RAM is less contended.
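If the arcstat helper from the ZFS userland tools is installed, the interaction between ARC size and hit ratio is easy to watch live (column names differ a bit between versions):

Bash:
# Print ARC statistics every 5 seconds: reads, misses, miss%, ARC size, target.
arcstat 5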
 
