Hello,
We upgraded our PVE cluster nodes from 5.4 to 6.2 a few weeks ago, and we also upgraded our ZFS pool to version 0.8.4. We also added an iSCSI storage (FreeNAS), which now hosts most of the storage.
I'm now seeing that the ZFS cache is eating a bit too much RAM (this is probably just paranoia).
On a server with 256 GB of ECC RAM, it's using around 100 GB, which pushes total RAM usage above 90% (including our VMs' usage, of course).
Curiously, I never saw this behavior on 5.4, at least not to this extent. I just wanted to know: should I be worried that my system is using more than 90% of the RAM available in the server? I know ZFS uses as much as possible, but...
Here is the arc_summary of one host:
Code:
------------------------------------------------------------------------
ZFS Subsystem Report Tue Aug 11 07:37:55 2020
Linux 5.4.44-1-pve 0.8.4-pve1
Machine: athos (x86_64) 0.8.4-pve1
ARC status: HEALTHY
Memory throttle count: 0
ARC size (current): 84.4 % 106.2 GiB
Target size (adaptive): 84.4 % 106.2 GiB
Min size (hard limit): 6.2 % 7.9 GiB
Max size (high water): 16:1 125.8 GiB
Most Frequently Used (MFU) cache size: 57.1 % 55.9 GiB
Most Recently Used (MRU) cache size: 42.9 % 41.9 GiB
Metadata cache size (hard limit): 75.0 % 94.3 GiB
Metadata cache size (current): 10.8 % 10.2 GiB
Dnode cache size (hard limit): 10.0 % 9.4 GiB
Dnode cache size (current): 0.5 % 50.8 MiB
ARC hash breakdown:
Elements max: 31.7M
Elements current: 83.8 % 26.6M
Collisions: 1.2G
Chain max: 11
Chains: 6.3M
ARC misc:
Deleted: 2.9G
Mutex misses: 768.1k
Eviction skips: 954.7M
ARC total accesses (hits + misses): 13.0G
Cache hit ratio: 80.7 % 10.5G
Cache miss ratio: 19.3 % 2.5G
Actual hit ratio (MFU + MRU hits): 79.9 % 10.4G
Data demand efficiency: 86.4 % 3.9G
Data prefetch efficiency: 7.5 % 2.1G
Cache hits by cache type:
Most frequently used (MFU): 73.9 % 7.7G
Most recently used (MRU): 25.2 % 2.6G
Most frequently used (MFU) ghost: 0.4 % 45.1M
Most recently used (MRU) ghost: 1.1 % 119.8M
Cache hits by data type:
Demand data: 31.8 % 3.3G
Demand prefetch data: 1.5 % 160.3M
Demand metadata: 66.7 % 7.0G
Demand prefetch metadata: < 0.1 % 3.4M
Cache misses by data type:
Demand data: 21.0 % 525.5M
Demand prefetch data: 78.6 % 2.0G
Demand metadata: 0.2 % 5.0M
Demand prefetch metadata: 0.2 % 3.8M
DMU prefetch efficiency: 1.8G
Hit ratio: 20.3 % 364.2M
Miss ratio: 79.7 % 1.4G
L2ARC not detected, skipping section
...
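For what it's worth, if capping the cache turns out to be the right call, my understanding is that the ARC ceiling can be lowered with the `zfs_arc_max` module parameter (in bytes). A sketch with an example 64 GiB limit — the value is just an illustration, pick whatever fits your VM memory budget:

```shell
# Example ARC cap of 64 GiB (zfs_arc_max is in bytes)
ARC_MAX=$((64 * 1024 * 1024 * 1024))

# Persist across reboots via the zfs kernel module options
echo "options zfs zfs_arc_max=${ARC_MAX}" > /etc/modprobe.d/zfs.conf
update-initramfs -u   # on PVE, so the initramfs picks up the new value

# Apply immediately without rebooting (the ARC shrinks gradually)
echo "${ARC_MAX}" > /sys/module/zfs/parameters/zfs_arc_max
```

I'd rather understand whether the 90% usage is actually a problem before setting a limit, though.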
Thanks for your hints!