PVE cluster, ZFS RAM usage differences

Hi,

I have a 3-node cluster with identical nodes, but ZFS is using very different amounts of RAM on each node:

Each node: 128 GB RAM, 2x 500 GB SSDs for the PVE OS.

Node1 free -mh:

Code:
               total        used        free      shared  buff/cache   available
Mem:           125Gi        57Gi        61Gi        67Mi       6.5Gi        67Gi
Swap:             0B          0B          0B

Node2 free -mh:

Code:
               total        used        free      shared  buff/cache   available
Mem:           125Gi        24Gi        95Gi        70Mi       5.9Gi        99Gi

Node1 arc_summary:

Code:
ARC status:                                                      HEALTHY
        Memory throttle count:                                         0

ARC size (current):                                    53.5 %   33.6 GiB
        Target size (adaptive):                        62.9 %   39.6 GiB
        Min size (hard limit):                          6.2 %    3.9 GiB
        Max size (high water):                           16:1   62.9 GiB
        Most Frequently Used (MFU) cache size:          1.1 %  365.3 MiB
        Most Recently Used (MRU) cache size:           98.9 %   33.2 GiB
        Metadata cache size (hard limit):              75.0 %   47.2 GiB
        Metadata cache size (current):                  0.5 %  226.3 MiB
        Dnode cache size (hard limit):                 10.0 %    4.7 GiB
        Dnode cache size (current):                     0.6 %   28.6 MiB

Node2 arc_summary:

Code:
ARC status:                                                      HEALTHY
        Memory throttle count:                                         0

ARC size (current):                                     8.9 %    5.6 GiB
        Target size (adaptive):                         9.2 %    5.8 GiB
        Min size (hard limit):                          6.2 %    3.9 GiB
        Max size (high water):                           16:1   62.9 GiB
        Most Frequently Used (MFU) cache size:          3.1 %  178.3 MiB
        Most Recently Used (MRU) cache size:           96.9 %    5.4 GiB
        Metadata cache size (hard limit):              75.0 %   47.2 GiB
        Metadata cache size (current):                  0.3 %  124.1 MiB
        Dnode cache size (hard limit):                 10.0 %    4.7 GiB
        Dnode cache size (current):                     0.4 %   19.9 MiB

The only difference is that Node1 has more VM backups and ISOs stored on local storage.

Is there any way to check and understand this behaviour?
 
Do you replicate the ZFS pools between the nodes? If you, for example, replicate node1 to node2, node1 could be caching more in the ARC because the replication makes it read more data.
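
If you're not sure, you can check which replication jobs are configured and how they are doing on each node (a quick look, assuming you use the built-in storage replication):

Code:
# list all replication jobs known to the cluster
pvesr list

# show replication status for guests on the local node
pvesr status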
 
I don't see the problem. Free RAM is wasted RAM, so it's a good thing when the ARC uses 34 GB for caching, as long as other processes don't need that memory. You only need to limit the ARC size if you see OOM killer messages because the ARC couldn't be shrunk fast enough.
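
If you ever do need to cap it, the usual way on PVE is the zfs_arc_max module parameter (the 8 GiB value below is just an example, size it for your workload):

Code:
# cap the ARC at 8 GiB permanently (example value: 8 * 2^30 bytes)
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u

# or change it on the fly without a reboot
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max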
 
Hi,

Just listing the backups and ISOs you have on node1 is enough to make the ARC grow: big files mean a lot of metadata to cache. Your output already shows higher current metadata cache usage on the 1st node.
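
You can compare the data vs. metadata share of the ARC on both nodes directly from the kernel counters (the exact field names can vary a bit between OpenZFS versions):

Code:
# current ARC data and metadata footprint, in bytes
grep -E '^(data|metadata)_size' /proc/spl/kstat/zfs/arcstats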

Good luck / Bafta!
 
I just can't understand the different behaviour on the two nodes.
If you really, really want to understand it, attach a kernel probe and watch what each node is doing. That is the truly scientific method, and it will take you down a rabbit hole hundreds of levels deep.

Otherwise: those two systems don't do the same work, so they don't cache the same things. I don't think replication has much influence on the caching; if a system were doing heavy replication, it would rather poison the cache.
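
A lighter start than a full kernel probe is to watch the ARC live on both nodes. arcstat ships with the ZFS tools; the bpftrace one-liner counts which processes trigger ARC reads, and it assumes the zfs module's arc_read symbol is probe-able on your kernel:

Code:
# ARC size, hits and misses, refreshed every second
arcstat 1

# count ARC reads per process for 10 seconds
# (assumes arc_read shows up in /proc/kallsyms)
bpftrace -e 'kprobe:arc_read { @reads[comm] = count(); } interval:s:10 { exit(); }'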
 
