Proxmox ZFS ARC cache max size (auto)configuration script

g777
Jun 13, 2024
"ZFS ate half of my RAM." - :eek:

That's pretty much how it started.
Diving deeper into this subject, it turned out to be a very common problem. Here are a few topics on this forum that I read to understand what's going on with RAM:
  • "Disable ZFS ARC or limiting it"
  • "zfs_arc_max does not seem to work"
  • "zfs_arc_max realtime"
  • "ZFS RAM usage more than 90% of the system"
  • "Recommended RAM for ZFS"
  • and several others
The solution is very simple: create a ZFS module configuration file and set the desired amount of memory according to the rule 4G + (1G per TB of total pool storage).
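For reference, this is roughly what the manual, one-time setup looks like (a minimal sketch; the 8 GiB value assumes a hypothetical host with 4 TB of pools, i.e. 4G + 4 * 1G, expressed in bytes):

  # /etc/modprobe.d/zfs.conf
  # 4G + (1G * 4 TB of pools) = 8 GiB = 8589934592 bytes
  options zfs zfs_arc_max=8589934592

After editing the file, run update-initramfs -u -k all and reboot so the limit applies at boot.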
All this can be pretty confusing, and it is pretty much a one-time operation that requires a ton of reading to set up properly.
To simplify matters, I wrote a script that does everything itself and lets you either set the size manually or apply the recommended size automatically.

Here's the repo with the script.
https://github.com/geoai777/zfs_arc

I did some initial testing and it should work fine, but if there are any bugs, feel free to let me know.
 
Adding a feature that would allow removal of an existing zfs_arc_max line is up for discussion.
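If it's just about dropping the line from the modprobe config, something like this would probably do (a sketch, not part of the script; it assumes the line lives in /etc/modprobe.d/zfs.conf):

  # delete any existing zfs_arc_max option line
  sed -i '/zfs_arc_max/d' /etc/modprobe.d/zfs.conf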

Also note that the evaluated cache size is calculated strictly by the formula, based on the total size of all ZFS storage pools, while the recommended cache size also takes into account the minimum recommended 8 GB of RAM.
Any cache size can be set manually.
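In shell terms, the formula-based evaluation boils down to something like this (a sketch of the calculation only, not the actual script code):

  # sum raw pool sizes in bytes (-H: no headers, -p: exact parsable values)
  total_bytes=$(zpool list -Hp -o size | awk '{s+=$1} END {print s}')
  total_tb=$(( total_bytes / 1024**4 ))
  # 4G + 1G per TB, converted to bytes for zfs_arc_max
  arc_max=$(( (4 + total_tb) * 1024**3 ))
  echo "options zfs zfs_arc_max=${arc_max}"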
 
FYI, since v8.1 (Nov. 2023) new installs limit the ZFS ARC size to 10% of the host RAM.
 
Thank you for the information. Is there any performance research or whitepaper to read on that subject?

My 8.2.4 install was consuming 50% of the host RAM, so I decided to tune it a bit.
 
I wonder if there's a real benchmark behind the max ARC size recommendations; the formulas differ quite a bit (see the quick comparison after this list).
  • The link above recommends: 2G + (1G per TB of storage);
  • The rule of thumb I found earlier recommends: 4G + (1G per TB of storage);
  • Oracle recommends (for Solaris, though): 75% of memory on systems with less than 4 GB of memory; physmem (free memory after system and firmware are loaded) minus 1 GB on systems with more than 4 GB of memory.
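To make the difference concrete, here is what the three rules give for a hypothetical host with 32 GB RAM and 8 TB of pools (my own numbers, just plugging into the formulas):
  • 2G + 1G/TB: 2 + 8 = 10 GB
  • 4G + 1G/TB: 4 + 8 = 12 GB
  • Oracle (more than 4 GB RAM): ~32 - 1 = ~31 GB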
I'm starting to think about benchmarking the different settings, but I lack the hardware as of yet.
 
After several days of testing, the 4G + (1G per TB of storage) formula seems to perform better, and even more so on weaker hardware.
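If anyone wants to repeat this kind of comparison, zfs_arc_max can also be changed at runtime without a reboot (a sketch; the 12 GiB value is just an example):

  # apply 12 GiB immediately (lost on reboot unless also set in modprobe.d)
  echo $(( 12 * 1024**3 )) > /sys/module/zfs/parameters/zfs_arc_max
  # watch ARC behaviour while testing
  arcstat 5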
 
