[PVE 6.0] Unable to set zfs_arc_max value below 4 GB

pve-Joseph

Member
Oct 21, 2019
Hi everyone,

My current Proxmox version is the same across 2 nodes; both nodes have 128 GB of DDR4 RAM:

Code:
root@pveant01:~# pveversion
pve-manager/6.0-9/508dcee0 (running kernel: 5.0.21-3-pve)

I have two installations of Proxmox that I've been using for a while and that originally started on kernel 5.0.15-1-pve. The current configuration is as follows:

1. Boot drive (250GB) with XFS filesystem
2. Two ZFS pools as shown below...

Code:
root@pveant01:~# zpool list
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zndisk01  1.86T  66.0G  1.79T        -         -     4%     3%  1.00x  ONLINE  -
zsdisk01  3.72T   198G  3.53T        -         -     7%     5%  1.00x  ONLINE  -

I had originally set limits for the ZFS ARC as shown below, and this was working prior to upgrading to pve 6.0-9, which includes kernel 5.0.21-3-pve.

Code:
root@pveant01:~# cat /etc/modprobe.d/zfs.conf
# Minimum ZFS ARC : 1 GB
options zfs zfs_arc_min=1073741824
# Maximum ZFS ARC : 2 GB
options zfs zfs_arc_max=2147483648
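
(For reference, a quick way to confirm that modprobe is actually picking this file up is to dump its effective configuration; a minimal sketch, assuming the standard kmod tooling:)

Code:
# Show the effective modprobe configuration, filtered to the ARC options
modprobe -c | grep zfs_arc
# Expected output, matching the file above:
# options zfs zfs_arc_min=1073741824
# options zfs zfs_arc_max=2147483648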

Now, whenever my ZFS limits are configured to anything less than 2 GB (min) and 4 GB (max), the ARC ignores my module configuration parameters. Below is the arcstat readout after rebooting with the 1 GB / 2 GB limits above, despite the kernel module parameters showing the correct values:

Code:
root@pveant01:~# arcstat
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz      c
08:59:50     0     0      0     0    0     0    0     0    0    44M    62G

Code:
root@pveant01:~# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min                           4    1073741824
c_max                           4    67548008448
arc_no_grow                     4    0
arc_tempreserve                 4    0
arc_loaned_bytes                4    0
arc_prune                       4    0
arc_meta_used                   4    28487504
arc_meta_limit                  4    50661006336
arc_dnode_limit                 4    5066100633
arc_meta_max                    4    68519832
arc_meta_min                    4    16777216
async_upgrade_sync              4    261
arc_need_free                   4    0
arc_sys_free                    4    2110875264
arc_raw_size                    4    0

Code:
root@pveant01:~# arc_summary

------------------------------------------------------------------------
ZFS Subsystem Report                            Mon Oct 21 09:13:41 2019
Linux 5.0.21-3-pve                                            0.8.2-pve1
Machine: pveant01 (x86_64)                                    0.8.2-pve1

ARC status:                                                      HEALTHY
        Memory throttle count:                                         0

ARC size (current):                                     0.1 %   44.7 MiB
        Target size (adaptive):                       100.0 %   62.9 GiB
        Min size (hard limit):                          1.6 %    1.0 GiB
        Max size (high water):                           62:1   62.9 GiB
        Most Frequently Used (MFU) cache size:          3.6 %    1.5 MiB
        Most Recently Used (MRU) cache size:           96.4 %   40.0 MiB
        Metadata cache size (hard limit):              75.0 %   47.2 GiB
        Metadata cache size (current):                  0.1 %   27.2 MiB
        Dnode cache size (hard limit):                 10.0 %    4.7 GiB
        Dnode cache size (current):                   < 0.1 %    1.1 MiB


# Kernel module parameters properly reflecting the settings in zfs.conf:

Code:
root@pveant01:~# cat /sys/module/zfs/parameters/zfs_arc_max
2147483648
root@pveant01:~# cat /sys/module/zfs/parameters/zfs_arc_min
1073741824
------------------------------------------------------------------------

Below are the readouts after changing the values in my zfs.conf to the following:

Code:
root@pveant01:~# cat /etc/modprobe.d/zfs.conf
# Minimum ZFS ARC : 2 GB
options zfs zfs_arc_min=2147483648
# Maximum ZFS ARC : 4 GB
options zfs zfs_arc_max=4294967296


Code:
root@pveant01:~# arcstat
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz      c
09:21:04     0     0      0     0    0     0    0     0    0    44M   4.0G

Code:
root@pveant01:~# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min                           4    2147483648
c_max                           4    4294967296
arc_no_grow                     4    0
arc_tempreserve                 4    0
arc_loaned_bytes                4    0
arc_prune                       4    0
arc_meta_used                   4    28419016
arc_meta_limit                  4    3221225472
arc_dnode_limit                 4    322122547
arc_meta_max                    4    67454320
arc_meta_min                    4    16777216
async_upgrade_sync              4    248
arc_need_free                   4    0
arc_sys_free                    4    2110875264
arc_raw_size                    4    0

Code:
root@pveant01:~# arc_summary

------------------------------------------------------------------------
ZFS Subsystem Report                            Mon Oct 21 09:23:05 2019
Linux 5.0.21-3-pve                                            0.8.2-pve1
Machine: pveant01 (x86_64)                                    0.8.2-pve1

ARC status:                                                      HEALTHY
        Memory throttle count:                                         0

ARC size (current):                                     1.1 %   44.1 MiB
        Target size (adaptive):                       100.0 %    4.0 GiB
        Min size (hard limit):                         50.0 %    2.0 GiB
        Max size (high water):                            2:1    4.0 GiB
        Most Frequently Used (MFU) cache size:          1.9 %  801.0 KiB
        Most Recently Used (MRU) cache size:           98.1 %   40.2 MiB
        Metadata cache size (hard limit):              75.0 %    3.0 GiB
        Metadata cache size (current):                  0.9 %   27.1 MiB
        Dnode cache size (hard limit):                 10.0 %  307.2 MiB
        Dnode cache size (current):                     0.3 %    1.1 MiB

# Kernel module parameters still properly reflecting the settings in zfs.conf:

root@pveant01:~# cat /sys/module/zfs/parameters/zfs_arc_max
4294967296
root@pveant01:~# cat /sys/module/zfs/parameters/zfs_arc_min
2147483648

So... is there something I'm missing as to why Proxmox's ZFS subsystem won't let me set a zfs_arc_max value below 4 GB?
 
Hi,

did you run update-initramfs to activate it?
After update-initramfs you have to reboot.
By the way, 1-2 GB is too small; this will result in slow storage.
The minimum is 4 GB.
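
(For completeness, the suggested steps would look roughly like this; a sketch, assuming a standard Proxmox/Debian setup:)

Code:
# Rebuild the initramfs for all installed kernels so the new
# module options are embedded in the boot image
update-initramfs -u -k all
# Reboot so the module is loaded with the new options
reboot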
 

If I'm not mistaken, according to the documentation at https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_limit_zfs_memory_usage,
"update-initramfs -u" is only needed IF your root filesystem is ZFS. In my case, as I mentioned, my root filesystem is on a separate 250GB drive formatted with XFS.

Is the documentation worded incorrectly? Should it always be run to activate the settings, even if your root filesystem is not ZFS?
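
(For anyone else checking which case applies to them, the root filesystem type is easy to verify; a sketch using standard util-linux tooling:)

Code:
# Print the filesystem type of the root mount; "zfs" means the
# initramfs loads ZFS early and must be rebuilt after config changes
findmnt -n -o FSTYPE /

In my case this prints "xfs", so per the docs the rebuild shouldn't be required.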

Also, I understand 1-2 GB is small and will likely incur performance problems, but I should be able to "shoot myself in the foot", as the saying goes, if I so choose.

Is 4 GB a hard limit imposed by Proxmox's implementation of/tweaks to ZFS on Linux (ZoL)? If so, it would be nice if that were documented somewhere.
From what I've outlined above, it very much seems like 4 GB is a hard limit, as I can't set it to anything less than that value.
 
I'm fairly certain I'm seeing something similar to this. I just did a fresh install from the latest Proxmox 6.0.1 ISO, and when configuring /etc/modprobe.d/zfs.conf with a 1 GB min and 2 GB max, then running pve-efiboot-tool refresh, I experienced the following:

Code:
# cat /etc/modprobe.d/zfs.conf
# Minimum ZFS ARC : 1 GB
options zfs zfs_arc_min=1073741824
# Maximum ZFS ARC : 2 GB
options zfs zfs_arc_max=2147483648

Before running any updates, with zfs 0.8.1:


Code:
# cat /proc/spl/kstat/zfs/arcstats
...
c                               4    2147483648
c_min                           4    1073741824
c_max                           4    2147483648
...


Code:
# arc_summary
...
zfs_arc_max                                           2147483648
...
zfs_arc_min                                           1073741824
...

However, after installing all updates, including the zfs update to 0.8.2, the same zfs.conf configuration produced the following:

Code:
# cat /proc/spl/kstat/zfs/arcstats
...
c                               4    101403459584
c_min                           4    1073741824
c_max                           4    101403459584
...

Code:
# arc_summary
...
zfs_arc_max                                           2147483648
...
zfs_arc_min                                           1073741824
...
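
(To make the mismatch obvious, the requested limit and the limit the ARC is actually enforcing can be compared side by side; a sketch:)

Code:
# What we asked for (the module parameter)
cat /sys/module/zfs/parameters/zfs_arc_max
# What the ARC is actually enforcing (the live kstat)
awk '/^c_max/ {print $3}' /proc/spl/kstat/zfs/arcstats

On 0.8.2 here, the first prints 2147483648 while the second prints 101403459584.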

It seems like something in zfs 0.8.2 completely broke zfs_arc_max, so it no longer controls c_max like it should (and used to). I'm not sure whether settings in /etc/modprobe.d/zfs.conf count as "at runtime" under the reported zfs GitHub issue (https://github.com/zfsonlinux/zfs/issues/9487). I suppose I might try configuring it on the kernel cmdline later (sketched below), but something definitely seems broken here.
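
The cmdline approach would look roughly like this; a sketch for a systemd-boot based install (implied by my use of pve-efiboot-tool above), assuming /etc/kernel/cmdline is the file pve-efiboot-tool reads and using the usual <module>.<parameter> syntax:

Code:
# Append the ARC limit to the kernel command line
echo "$(cat /etc/kernel/cmdline) zfs.zfs_arc_max=2147483648" > /etc/kernel/cmdline
# Re-sync the boot entries, then reboot
pve-efiboot-tool refresh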
 
As a follow-up, setting zfs_arc_max via the kernel cmdline didn't have any effect. Knowing that the zfs_arc_max functionality was working in a stock, non-updated Proxmox 6.0 install, I tried downgrading libzfs2linux, zfs-initramfs, and zfsutils-linux from 0.8.2 to 0.8.1. This also had no effect, and arc_summary still reported a zfs version of 0.8.2. I then downgraded the kernel from 5.0.21-4 to 5.0.15-1 (I believe this is the kernel included in the PVE 6.0 ISO). Downgrading the kernel resolved the issue, and c_max is now correctly set from the zfs_arc_max parameter whether it is set in a modprobe.d config file or via the kernel cmdline.

I'm guessing the zfs functionality (or at least the part relevant to this issue) is baked into the pve kernel, since downgrading only the userland packages changed nothing. I'm not entirely sure what the relationship between the kernel and the zfs packages is, though.
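
(In case anyone wants to reproduce this: each pve kernel version ships as its own package, so "downgrading" is really just installing the older kernel alongside the current one and booting into it; a sketch, assuming the package is still available in the repository:)

Code:
# Install the older kernel next to the current one
apt install pve-kernel-5.0.15-1-pve
# Reboot and select the 5.0.15-1-pve entry in the boot menu
reboot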
 
The problem is surely in the kernel module and not in the zfs tools.
You set the parameters on the module directly, not through the zfs tools.
 
Well, as I understand it, I should adjust it via /proc/spl/kstat/zfs/arcstats.
My system is pve-manager/6.1-7/13e58d5e (running kernel: 5.3.18-2-pve).

Code:
root@node1:~# nano /proc/spl/kstat/zfs/arcstats

# I edit the file, setting 8 GB for c_min & c_max:

c_min                           4    8589934592
c_max                           4    8589934592

# Then I try to save the file, but get this message:
# [ Error writing /proc/spl/kstat/zfs/arcstats: Permission denied ]

# That is probably expected, since this is a kernel-provided file that
# cannot be edited, but I tried anyway:

# Check permissions:

root@node1:~# ls -l /proc/spl/kstat/zfs/arcstats
-rw-r--r-- 1 root root 0 Apr  8 21:59 /proc/spl/kstat/zfs/arcstats

# Set new permissions:
root@node1:~# chmod 777 /proc/spl/kstat/zfs/arcstats

# That didn't help either; saving still fails with the same error.
Guys, any ideas? How can I adjust these ZFS properties without rolling back the kernel?
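
From what I can tell, /proc/spl/kstat/zfs/arcstats is a read-only statistics interface, so it can't be edited regardless of permissions. I assume the writable knobs are the module parameters under /sys/module/zfs/parameters; a sketch of what I'd expect to work, assuming a kernel where runtime changes are honored (which may be exactly what's broken here):

Code:
# Runtime change, values in bytes; not persistent across reboots.
# Raise the max first so the min is never above the max.
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_min
# Verify what the ARC actually adopted
awk '/^c_(min|max)/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats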
 
Hi,
See here; this will help you.
 
