[PVE 6.0] Cannot set ZFS arc_min and arc_max

Hello,

I just installed a fresh PVE 6.0 on two servers; both are identical:

root@proxmox02:~# pveversion
pve-manager/6.0-2/865bbe32 (running kernel: 5.0.15-1-pve)

I installed with ZFS for the root partition:

root@proxmox02:~# zpool list
NAME             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool            236G  1.02G   235G        -         -     0%     0%  1.00x  ONLINE  -
stor-local-zfs   464G  1.38M   464G        -         -     0%     0%  1.00x  ONLINE  -

root@proxmox02:~# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             1.01G   228G   104K  /rpool
rpool/ROOT        1.01G   228G    96K  /rpool/ROOT
rpool/ROOT/pve-1  1.01G   228G  1.01G  /
rpool/data         112K   228G   112K  /rpool/data
stor-local-zfs    1.22M   449G    96K  /stor-local-zfs

When I set the ZFS ARC limits, update the initramfs, and reboot, nothing changes.
This was working on PVE 5.4, but the difference is that back then the root partition was ext4, and now it's ZFS.

root@proxmox02:~# cat /etc/modprobe.d/zfs.conf
# Minimum ZFS ARC : 512 MB
options zfs zfs_arc_min=536870912
# Maximum ZFS ARC : 4 GB
options zfs zfs_arc_max=4294967296
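
(For reference, those values are just 512 MiB and 4 GiB expressed in bytes; a quick sanity check in the shell:)

root@proxmox02:~# echo $((512*1024*1024)) $((4*1024*1024*1024))
536870912 4294967296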

root@proxmox02:~# update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-5.0.15-1-pve

root@proxmox02:~# reboot

root@proxmox02:~# grep c_ /proc/spl/kstat/zfs/arcstats
c_min 4 1053766656
c_max 4 16860266496
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 0
arc_meta_used 4 92992024
arc_meta_limit 4 12645199872
arc_dnode_limit 4 1264519987
arc_meta_max 4 127125672
arc_meta_min 4 16777216
async_upgrade_sync 4 58
arc_need_free 4 0
arc_sys_free 4 526883328
arc_raw_size 4 0

root@proxmox02:~# arc_summary

------------------------------------------------------------------------
ZFS Subsystem Report                            Mon Jul 15 00:36:26 2019
Linux 5.0.15-1-pve                                            0.8.1-pve1
Machine: proxmox02 (x86_64)                                   0.8.1-pve1

ARC status:                                                      HEALTHY
        Memory throttle count:                                         0

ARC size (current):                                     3.6 %  579.3 MiB
        Target size (adaptive):                       100.0 %   15.7 GiB
        Min size (hard limit):                          6.2 % 1005.0 MiB
        Max size (high water):                            16:1   15.7 GiB
        Most Frequently Used (MFU) cache size:         32.0 %  180.3 MiB
        Most Recently Used (MRU) cache size:           68.0 %  383.3 MiB
        Metadata cache size (hard limit):              75.0 %   11.8 GiB
        Metadata cache size (current):                  0.7 %   88.7 MiB
        Dnode cache size (hard limit):                 10.0 %    1.2 GiB
        Dnode cache size (current):                     0.6 %    6.9 MiB

Am I missing something?
 
I even tried removing the comments:

root@proxmox02:~# cat /etc/modprobe.d/zfs.conf
options zfs zfs_arc_min=536870912
options zfs zfs_arc_max=4294967296

Running update-initramfs in verbose mode shows that the file is read:

root@proxmox02:~# update-initramfs -uv | grep zfs.conf
Adding config /etc/modprobe.d/zfs.conf

Still no change after reboot:

root@proxmox02:~# grep 'c_m[ia]' /proc/spl/kstat/zfs/arcstats
c_min 4 1053766656
c_max 4 16860266496
 
works here. please try first with runtime modification via /sys/module/zfs/parameters/zfs_arc_* ..

are you using EFI boot? if so, you need to sync the initrds after updating them, e.g. via "pve-efiboot-tool refresh".
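
if you are not sure whether the machine boots via EFI, a quick check is whether /sys/firmware/efi exists on the running system, e.g.:

# the directory is only present when the kernel was booted in UEFI mode
[ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "legacy BIOS boot"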
 
This is not working:

root@proxmox02:~# grep 'c_m[ia]' /proc/spl/kstat/zfs/arcstats
c_min 4 1053766656
c_max 4 16860266496

root@proxmox02:~# cat /sys/module/zfs/parameters/zfs_arc_min
0
root@proxmox02:~# cat /sys/module/zfs/parameters/zfs_arc_max
0

root@proxmox02:~# echo 536870912 > /sys/module/zfs/parameters/zfs_arc_min
root@proxmox02:~# echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max

root@proxmox02:~# cat /sys/module/zfs/parameters/zfs_arc_min
536870912
root@proxmox02:~# cat /sys/module/zfs/parameters/zfs_arc_max
4294967296

root@proxmox02:~# grep 'c_m[ia]' /proc/spl/kstat/zfs/arcstats
c_min 4 1053766656
c_max 4 16860266496

root@proxmox02:~# arc_summary | grep -A3 'ARC size'
ARC size (current): 1.5 % 246.6 MiB
Target size (adaptive): 100.0 % 15.7 GiB
Min size (hard limit): 6.2 % 1005.0 MiB
Max size (high water): 16:1 15.7 GiB

root@proxmox02:~# pve-efiboot-tool refresh
Running hook script '/etc/kernel/postinst.d/zz-pve-efiboot'..
Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private mount namespace..
No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync.

root@proxmox02:~# efibootmgr
BootCurrent: 0008
Timeout: 1 seconds
BootOrder: 0008,0001,0004,0002,0000
Boot0000* proxmox
Boot0001* Hard Drive
Boot0002* UEFI: Built-in EFI Shell
Boot0004* Network Card
Boot0008* UEFI OS

root@proxmox02:~# lsmod | grep zfs
zfs 3440640 5
zunicode 331776 1 zfs
zlua 143360 1 zfs
zcommon 81920 1 zfs
znvpair 77824 2 zfs,zcommon
zavl 16384 1 zfs
icp 278528 1 zfs
spl 106496 5 zfs,icp,znvpair,zcommon,zavl

Don't know what else I can do. All packages are up to date.
 
OK, I found the solution by reading the embedded docs at https://my_ip:8006/pve-docs/chapter-sysadmin.html#sysboot
I had to run pve-efiboot-tool init /dev/nvme0n1p2 to initialize the ESP partition.
I don't know why it wasn't already initialized during installation.
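
In hindsight, the hint was the "No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync." line in the refresh output above. To find the ESP partition to pass to init, look for the vfat partition with the EFI System partition type c12a7328-f81f-11d2-ba4b-00a0c93ec93b (on my install, /dev/nvme0n1p2):

root@proxmox02:~# lsblk -o NAME,SIZE,FSTYPE,PARTTYPE /dev/nvme0n1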

Hope it helps someone:


root@proxmox02:~# pve-efiboot-tool init /dev/nvme0n1p2
Re-executing '/usr/sbin/pve-efiboot-tool' in new private mount namespace..
UUID="4D6A-6280" SIZE="536870912" FSTYPE="vfat" PARTTYPE="c12a7328-f81f-11d2-ba4b-00a0c93ec93b" PKNAME="nvme0n1" MOUNTPOINT=""
Mounting '/dev/nvme0n1p2' on '/var/tmp/espmounts/4D6A-6280'.
Installing systemd-boot..
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/4D6A-6280/EFI/systemd/systemd-bootx64.efi".
Copied "/usr/lib/systemd/boot/efi/systemd-bootx64.efi" to "/var/tmp/espmounts/4D6A-6280/EFI/BOOT/BOOTX64.EFI".
Created EFI boot entry "Linux Boot Manager".
Configuring systemd-boot..
Unmounting '/dev/nvme0n1p2'.
Adding '/dev/nvme0n1p2' to list of synced ESPs..
Refreshing kernels and initrds..
Running hook script '/etc/kernel/postinst.d/zz-pve-efiboot'..
Copying and configuring kernels on /dev/disk/by-uuid/4D6A-6280
Copying kernel and creating boot-entry for 5.0.15-1-pve

root@proxmox02:~# reboot


root@proxmox02:~# arc_summary | grep -A3 'ARC size'

ARC size (current): 5.5 % 223.2 MiB
Target size (adaptive): 100.0 % 4.0 GiB
Min size (hard limit): 12.5 % 512.0 MiB
Max size (high water): 8:1 4.0 GiB
 
After the upgrade, ZFS 0.8 behaves differently from 0.7. I set the ARC size by echoing the parameter, but in 0.8 the change does not take effect immediately.
 

please provide the exact commands and values before/after..
 
Current ARC

Code:
# arc_summary

------------------------------------------------------------------------
ZFS Subsystem Report                            Thu Aug 08 10:27:10 2019
Linux 5.0.18-1-pve                                            0.8.1-pve1
Machine: nmz-lt (x86_64)                                      0.8.1-pve1

ARC status:                                                      HEALTHY
        Memory throttle count:                                         0

ARC size (current):                                    99.0 %   11.9 GiB
        Target size (adaptive):                       100.0 %   12.0 GiB
        Min size (hard limit):                        100.0 %   12.0 GiB
        Max size (high water):                            1:1   12.0 GiB
        Most Frequently Used (MFU) cache size:         44.8 %    4.9 GiB
        Most Recently Used (MRU) cache size:           55.2 %    6.1 GiB
        Metadata cache size (hard limit):              30.0 %    3.6 GiB
        Metadata cache size (current):                 77.0 %    2.8 GiB
        Dnode cache size (hard limit):                 10.0 %  368.6 MiB
        Dnode cache size (current):                    23.1 %   85.1 MiB


Lowering min/max to 5 GiB / 6 GiB:
Code:
echo 5368709120 > /sys/module/zfs/parameters/zfs_arc_min
echo 6442450944 > /sys/module/zfs/parameters/zfs_arc_max

ARC report

Code:
# arc_summary

------------------------------------------------------------------------
ZFS Subsystem Report                            Thu Aug 08 10:29:52 2019
Linux 5.0.18-1-pve                                            0.8.1-pve1
Machine: nmz-lt (x86_64)                                      0.8.1-pve1

ARC status:                                                      HEALTHY
        Memory throttle count:                                         0

ARC size (current):                                    99.1 %   11.9 GiB
        Target size (adaptive):                       100.0 %   12.0 GiB
        Min size (hard limit):                        100.0 %   12.0 GiB
        Max size (high water):                            1:1   12.0 GiB
        Most Frequently Used (MFU) cache size:         44.6 %    4.9 GiB
        Most Recently Used (MRU) cache size:           55.4 %    6.1 GiB
        Metadata cache size (hard limit):              30.0 %    3.6 GiB
        Metadata cache size (current):                 76.7 %    2.8 GiB
        Dnode cache size (hard limit):                 10.0 %  368.6 MiB
        Dnode cache size (current):                    23.0 %   84.9 MiB

Code:
grep . /sys/module/zfs/parameters/zfs_arc_*
...
/sys/module/zfs/parameters/zfs_arc_min:5368709120
/sys/module/zfs/parameters/zfs_arc_max:6442450944
...


After 3 hours...

Code:
# arc_summary

------------------------------------------------------------------------
ZFS Subsystem Report                            Thu Aug 08 13:24:37 2019
Linux 5.0.18-1-pve                                            0.8.1-pve1
Machine: nmz-lt (x86_64)                                      0.8.1-pve1

ARC status:                                                      HEALTHY
        Memory throttle count:                                         0

ARC size (current):                                    28.5 %    1.7 GiB
        Target size (adaptive):                       100.0 %    6.0 GiB
        Min size (hard limit):                         83.3 %    5.0 GiB
        Max size (high water):                            1:1    6.0 GiB
        Most Frequently Used (MFU) cache size:         51.0 %  848.4 MiB
        Most Recently Used (MRU) cache size:           49.0 %  813.7 MiB
        Metadata cache size (hard limit):              30.0 %    1.8 GiB
        Metadata cache size (current):                 71.8 %    1.3 GiB
        Dnode cache size (hard limit):                 10.0 %  184.3 MiB
        Dnode cache size (current):                    13.6 %   25.1 MiB
 
dropping the ARC can take a while, since it will only shrink on pressure and not just because you set a limit. if you want a persistent lower limit, set it persistently and wait or reboot.
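
if you want to watch it shrink over time instead of re-running arc_summary, the raw counters are enough, e.g.:

watch -n 10 "grep -E '^(size|c|c_max) ' /proc/spl/kstat/zfs/arcstats"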
 
As I said, in 0.7 changing the ARC size took effect almost immediately. After this I set the ARC size back to 12G with echo, but the ARC is stuck at 5/6. Four hours have passed. I think some new setting must be involved.
 
For anyone Googling this in the future, here is the full list of commands to update zfs_arc_max if you're using EFI boot:

Code:
# cat /etc/modprobe.d/zfs.conf
# 8GB
options zfs zfs_arc_max=8589934592

# update-initramfs -u
update-initramfs: Generating /boot/initrd.img-5.0.15-1-pve

# pve-efiboot-tool refresh 
Running hook script 'pve-auto-removal'..
Running hook script 'zz-pve-efiboot'..
Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private mount namespace..
Copying and configuring kernels on /dev/disk/by-uuid/0242-D2D0
        Copying kernel and creating boot-entry for 5.0.15-1-pve
Copying and configuring kernels on /dev/disk/by-uuid/0243-232E
        Copying kernel and creating boot-entry for 5.0.15-1-pve

# reboot

# arc_summary
ARC size (current):                                     4.1 %  337.2 MiB
        Target size (adaptive):                       100.0 %    8.0 GiB
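
To double-check at the raw-counter level that the new limit was picked up at boot (the last column should now show 8589934592, i.e. 8 GiB):

Code:
# grep c_max /proc/spl/kstat/zfs/arcstats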
 
the documentation is correct - unless you are using an outdated version of pve-kernel-helper, update-initramfs -u is enough to trigger a resync after initramfs generation.
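
you can check which version you have installed with the usual dpkg query:

dpkg -s pve-kernel-helper | grep '^Version'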
 
Hi everyone. I have the same problem on:

# zfs version
zfs-0.8.5-pve1
# pveversion
pve-manager/6.3-3/eee5f901 (running kernel: 5.4.78-2-pve)

# cat /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=1737746

Target size (adaptive): 100.0 % 3.3 GiB

Is it a ZFS bug?
 
You are trying to limit your ARC to 1.7 MB? You know that the rule of thumb for the ARC is 4 GB + 1 GB of RAM per 1 TB of raw disk capacity?
Did you try something more realistic, like options zfs zfs_arc_max=4294967296 (for 4 GB)?
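
As a rough example of that rule of thumb: the first command below prints the total raw pool capacity in bytes, and the second shows the math for an example 8 TiB of raw capacity (4 GiB + 8 x 1 GiB):

Code:
# zpool list -Hp -o size | awk '{sum+=$1} END {print sum}'
# echo $(( (4 + 8) * 1024*1024*1024 ))
12884901888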
 
You can force drop your caches, per my instructions:

"Why isnt the arc_max setting honoured on ZFs on Linux" https://serverfault.com/a/833338/261576

Dropping the ARC cache too low results in a garbage-collection-like situation from time to time, with a zfs_arc* process consuming lots of CPU and disk IO for 30-180 s at a time and horrible iowait for other processes. Never go too low. "Per TB of disk" isn't a great way to calculate it; the number of IOPS across different areas of the disk is far more relevant, and different workloads cause different patterns, so you'll soon find out if it's not enough RAM (also consider the ZIL; I've had little success with L2ARC, though).
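
The gist (roughly, and with example values): lower the runtime limit, then drop the kernel caches so the ARC is reclaimed right away instead of waiting for memory pressure:

Code:
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
echo 3 > /proc/sys/vm/drop_caches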
 
