Hi everyone,
My current Proxmox version is the same across both nodes, and each node has 128 GB of DDR4 RAM:
root@pveant01:~# pveversion
pve-manager/6.0-9/508dcee0 (running kernel: 5.0.21-3-pve)
I have two Proxmox installations that I've been running for a while; they originally started on kernel 5.0.15-1-pve. The current configuration is as follows:
1. Boot drive (250 GB) with an XFS filesystem
2. Two ZFS pools, as shown below:
root@pveant01:~# zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zndisk01   1.86T  66.0G  1.79T        -         -     4%     3%  1.00x    ONLINE  -
zsdisk01   3.72T   198G  3.53T        -         -     7%     5%  1.00x    ONLINE  -
I had originally set limits for the ZFS ARC as shown below, and they were being honored prior to upgrading to PVE 6.0-9, which includes kernel 5.0.21-3-pve.
root@pveant01:~# cat /etc/modprobe.d/zfs.conf
# Minimum ZFS ARC : 1 GB
options zfs zfs_arc_min=1073741824
# Maximum ZFS ARC : 2 GB
options zfs zfs_arc_max=2147483648
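As a side note, the same limits can also be applied at runtime by writing to the parameters under /sys/module/zfs/parameters (values in bytes). A rough sketch, assuming a root shell; these runtime writes do not persist across reboots:
Code:
# Apply the same 1 GB / 2 GB limits at runtime, in bytes (temporary; lost on reboot)
echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_min
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max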
Now, whenever my ARC limits are configured to anything less than 2 GB (min) and 4 GB (max), the ARC ignores my module parameters. Below is the arcstat readout after rebooting with the 1 GB / 2 GB limits above; the target size (c) still sits at 62G even though the kernel module parameters show the correct values:
root@pveant01:~# arcstat
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz      c
08:59:50     0     0      0     0    0     0    0     0    0    44M    62G
Code:
root@pveant01:~# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 1073741824
c_max 4 67548008448
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 0
arc_meta_used 4 28487504
arc_meta_limit 4 50661006336
arc_dnode_limit 4 5066100633
arc_meta_max 4 68519832
arc_meta_min 4 16777216
async_upgrade_sync 4 261
arc_need_free 4 0
arc_sys_free 4 2110875264
arc_raw_size 4 0
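The mismatch is easier to see with the raw kstat values converted to GiB; a quick sanity check, assuming awk is available (the third column in arcstats is the value in bytes):
Code:
# Print the active ARC min/max targets in GiB (c_max here works out to ~62.9 GiB, not the configured 2 GiB)
awk '/^c_min|^c_max/ {printf "%-6s %.1f GiB\n", $1, $3 / (1024*1024*1024)}' /proc/spl/kstat/zfs/arcstats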
Code:
root@pveant01:~# arc_summary
------------------------------------------------------------------------
ZFS Subsystem Report Mon Oct 21 09:13:41 2019
Linux 5.0.21-3-pve 0.8.2-pve1
Machine: pveant01 (x86_64) 0.8.2-pve1
ARC status: HEALTHY
Memory throttle count: 0
ARC size (current): 0.1 % 44.7 MiB
Target size (adaptive): 100.0 % 62.9 GiB
Min size (hard limit): 1.6 % 1.0 GiB
Max size (high water): 62:1 62.9 GiB
Most Frequently Used (MFU) cache size: 3.6 % 1.5 MiB
Most Recently Used (MRU) cache size: 96.4 % 40.0 MiB
Metadata cache size (hard limit): 75.0 % 47.2 GiB
Metadata cache size (current): 0.1 % 27.2 MiB
Dnode cache size (hard limit): 10.0 % 4.7 GiB
Dnode cache size (current): < 0.1 % 1.1 MiB
#Kernel module parameters properly reflecting settings in zfs.conf
root@pveant01:~# cat /sys/module/zfs/parameters/zfs_arc_max
2147483648
root@pveant01:~# cat /sys/module/zfs/parameters/zfs_arc_min
1073741824
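For what it's worth, that 62.9 GiB target looks like the ZFS-on-Linux default of roughly half of physical memory rather than my configured 2 GB; a back-of-the-envelope check using the c_max value from the arcstats output above:
Code:
# 67548008448 B / 2^30 ≈ 62.9 GiB, i.e. about half of this node's 128 GB of RAM
echo $(( 67548008448 / 1073741824 ))   # prints 62 (integer GiB)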
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Below are the readouts after changing the values in my zfs.conf to the following:
root@pveant01:~# cat /etc/modprobe.d/zfs.conf
# Minimum ZFS ARC : 2 GB
options zfs zfs_arc_min=2147483648
# Maximum ZFS ARC : 4 GB
options zfs zfs_arc_max=4294967296
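For completeness: on Debian-based systems like Proxmox, if the zfs module is loaded from the initramfs, changes to /etc/modprobe.d/zfs.conf also need the initramfs rebuilt before a reboot picks them up. A minimal sketch, assuming initramfs-tools:
Code:
# Rebuild the initramfs so the new zfs.conf values are included at boot
update-initramfs -u -k all
# Optionally confirm the modprobe config is embedded in the current initramfs
lsinitramfs /boot/initrd.img-$(uname -r) | grep zfs.conf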
root@pveant01:~# arcstat
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz      c
09:21:04     0     0      0     0    0     0    0     0    0    44M   4.0G
Code:
root@pveant01:~# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 2147483648
c_max 4 4294967296
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 0
arc_meta_used 4 28419016
arc_meta_limit 4 3221225472
arc_dnode_limit 4 322122547
arc_meta_max 4 67454320
arc_meta_min 4 16777216
async_upgrade_sync 4 248
arc_need_free 4 0
arc_sys_free 4 2110875264
arc_raw_size 4 0
Code:
root@pveant01:~# arc_summary
------------------------------------------------------------------------
ZFS Subsystem Report Mon Oct 21 09:23:05 2019
Linux 5.0.21-3-pve 0.8.2-pve1
Machine: pveant01 (x86_64) 0.8.2-pve1
ARC status: HEALTHY
Memory throttle count: 0
ARC size (current): 1.1 % 44.1 MiB
Target size (adaptive): 100.0 % 4.0 GiB
Min size (hard limit): 50.0 % 2.0 GiB
Max size (high water): 2:1 4.0 GiB
Most Frequently Used (MFU) cache size: 1.9 % 801.0 KiB
Most Recently Used (MRU) cache size: 98.1 % 40.2 MiB
Metadata cache size (hard limit): 75.0 % 3.0 GiB
Metadata cache size (current): 0.9 % 27.1 MiB
Dnode cache size (hard limit): 10.0 % 307.2 MiB
Dnode cache size (current): 0.3 % 1.1 MiB
#Kernel module parameters still properly reflecting settings in zfs.conf
root@pveant01:~# cat /sys/module/zfs/parameters/zfs_arc_max
4294967296
root@pveant01:~# cat /sys/module/zfs/parameters/zfs_arc_min
2147483648
So... is there something I'm missing as to why Proxmox's ZFS subsystem is not letting me set a zfs_arc_max value of less than 4 GB?