[SOLVED] PVE 6.3-4 and ZFS 2.0 ignores zfs_arc_max

mhaluska

Well-Known Member
Sep 23, 2018
Hi,
my zfs_arc_max setting of 64 GB is ignored after the update to the latest version. According to the v2.0 module options documentation, this should work as it did in the previous version.

Code:
# pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.98-1-pve)
pve-manager: 6.3-4 (running version: 6.3-4/0a38c56f)
pve-kernel-5.4: 6.3-5
pve-kernel-helper: 6.3-5
pve-kernel-5.4.98-1-pve: 5.4.98-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.3-1
proxmox-backup-client: 1.0.8-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-5
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.2.0-2
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-5
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.3-pve1

Code:
# cat /etc/modprobe.d/zfs.conf
options zfs zfs_arc_min=68719476736
options zfs zfs_arc_max=68719476736

Code:
# arc_summary

------------------------------------------------------------------------
ZFS Subsystem Report                            Sat Feb 27 13:52:06 2021
Linux 5.4.98-1-pve                                            2.0.3-pve1
Machine: pve1 (x86_64)                                        2.0.3-pve1

ARC status:                                                      HEALTHY
        Memory throttle count:                                         0

ARC size (current):                                    36.2 %   91.1 GiB
        Target size (adaptive):                        36.2 %   91.1 GiB
        Min size (hard limit):                         25.4 %   64.0 GiB
        Max size (high water):                            3:1  251.9 GiB
        Most Frequently Used (MFU) cache size:         10.0 %    8.7 GiB
        Most Recently Used (MRU) cache size:           90.0 %   78.7 GiB
        Metadata cache size (hard limit):              75.0 %  188.9 GiB
        Metadata cache size (current):                  3.1 %    5.8 GiB
        Dnode cache size (hard limit):                 10.0 %   18.9 GiB
        Dnode cache size (current):                     4.4 %  860.9 MiB
...
...

Code:
# arcstat
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  size     c  avail
13:52:48     0     0      0     0    0     0    0     0    0   91G   91G   268G

Code:
# free -g
              total        used        free      shared  buff/cache   available
Mem:            503         204          19           0         280         295
Swap:             0           0           0
 
Hi!

Yeah, the parameter is still the same. Was it applied by modprobe?
Bash:
cat /sys/module/zfs/parameters/zfs_arc_max

Or can you apply it manually there?
Bash:
echo "68719476736" >/sys/module/zfs/parameters/zfs_arc_max
 
Hey,

Bash:
# cat /sys/module/zfs/parameters/zfs_arc_max
68719476736

Seems it's totally ignored:
Bash:
# echo "68719476736" >/sys/module/zfs/parameters/zfs_arc_max
# cat /sys/module/zfs/parameters/zfs_arc_max
68719476736
# arcstat
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  size     c  avail
14:20:04     0     0      0     0    0     0    0     0    0   90G   91G   268G
# arc_summary

------------------------------------------------------------------------
ZFS Subsystem Report                            Sat Feb 27 14:21:05 2021
Linux 5.4.98-1-pve                                            2.0.3-pve1
Machine: pve1 (x86_64)                                        2.0.3-pve1

ARC status:                                                      HEALTHY
        Memory throttle count:                                         0

ARC size (current):                                    36.0 %   90.7 GiB
        Target size (adaptive):                        36.3 %   91.5 GiB
        Min size (hard limit):                         25.4 %   64.0 GiB
        Max size (high water):                            3:1  251.9 GiB
        Most Frequently Used (MFU) cache size:          9.9 %    8.6 GiB
        Most Recently Used (MRU) cache size:           90.1 %   78.5 GiB
        Metadata cache size (hard limit):              75.0 %  188.9 GiB
        Metadata cache size (current):                  3.1 %    5.8 GiB
        Dnode cache size (hard limit):                 10.0 %   18.9 GiB
        Dnode cache size (current):                     4.5 %  862.3 MiB
...
...
 
If you change zfs_arc_max, also make sure that zfs_arc_min is less than zfs_arc_max, otherwise it uses zfs_arc_min as zfs_arc_max. This might explain why it ignores your setting of zfs_arc_max on the command line (in your last Bash example).
If it ignores your settings in /etc/modprobe.d/zfs.conf, then something went wrong when you ran update-initramfs before rebooting. Can you run update-initramfs -u again (with both zfs_arc_max and zfs_arc_min in /etc/modprobe.d/zfs.conf)?
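For reference, the whole cycle would look roughly like this (a minimal sketch; the 32 GiB min is just a placeholder I picked so that min < max, the max is the 64 GiB from your config):
Bash:
# cat /etc/modprobe.d/zfs.conf
options zfs zfs_arc_min=34359738368
options zfs zfs_arc_max=68719476736
# update-initramfs -u
# reboot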
 
I always used zfs_arc_min = zfs_arc_max and this was working; I didn't change this configuration. The update-initramfs command was executed during the update, with the same ARC settings I have used for the past year. Also, because of the new ZFS version, I ran update-initramfs manually before the reboot, just to be sure everything would be fine.
 
Good catch, I actually overlooked that. Can you try without the ARC min config? (I mean, it may have worked before and the semantics have now changed, which could be seen as a bug, so let's better check that.)
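In other words, something along these lines (just a sketch; the max is the 64 GiB value from your config, zfs_arc_min is simply left out so the built-in default applies):
Bash:
# cat /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=68719476736
# update-initramfs -u && reboot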
 
Interesting... Thanks @avw, it seems that with the current version min and max cannot be the same value:

Bash:
# cat /sys/module/zfs/parameters/zfs_arc_min /sys/module/zfs/parameters/zfs_arc_max
68719476736
68719476737

Bash:
# arc_summary

------------------------------------------------------------------------
ZFS Subsystem Report                            Sat Feb 27 15:58:47 2021
Linux 5.4.98-1-pve                                            2.0.3-pve1
Machine: pve1 (x86_64)                                        2.0.3-pve1

ARC status:                                                      HEALTHY
        Memory throttle count:                                         0

ARC size (current):                                    99.5 %   63.7 GiB
        Target size (adaptive):                       100.0 %   64.0 GiB
        Min size (hard limit):                        100.0 %   64.0 GiB
        Max size (high water):                            1:1   64.0 GiB
        Most Frequently Used (MFU) cache size:         13.7 %    8.3 GiB
        Most Recently Used (MRU) cache size:           86.3 %   52.2 GiB
        Metadata cache size (hard limit):              75.0 %   48.0 GiB
        Metadata cache size (current):                 10.6 %    5.1 GiB
        Dnode cache size (hard limit):                 10.0 %    4.8 GiB
        Dnode cache size (current):                    17.6 %  864.6 MiB

I didn't find any info about this in the docs.
 
I've got the same issue. Also, running update-initramfs -u did not fix it.
The ZFS filesystem was not updated to version 2.0.3; it is still at 0.8.

I am very surprised that such a major update is done without notice and as a minor version update.

Edit: The thing is, once you have upgraded to 2.0.x there is no way back to 0.8. zfs-utils, kernel modules, etc. at 0.8 do not work with 2.0. One has to create new rescue USB sticks, and so on. This update has huge implications.
 
If you just install the packages and don't follow that up with a manual CLI zpool upgrade of your existing pools, you should still be able to use older kernels and kernel modules built for ZFS 0.8. That is my understanding from a post here by the Proxmox devs.
 
That's correct, and that's what I am doing right now.

However, I wonder if the zfs_arc_max issue is related to the old version of the ZFS pool.
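If you want to check whether the pools were actually upgraded on-disk, something like this should show it (rpool is just an example pool name; zpool upgrade without arguments only lists pools and does not change anything):
Bash:
# zpool upgrade            # lists pools that do not yet have all supported features enabled
# zpool get version rpool  # '-' here means the pool already uses feature flags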
 
I did a reboot and think it is working now:
Code:
ARC size (current):                                    14.9 %    3.3 GiB
        Target size (adaptive):                        17.6 %    3.9 GiB
        Min size (hard limit):                         17.6 %    3.9 GiB
        Max size (high water):                            5:1   22.4 GiB
        Most Frequently Used (MFU) cache size:         15.3 %  501.1 MiB
        Most Recently Used (MRU) cache size:           84.7 %    2.7 GiB
        Metadata cache size (hard limit):              75.0 %   16.8 GiB
        Metadata cache size (current):                  1.7 %  298.6 MiB
        Dnode cache size (hard limit):                 10.0 %    1.7 GiB
        Dnode cache size (current):                     0.7 %   11.7 MiB
 
(with both zfs_arc_max and zfs_arc_min in /etc/modprobe.d/zfs.conf)?
Hi, I have a similar problem to the one described in this post: my cluster eats up all the memory. I can't find how you specified this in /etc/modprobe.d/zfs.conf. I have the same version of Proxmox. Do I need to create the zfs.conf file, or should I find it exactly there? Thanks for your answer.
 
Hi, just set min size < max size, for example:
Bash:
# cat /etc/modprobe.d/zfs.conf
# zfs_arc_min is 64 GiB - 1 B, zfs_arc_max is 64 GiB
options zfs zfs_arc_min=68719476735
options zfs zfs_arc_max=68719476736
 
Ok thanks, I see that you should enter the values, but the weird thing about my cluster is that the zfs.conf file does not exist in /etc/modprobe.d/.
 
Correct, this file is not there by default, but you can create it. You can also change the values dynamically, without a reboot, using:
Bash:
echo 68719476735 > /sys/module/zfs/parameters/zfs_arc_min # For ARC min value
echo 68719476736 > /sys/module/zfs/parameters/zfs_arc_max # For ARC max value
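Afterwards you can check that the values were picked up, roughly like this (note the ARC shrinks towards the new max over time, it does not drop instantly):
Bash:
# cat /sys/module/zfs/parameters/zfs_arc_min /sys/module/zfs/parameters/zfs_arc_max
# arc_summary | grep -A 3 'ARC size'   # current size, target and the new min/max limits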
 
Thank you very much, you have cleared up my doubt. I will try it, and afterwards I should be able to set the parameters I want. In addition, do I have to run update-initramfs -u or not?
 
I'm not 100% sure, but I think you should do so. Anyway, it won't cause any harm... ;)
 
I can confirm that setting zfs_arc_min equal to zfs_arc_max breaks the old behavior and resets the upper limit to the default of half of RAM.

Very painful - this was my default setup for years :(
 
If you change zfs_arc_max, also make sure that zfs_arc_min is less than zfs_arc_max, otherwise it uses zfs_arc_min as zfs_arc_max. This might explain why it ignores your setting of zfs_arc_max on the command line (in your last Bash example).

That's not actually correct!
If you set zfs_arc_min equal to zfs_arc_max, it does not use zfs_arc_min as zfs_arc_max!

It sets zfs_arc_min to the desired value and ignores the value for zfs_arc_max (so it is kept at the default of half of RAM).
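A quick way to see which limit is actually in effect, independent of what the module parameter reports (a small sketch; c_max in the kstats is the ceiling the ARC really uses):
Bash:
# cat /sys/module/zfs/parameters/zfs_arc_max              # the requested value
# awk '/^c_max/ {print $3}' /proc/spl/kstat/zfs/arcstats  # the limit actually in effect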
 