*SOLVED with Install Media 8.4* [BUG?] ZFS ARC cache not set to 10% if ZFS is created after setup

devaux

Situation: The server has 128 GB RAM. Proxmox was installed on a RAID controller (2 SATA system disks in RAID1 with ext4). After the first reboot, I allocated 6x 1.9 TB NVMe disks as ZFS RAID10 for the VMs in the Proxmox GUI.
After this I noticed high memory usage even when not many VMs were running. Seeing lots of cached memory, I suspected ZFS was taking all the memory.
And yes...:
Code:
# arcstat
    time  read  ddread  ddh%  dmread  dmh%  pread  ph%   size      c  avail
19:48:52     0       0     0       0     0      0    0    34G    39G    79G

I've read that the maximum RAM allocated to the ZFS ARC should be 10%, which isn't the case in this scenario.
After running the "helper-tool" (https://forum.proxmox.com/threads/proxmox-zfs-arc-cache-max-size-auto-configuration-script.151318/) it looks as expected:
Code:
# arcstat
    time  read  ddread  ddh%  dmread  dmh%  pread  ph%   size      c  avail
19:59:34     0       0     0       0     0      0    0   7.8G   7.8G   106G
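
For reference, the currently active limit can also be read directly from the module parameters (a value of 0 means no explicit limit, i.e. the ZFS built-in default of roughly half the RAM applies), and as far as I can tell the helper tool writes its persistent setting to /etc/modprobe.d/zfs.conf:
Code:
# currently active ARC size limit in bytes; 0 means the built-in default (~50% of RAM)
cat /sys/module/zfs/parameters/zfs_arc_max
# persistent setting written by the helper script
cat /etc/modprobe.d/zfs.conf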

Did I miss anything while creating the ZFS storage in the GUI, or is this the normal behaviour?

EDIT: 2025-05-08: Looks like it's fixed if you use the latest 8.4 install media.
 
Did I miss anything while creating the ZFS storage in the GUI, or is this the normal behaviour?
I am not sure.

But: you did not install onto ZFS. So maybe the ARC parameters were not configured at all. In that case the old default (50%) may have been active...
 
I am not sure.

But: you did not install onto ZFS. So maybe the ARC parameters were not configured at all. In that case the old default (50%) may have been active...
Yes, that's what I was thinking as well. I could reproduce this on multiple VM hosts.
 
Any news? Looks like a bug to me.
Creating a ZFS pool after installation should also set the ARC size recommended by Proxmox: 10% of RAM, capped at 16 GB.
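
Just as a rough illustration, assuming the formula is min(10% of RAM, 16 GiB), this is roughly what it would come out to on this 128 GB host:
Code:
# expected ARC limit assuming min(10% of RAM, 16 GiB); MemTotal in /proc/meminfo is in kB
awk '/MemTotal/ { arc = $2 * 1024 / 10; cap = 16 * 1024^3; printf "%.0f bytes\n", (arc < cap ? arc : cap) }' /proc/meminfo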
 
For me this doesn't look like a bug but like the normal behaviour of ZFS (which uses 50% of available RAM) if you didn't change the kernel module configuration. If you install Proxmox VE on ZFS, the configuration is set to 10% of available RAM during install.
The wiki describes how to check (and change) these settings:
https://pve.proxmox.com/wiki/ZFS_on_Linux#sysadmin_zfs_limit_memory_usage

Basically, if you don't have a file /etc/modprobe.d/zfs.conf with the ARC configuration parameter, then the default behaviour is applied (using 50% of available RAM) as long as you don't change it during runtime. Since you installed your system to hw-RAID-backed ext4, the file was not created, and thus 50% of available RAM was used for the cache. The config script changed the zfs.conf file, so you now have a slightly smaller cache. However, if you have a lot of free RAM you might as well allow a larger cache, since it will benefit ZFS performance.
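
As a sketch only (the value is just an example for this 128 GB host, roughly 10% of 128 GiB expressed in bytes; pick whatever fits your workload), the zfs.conf approach from the wiki looks like this:
Code:
# /etc/modprobe.d/zfs.conf -- example value: ~12.8 GiB (10% of 128 GiB) in bytes
options zfs zfs_arc_max=13743895347

# apply at runtime without a reboot
echo 13743895347 > /sys/module/zfs/parameters/zfs_arc_max
# rebuilding the initramfs is only strictly required when the root filesystem itself is on ZFS
update-initramfs -u -k all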
 
Pretty easy to hit when you follow best practice: install Proxmox on a small controller-based RAID1 array with an ext4 root filesystem, then configure an NVMe-based ZFS pool for VM storage in the GUI to optimize performance.
 
The default min for ARC isn't 50%. In addition, the ARC is less aggressive than the old page cache: it shrinks better and is actually configurable. For people who like to control their cache overheads, the ARC is the best out there. I would also only consider using ext4 for root, maybe, if it's MBR-only (due to GRUB issues). Controller-based RAID is pretty much obsolete nowadays, and ZFS has better data integrity features, which is ideal for root filesystems.

I assume this issue is easy to fix, though: the Proxmox devs could set a default max ARC regardless of whether ZFS is configured or not; there is no requirement for ZFS to be active in order to configure it.
 
That's what I am talking about. In a fresh install onto ZFS, the installer automatically sets the ARC cache size to the 10% of RAM suggested by Proxmox - but not if you add a ZFS pool after the installation.
I'm pretty sure this is not intentional, and it would be a hurdle for many users who are not very familiar with ZFS.
I think there would be a very easy and useful fix that helps everyone.