ARC parameter does not apply

vskushtan

New Member
Mar 12, 2021
Hi, we need some help.
We have a server with:
AMD EPYC
RAM: 4 TB
NVMe SSD: 93 TB
Debian 10 + Proxmox 6.3-3 (kernel 5.4.78-2-pve)

By default we see:

cat /proc/spl/kstat/zfs/arcstats |grep c_

c_min 4 134887443328
c_max 4 2158199093248


When the server is under load and the ARC reaches 100 % of its size, all VMs, LXC containers, and the host itself freeze. The system then drops caches for 10-15 minutes, and we can't use it during that time. After the caches are dropped, the system is responsive again.

arc_summary3:
ARC size (current): 99.9 % 2.0 TiB
Target size (adaptive): 100.0 % 2.0 TiB
Min size (hard limit): 6.2 % 125.6 GiB


We created /etc/modprobe.d/zfs.conf and added:

options zfs zfs_arc_max=37580963840

Next we executed update-initramfs -u and saw:
update-initramfs: Generating /boot/initrd.img-5.4.78-2-pve
Running hook script 'zz-pve-efiboot'..
Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private mount namespace..
No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync.

Then we rebooted the host.

But the ARC parameter does not apply.

As a next step, we executed pve-efiboot-tool refresh and saw:
Running hook script 'pve-auto-removal'..
Running hook script 'zz-pve-efiboot'..
Re-executing '/etc/kernel/postinst.d/zz-pve-efiboot' in new private mount namespace..
No /etc/kernel/pve-efiboot-uuids found, skipping ESP sync.

Then we rebooted the host again.

But the ARC parameter does not apply.

What do we need to do with the ARC parameters so that our system stops freezing?
What values should we set, given our server's hardware?
 

Attachments

  • arc_ticket.png (55 KB)

H4R0

Well-Known Member
Apr 5, 2020
One of the ZFS releases this year changed the behavior.

You have to set both min and max, with different values.

Code:
cat << 'EOF' > /etc/modprobe.d/zfs.conf
options zfs zfs_arc_min=37580963839
options zfs zfs_arc_max=37580963840
EOF

update-initramfs -u
reboot
 

vskushtan

New Member
Mar 12, 2021
Some version this year changed the behavior.

You have to set both min and max with different values.

Code:
cat << 'EOF' > /etc/modprobe.d/zfs.conf
options zfs zfs_arc_min=37580963839
options zfs zfs_arc_max=37580963840
EOF

update-initramfs -u
reboot
We tried the change with only the "options zfs zfs_arc_max" parameter on a small test VM, and the new value was applied without "options zfs zfs_arc_min".
Do we really need to set "options zfs zfs_arc_min" if we only want to use "options zfs zfs_arc_max"?
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
shop.proxmox.com
Does we need necessary to use "options zfs zfs_arc_min
No. The change in behaviour was only for the case where both min and max are set to the exact same value; that doesn't work in ZFS 2.0.x anymore.
 

vskushtan

New Member
Mar 12, 2021
No. The change in behaviour was only for the case when both min and max are set to the exact same value, that doesn't work in ZFS 2.0.x anymore.
OK, if, as you say, the "min" value is not necessary because Proxmox 6 uses ZFS 2.0.x, why did our attempt fail when we tried to apply the settings with only the max value? Thanks
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
South Tyrol/Italy
shop.proxmox.com
What are the actual current values?
Bash:
head /sys/module/zfs/parameters/zfs_arc_max /sys/module/zfs/parameters/zfs_arc_min

You can also write to those directly to change the settings for the current boot only, e.g., for 8 GiB:
Bash:
echo "$[8 * 1024 * 1024 * 1024]" > /sys/module/zfs/parameters/zfs_arc_max
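As a side note, the `$[ ... ]` form above still works in bash but is deprecated; the same byte value can be produced with the standard `$(( ... ))` arithmetic expansion:

```shell
# 8 GiB expressed in bytes, using POSIX arithmetic expansion
bytes=$((8 * 1024 * 1024 * 1024))
echo "$bytes"
```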
 

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
you need to set an explicit min value if your desired max value is lower than the default min value (which is 1/32 of total system memory).
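That 1/32 rule can be checked with a quick shell calculation. A minimal sketch, using the figures from this thread (4 TiB of RAM, a desired max of 37580963840 bytes); note the real c_min reported by arcstats is slightly lower than 1/32 of 4 TiB because ZFS computes it from usable memory:

```shell
# Default zfs_arc_min is roughly 1/32 of total system memory.
ram_bytes=$((4 * 1024 * 1024 * 1024 * 1024))   # 4 TiB
default_min=$((ram_bytes / 32))                # 137438953472 bytes (~128 GiB)
desired_max=37580963840                        # the value set in zfs.conf

# If the desired max is below the default min, the max alone will not apply.
if [ "$desired_max" -lt "$default_min" ]; then
    echo "arc_max is below the default arc_min -> set zfs_arc_min explicitly"
fi
```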
 

vskushtan

New Member
Mar 12, 2021
What are the actual current values?
Bash:
head /sys/module/zfs/parameters/zfs_arc_max /sys/module/zfs/parameters/zfs_arc_min

You can also write to those directly to change the settings for the current boot only, e.g., for 8 GiB:
Bash:
echo "$[8 * 1024 * 1024 * 1024]" > /sys/module/zfs/parameters/zfs_arc_max
Now I see my value, "37580963840":
root@host:/# head /sys/module/zfs/parameters/zfs_arc_max /sys/module/zfs/parameters/zfs_arc_min
==> /sys/module/zfs/parameters/zfs_arc_max <==
37580963840
==> /sys/module/zfs/parameters/zfs_arc_min <==
0

This value "37580963840" is what we wrote in /etc/modprobe.d/zfs.conf:
options zfs zfs_arc_max=37580963840

But I also still see the default values:
root@host:/# ./arc_summary3 | grep -A2 "ARC size"
ARC size (current): 66.1 % 1.3 TiB
Target size (adaptive): 100.0 % 2.0 TiB
Min size (hard limit): 6.2 % 125.6 GiB
root@host:/# cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 134887443328
c_max 4 2158199093248

When "ARC size (current)" reaches 100 %, we will again get a 10-15 minute freeze :(
How do we change the ARC size, or how do we disable the freezes?
Thanks
 

vskushtan

New Member
Mar 12, 2021
you need to set an explicit min value if your desired max value is lower than the default min value (which is 1/32 of total system memory).
We have the default values c_min = 135 GB and c_max = 2 TB, and I want to change them. We set max = 37 GB, but this value was not applied because the current min value is larger than ours (135 GB > 37 GB). If we write max and min together in zfs.conf, the system will apply these parameters. Am I thinking about this right?
If we have 4 TB of RAM, how do we choose the right values for min and max? I have seen the "max" value chosen by the rule of thumb of 1 GB of RAM per 1 TB of (hard drive) storage.
Thanks
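The rule of thumb mentioned above is often quoted as a 2 GiB base plus roughly 1 GiB per TiB of pool storage. A minimal sketch of that arithmetic, assuming the 93 TiB NVMe capacity from this thread (the result is only a starting point to monitor and adjust, not a guaranteed-good value):

```shell
# Rule-of-thumb ARC sizing: 2 GiB base + ~1 GiB per TiB of pool storage
storage_tib=93
arc_max_gib=$((2 + storage_tib))                       # 95 GiB
arc_max_bytes=$((arc_max_gib * 1024 * 1024 * 1024))    # value for zfs_arc_max
echo "suggested zfs_arc_max=${arc_max_bytes}"
```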
 

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
if you want to limit ARC to 37GB on that system, you need to set arc_max to 37GB, and arc_min to at most 37GB-1.
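This suggestion can be sketched as follows. The config content is written to a temporary file here just for inspection; on a real host the two `options` lines would go into /etc/modprobe.d/zfs.conf, followed by `update-initramfs -u` and a reboot:

```shell
# Keep arc_min strictly below arc_max (ZFS 2.0.x rejects min == max).
arc_max=37580963840            # the cap from this thread
arc_min=$((arc_max - 1))

conf="$(mktemp)"               # stand-in for /etc/modprobe.d/zfs.conf
{
  echo "options zfs zfs_arc_min=${arc_min}"
  echo "options zfs zfs_arc_max=${arc_max}"
} > "$conf"
cat "$conf"
```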
 

vskushtan

New Member
Mar 12, 2021
if you want to limit ARC to 37GB on that system, you need to set arc_max to 37GB, and arc_min to at most 37GB-1.
Can you tell me where I can read a manual about the rules for choosing ARC values, so I can set the right parameters for my configuration?
 

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016

vskushtan

New Member
Mar 12, 2021
if you want to limit ARC to 37GB on that system, you need to set arc_max to 37GB, and arc_min to at most 37GB-1.
Hi, we added the max and min settings in zfs.conf and the changes were applied. We set min = 115 GB and max = 130 GB. After that I saw a cache hit ratio of 95 % and a cache miss ratio of 4 %.
When we used the default 2 TB for max, I saw hit: 99.8 % and miss: 0.2 %.
How do we calculate values so that the hit ratio will be 99 %?
Thanks
 

Dunuin

Famous Member
Jun 30, 2020
Germany
I don't think it's really possible to calculate this. It depends entirely on the files and on the behavior of the users of the server. The hit rates may also change over time.
 

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
yeah, like @Dunuin said - you can estimate, and then monitor and adjust as needed.
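Monitoring can be as simple as recomputing the hit ratio from the `hits` and `misses` counters in arcstats. A minimal sketch; the sample numbers piped in below are made up for illustration, and on a live host you would feed in /proc/spl/kstat/zfs/arcstats instead:

```shell
# Compute the ARC hit ratio from arcstats-style "name type data" lines.
hit_ratio() {
    awk '$1 == "hits"   { h = $3 }
         $1 == "misses" { m = $3 }
         END { printf "%.1f\n", 100 * h / (h + m) }'
}

# Sample input (invented numbers); on a real host:
#   hit_ratio < /proc/spl/kstat/zfs/arcstats
printf 'hits 4 995\nmisses 4 5\n' | hit_ratio
```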
 
