Is the new ARC limit a waste of empty RAM for most users?

IsThisThingOn

Well-Known Member
Nov 26, 2021
Help me understand the new ARC limit default of 10% instead of 50%.
The only OOM scenarios where unshrinkable TXG data is a problem that I can come up with are:

A:
System with 32GB RAM and 10GBit NIC.
10% instead of 50% limits TXG to 3.2GB instead of roughly 5.8GB (the limiting factor is not the ARC setting but the network).
IMHO not a very realistic setup; who has 10GBit NICs but only 32GB of RAM? At the same time, the difference (2.6GB) is almost negligible.

B:
System with 512GB RAM and 400GBit NIC.
10% instead of 50% limits TXG to 51GB instead of roughly 232GB (limit is not ARC settings but network).
This one I can understand.

Don't get me wrong, I get why Proxmox is very conservative with defaults.
But could it be that for 90% of users this just leaves RAM sitting unused, because they are leaving ARC performance on the table?

Or am I misunderstanding something completely?
Did we have lots of OOM issues with the old 50% default?
 
> Did we have lots of OOM issues with the old 50% default?
There were a lot of threads on the forums about "where is my memory", and also some OOMs, yet I can understand why the limit was reduced. 50% is AFAIK the default from the OpenZFS project and may not be tailored towards virtualization. In virtualization you normally want to give as much memory to your VMs as possible (at least according to the many ZFS memory issue threads), so 10% makes sense here. In the end, YMMV. On PBS, for example, 50% is too little and I raised it to 90%.
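For anyone who wants to deviate from the default like this, a minimal sketch of how the ARC ceiling is usually raised on an OpenZFS system (the byte value here is just an example, not a recommendation):

```shell
# Sketch: raising zfs_arc_max persistently (example value: 16 GiB in bytes).
# 1) Set the module parameter in /etc/modprobe.d/zfs.conf:
#      options zfs zfs_arc_max=17179869184
# 2) Rebuild the initramfs so the setting survives reboots:
#      update-initramfs -u -k all
# 3) Or apply it immediately at runtime, without a reboot (root required):
#      echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max
```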
 
> There were a lot of threads about "where is my memory" on the forums and also some OOM,
That one I can understand, and IMHO it deserves its own topic. It would probably be better not to show the ARC in the dashboard at all, or at least not in that aggressive red way.

Windows Task Manager does this in a sneakily good way: cache (just like free space) uses white as the background color, and only a very thin line tells you that more RAM is in use than you might think. See the red circle in the attached screenshot.


> In virtualization, you normally want to have as much memory to your VMs as possible (at least according to the many ZFS memory issue threads), so 10% makes sense here.
I don't really agree with that, since the ARC is shrinkable and the ARC limit is only a maximum, not a static allocation.

Imagine you shut down a temporary test VM. Does the freed-up RAM now just go to waste?

Or let's say you are using ballooning. You plan for the system not to crash even if every VM uses 100% of its RAM.
If even a single VM isn't using 100%, you are already wasting RAM.
 
So you guys have had problems with the ARC (not with TXG) shrinking fast enough?
I've never had an issue on my 3 machines, but it's good to know that this might become one.
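Whether the ARC actually shrinks can be checked directly: OpenZFS exposes the current ARC size (`size`) and the configured ceiling (`c_max`) in `/proc/spl/kstat/zfs/arcstats`. A small sketch that parses those two counters; the sample numbers below are made up for illustration, on a live host you would feed in the real kstat file instead:

```shell
# Extract 'size' (current ARC footprint) and 'c_max' (ARC ceiling) from
# arcstats-formatted input and print them in GiB.
report_arc() {
  awk '$1 == "size" || $1 == "c_max" { printf "%s: %.1f GiB\n", $1, $3 / 2^30 }'
}

# Illustrative sample: a host whose ARC ceiling is 3.2 GiB, currently half full.
printf '%s\n' 'c_max 4 3435973836' 'size 4 1717986918' | report_arc
# On a real system: report_arc < /proc/spl/kstat/zfs/arcstats
```

Watching `size` while starting a memory-hungry VM shows how quickly the ARC gives memory back.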
 
> Proxmox advises a minimum based on size
That is probably a leftover of the old TrueNAS rule, which was never really that good. TrueNAS replaced that rule with "minimum 16GB" and later lowered it even further to "minimum 8GB".

> The default Proxmox limit is 10%, max 16 GB
You are right, I forgot about that (new?) max value.
That wasn't in place with the old 50% setting, right?

Makes the "waste" even worse :)
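As I understand the default described above, it works out to min(10% of RAM, 16 GiB). A quick sketch of that arithmetic for the 512GB machine from scenario B (the clamping formula is my reading of the description, not official code):

```shell
# Default ARC max as described above: 10% of RAM, clamped to 16 GiB.
ram_bytes=$((512 * 1024 * 1024 * 1024))   # scenario B: 512 GiB of RAM
ten_percent=$(( ram_bytes / 10 ))         # 51.2 GiB
cap=$(( 16 * 1024 * 1024 * 1024 ))        # 16 GiB hard cap
arc_max=$(( ten_percent < cap ? ten_percent : cap ))
echo "$arc_max"
```

So the big machine ends up with a 16 GiB ARC instead of ~51 GiB, which is the extra "waste" I mean.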
 
> IMHO not a very realistic setup, who has 10GBit NICs but only 32GB RAM? At the same time, the difference (2.6GB) is almost negligible

Me, for one.

Qotom "firewall appliance" with 4x 10Gbit SFP+ and 5x 2.5Gbit. The CPU is a bit slow (2.2GHz Atom 8-core) so I run the heavier stuff on a Beelink EQR6 Ryzen 9 mini-pc with 64GB RAM.

https://www.amazon.com/dp/B0CJLK9GZV?ref_=ppx_hzsearch_conn_dt_b_fed_asin_title_1
 