Help me understand the new ARC limit default of 10% instead of 50%.
The only OOM scenarios where an unshrinkable TXG is a problem that I can come up with are:
A:
System with 32 GB RAM and a 10 Gbit/s NIC.
A 10% limit instead of 50% caps the TXG at 3.2 GB instead of roughly 5.8 GB (at 50%, the binding limit is no longer the ARC setting but the network: ~10 Gbit/s of ingest over the 5-second default zfs_txg_timeout).
IMHO not a very realistic setup; who runs a 10 Gbit/s NIC with only 32 GB of RAM? At the same time, the difference (2.6 GB) is almost negligible.
B:
System with 512 GB RAM and a 400 Gbit/s NIC.
A 10% limit instead of 50% caps the TXG at 51 GB instead of roughly 232 GB (again, at 50% the binding limit is the network, not the ARC setting).
This one I can understand; see the sketch below.
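To make the arithmetic in both scenarios reproducible, here is a minimal sketch. It assumes the effective TXG size is bounded by whichever is smaller: the ARC limit (fraction of RAM) or the data the NIC can ingest during one TXG interval, and it assumes the OpenZFS default zfs_txg_timeout of 5 seconds:

# Sketch: effective TXG cap = min(ARC limit, NIC ingest per TXG interval).
# Assumption: zfs_txg_timeout at its OpenZFS default of 5 seconds.
TXG_TIMEOUT_S = 5

def txg_cap_gib(ram_gib: float, arc_fraction: float, nic_gbit_s: float) -> float:
    """Return the smaller of the ARC-based and network-based caps, in GiB."""
    arc_cap = ram_gib * arc_fraction
    # NIC throughput: Gbit/s -> GiB/s (decimal bits to binary bytes),
    # accumulated over one TXG interval.
    net_cap = nic_gbit_s * 1e9 / 8 / 2**30 * TXG_TIMEOUT_S
    return min(arc_cap, net_cap)

for ram, nic in [(32, 10), (512, 400)]:
    for frac in (0.10, 0.50):
        print(f"{ram} GiB RAM, {nic} Gbit/s NIC, ARC {frac:.0%}: "
              f"TXG cap ~ {txg_cap_gib(ram, frac, nic):.1f} GiB")

With those inputs it reproduces the figures above: 3.2 vs. ~5.8 for scenario A and ~51 vs. ~232 for scenario B.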
Don't get me wrong, I get why Proxmox is very conservative with defaults.
But could it be that for 90% of users this simply leaves RAM unused and ARC performance on the table?
Or am I misunderstanding something completely?
Did we have lots of OOM issues with the old 50% default?