ZFS - shifting ARC to L2ARC

Jan 24, 2021
Hi,

With the shortage of DDR5 I'm needing to optimise RAM for a new Proxmox VE build. I'm not going to have the 32GB of RAM for the ZFS ARC that the storage size (24TB) calls for. I'm not confident enough to use btrfs RAID1 instead.

I'd like to exploit L2ARC to the max. I can get a 500GB M.2 SSD for the same price as 8GB of RAM; that's a lot more cache for the same money (albeit slower).

* what virtual disk formats / options should I look at that are L2ARC friendly (i.e. write caching)?
* how do I size L2ARC?
* how small can I make the ARC before I hit performance issues, and how small can it go before it becomes unstable? (see the sketch after this list)
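For context, on Linux the ARC ceiling is the zfs_arc_max module parameter. A minimal sketch of capping it on a Proxmox host (the 4GiB value is purely illustrative, not a recommendation):

```
# /etc/modprobe.d/zfs.conf: cap the ARC at 4 GiB (value is in bytes)
options zfs zfs_arc_max=4294967296
```

Follow up with `update-initramfs -u -k all` and a reboot, or write the value to /sys/module/zfs/parameters/zfs_arc_max to apply it live.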

the box will have two storage pools: SSD-based for the system disks, and HDD + L2ARC for the large data
 
You don't "format" an L2arc device, you just add it to the pool and ZFS takes care of it.

https://search.brave.com/search?q=zpool+add+l2arc+device

https://klarasystems.com/articles/openzfs-all-about-l2arc/

You can GPT partition the l2arc physical device to limit the partition size for the cache (you don't have to give it the whole drive.)
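If you go the partition route, a sketch with sgdisk (the device name and the 64GiB size are placeholders):

```
# carve a 64 GiB partition for the cache
sgdisk --new=1:0:+64G /dev/disk/by-id/nvme-EXAMPLE
# add only that partition to the pool
zpool add tank cache /dev/disk/by-id/nvme-EXAMPLE-part1
```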

I would start by giving L2arc 16-32GB and monitor your arcstat / arc_summary output for about a week.
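A couple of commands for that monitoring (arcstat field names vary a little between OpenZFS versions, so treat the -f list as an example):

```
# live ARC and L2ARC hit rates, sampled every 10 seconds
arcstat -f time,read,hit%,l2read,l2hit% 10
# one-shot report, including an L2ARC section
arc_summary
```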

Note that you cannot mirror L2ARC devices; they are disposable, but since OpenZFS 2.0 (persistent L2ARC) the cached data survives a reboot. You can "stack" L2ARC if you need to give it more: just add another partition on the L2ARC device and add it to the pool as cache, as sketched below.
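Stacking is just another zpool add, and because cache vdevs are disposable they can be removed at any time (device names are placeholders again):

```
# grow the cache with a second partition
zpool add tank cache /dev/disk/by-id/nvme-EXAMPLE-part2
# cache vdevs can be removed without risk to pool data
zpool remove tank /dev/disk/by-id/nvme-EXAMPLE-part2
```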

You'd have to experiment with low sizes, but a large L2ARC combined with limited system RAM has been known to cause instability.
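The RAM cost comes from the L2ARC headers, which live in the ARC itself. A back-of-envelope sketch (the 96-byte header size and the 128KiB average block size are assumptions; both vary by OpenZFS version and workload):

```
# estimate RAM pinned by L2ARC headers for a 480 GiB cache
L2SIZE=$((480 * 1024 * 1024 * 1024))  # cache partition size in bytes
RECSIZE=$((128 * 1024))               # assumed average cached block size
HDR=96                                # assumed header bytes per L2ARC block
echo "$(( L2SIZE / RECSIZE * HDR / 1024 / 1024 )) MiB of ARC spent on headers"
```

That works out to roughly 360MiB here, but a zvol-heavy pool with 16K blocks would need eight times as much, which is how an oversized L2ARC can starve a small ARC.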
 
You don't "format" an L2arc device, you just add it to the pool and ZFS takes care of it.

sorry I wasn't clear, the question was about the guests' virtual disks. https://pve.proxmox.com/wiki/Performance_Tweaks#Disk_Cache
i.e. are there any gotchas with guests that can impact ZFS's ability to cache on the host, drive encryption and the like? It's just blocks according to the host, so I wouldn't think so.
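For what it's worth, the common advice for ZFS-backed Proxmox storage is cache=none on the virtual disk, so caching is left to the host's ARC/L2ARC; in the VM config that's the cache option on the disk line (the VM ID and volume name here are made up):

```
# /etc/pve/qemu-server/100.conf (excerpt)
scsi0: local-zfs:vm-100-disk-0,cache=none,discard=on
```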
You'd have to experiment with low sizes of l2arc, but higher sizes with limited system RAM have been known to cause instability.

hmm ok. I was reading about ZFS causing OOM:
https://github.com/openzfs/zfs/issues/17920