L2ARC and ZFS volumes

chrcoluk

Renowned Member
Oct 7, 2018
Hi guys.

As was discovered in the thread below, I have found raising the volblocksize to be very productive.

https://forum.proxmox.com/threads/zfs-zvol-on-hdd-locks-up-vm.81831/

The default size is 8k. I worked out over a year ago that ZFS datasets were much more performant than volumes, but after reading the above thread I found I could recover the same performance by increasing the volume block size. I also saved around 55% of the space, as an 8k volblocksize slightly more than doubles space consumption.

I am now using 64k (not quite as high as 128k) and have matched the cluster size inside the guest operating system as well, since the workload is primarily large files.
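For anyone wanting to try the same, a minimal sketch (the pool and zvol names are hypothetical; note that volblocksize can only be set at creation time, so an existing zvol has to be recreated and the data moved over):

Code:
# create a zvol with a 64k block size and check its space accounting
zfs create -V 50G -o volblocksize=64k tank/vmdata
zfs get volblocksize,used,logicalused,compressratio tank/vmdata

# inside a Windows guest, match the cluster size when formatting, e.g.:
#   format D: /FS:NTFS /A:65536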

Now, just for clarification: as I understand it, the ARC caches on a per-record or per-block basis, depending on whether a dataset or a volume is used. Is that correct?

So, for example, if you have a 50 GB volume, it doesn't get entirely invalidated in the ARC just because one byte got updated; instead, only that 64k block becomes cold. Am I correct?

My load is extremely read heavy, probably one write for every 50 reads or so, so I am thinking of adding an SSD as an L2ARC device. For this configuration to work, I will keep the RAM allocation low in the VMs to avoid wasting RAM on double caching, and set primarycache to all (secondarycache as well).
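Something like the following, assuming a pool called tank (the device path is a placeholder):

Code:
# cache both data and metadata in the ARC and allow spillover to L2ARC
zfs set primarycache=all tank/vmdata
zfs set secondarycache=all tank/vmdata

# attach the SSD as an L2ARC (cache) device
zpool add tank cache /dev/disk/by-id/ata-EXAMPLE-SSD

# watch the cache device warm up
zpool iostat -v tank 5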
 
Do go and read the OpenZFS wiki and its performance tuning documentation.

You'll need to remember that L2ARC is the second level of the ARC (which lives in RAM), and you need RAM (i.e. it reduces the usable ARC) to hold the pointers into the L2ARC. Also, L2ARC is ephemeral: reboot and it's gone, and it gets re-primed as you read data and the ARC overflows.

Thus, ZFS is memory "hungry", but also very efficient (compared to others) in what and how it caches; so while SSDs will speed up access to the spinning rust (HDDs), it comes at the cost of the RAM needed to reference them.
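To put a rough number on that RAM cost: each block cached in L2ARC keeps a small header in the ARC. A back-of-envelope sketch, assuming the commonly quoted ~70 bytes per header for current OpenZFS (older releases used considerably more):

Code:
# 400 GB of L2ARC filled with 64k blocks:
#   400 GiB / 64 KiB       ~= 6.5 million headers
#   6.5 million * ~70 B    ~= 440 MiB of ARC spent on L2ARC pointers
# the same device filled with 8k blocks would cost roughly 8x that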

YMMV; do test, but remember the above.
 
Always prefer RAM over L2ARC, i.e. only consider more L2ARC if your RAM banks are completely filled.
 
Always prefer RAM over L2ARC, i.e. only consider more L2ARC if your RAM banks are completely filled.

I see many advising this. The problem is that it's an expensive route to take; it's effectively "buy some RAM, hope it works, then move to L2ARC if it doesn't", which assumes that buying the RAM, especially when it doesn't work, isn't a financial issue.

I already know bumping the RAM won't do the job on hit rates, as it's hundreds of gigs of data.

However, it has become somewhat of a non-issue now, as bumping the block size has made a tremendous difference: I/O delay has dropped to a fraction of what it was, and the achievable throughput has skyrocketed.
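For anyone wanting to put numbers on their own hit rates, the monitoring tools that ship with OpenZFS make this easy to watch:

Code:
arcstat 5            # per-interval ARC reads, misses and hit rates
arc_summary | less   # cumulative ARC and L2ARC statistics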

Interestingly, OpenZFS 2.0 now supports a persistent L2ARC.
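If anyone wants to check whether their install will rebuild the L2ARC after a reboot, the behaviour is controlled by a module parameter (name as in OpenZFS 2.0; 1 means rebuild on pool import):

Code:
cat /sys/module/zfs/parameters/l2arc_rebuild_enabled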
 
It's still a lot slower and still has to be mapped in RAM, so more RAM is still the optimal solution.
 
I see many advising this. The problem is that it's an expensive route to take; it's effectively "buy some RAM, hope it works, then move to L2ARC if it doesn't", which assumes that buying the RAM, especially when it doesn't work, isn't a financial issue.
Cheap, fast, reliable: you can only ever have two.
 
