Hi guys.
As was discovered in the thread below, I found raising the volblocksize to be very beneficial:
https://forum.proxmox.com/threads/zfs-zvol-on-hdd-locks-up-vm.81831/
The default size is 8k. Over a year ago I worked out that ZFS datasets were much more performant than volumes, but after reading the above thread I found I could recover the same performance by increasing the volume block size. I also saved around 55% space, as an 8k volblocksize slightly more than doubles space consumption on my pool.
I am now using 64k (not quite as high as 128k), and have matched the cluster size inside the guest operating system as well, since my workload is primarily large files.
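For anyone wanting to try the same thing, this is roughly what I did (pool and volume names are just placeholders for my setup). Note that volblocksize can only be set when the zvol is created, so an existing disk has to be recreated and the data moved over:

# create a new zvol with a 64k volblocksize (only settable at creation time)
zfs create -V 50G -o volblocksize=64k tank/vm-100-disk-1

# verify the property took effect
zfs get volblocksize tank/vm-100-disk-1

Then inside the guest, format the disk with a matching 64k cluster/allocation unit size.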
But now, just for clarification: as I understand it, the ARC caches on a per-record or per-block basis, depending on whether a dataset or a volume is used. Is that correct?
So e.g. if you have a 50 GiB volume, it doesn't get entirely invalidated in the ARC just because one byte got updated; instead only the 64k block containing it would become cold. Am I correct?
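A quick way I found to sanity-check this is to watch ARC hit rates while re-reading data after a small write, using the standard tools that ship with OpenZFS:

# overall ARC statistics, including hit/miss counts and current size
arc_summary

# live view of ARC reads, misses and size, refreshed every second
arcstat 1

# raw kernel counters, if the helper scripts are not installed
cat /proc/spl/kstat/zfs/arcstats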
My load is extremely read heavy, probably 1 write for every 50 reads or so, so I am thinking of adding an SSD as an L2ARC device. For this configuration to work I will keep the RAM allocation low in the VMs to avoid wasting RAM on double caching, and keep primarycache set to all (and secondarycache as well).
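If I go ahead with it, the plan would be roughly the following (the device path and dataset names are placeholders for my setup):

# add the SSD as an L2ARC (cache) device to the pool
zpool add tank cache /dev/disk/by-id/ata-SSD-EXAMPLE

# make sure both cache levels stay enabled for the VM disks
zfs set primarycache=all tank/vm-100-disk-1
zfs set secondarycache=all tank/vm-100-disk-1

# confirm the cache device shows up
zpool status tank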