Poor performance on ZFS compared to LVM (same hardware)

So, all my tests concluded that it is much better to use a 4k volblocksize. Is it safe, or could I run into other problems?
 
Random write is a lot faster with the 4k volblocksize, but random read is slower than it was with 8k... :(
 
I created a new zvol with volblocksize=4k, assigned it to the VM, cloned CentOS to the new disk with dd, and rebooted the VM from the new disk, roughly as in the sketch below.
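Roughly the commands involved (pool path, zvol names and size are just examples from my setup, adjust them to yours; note that volblocksize can only be set when the zvol is created):

```
# create the new zvol with a 4k volblocksize (size must be at least the old disk's size)
zfs create -V 32G -o volblocksize=4k rpool/data/vm-100-disk-1

# block-copy the existing CentOS disk onto the new zvol
dd if=/dev/zvol/rpool/data/vm-100-disk-0 \
   of=/dev/zvol/rpool/data/vm-100-disk-1 bs=1M status=progress

# then detach the old disk, attach the new one to the VM and reboot
```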

The performance increased dramatically! Now I get the expected IOPS.


So, I was able to guide you to the correct solution ... lucky you ;) And thx.
 
You can set it in the storage config.
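For reference, that is the blocksize option of a zfspool storage, something like this in /etc/pve/storage.cfg (the storage name "local-zfs" and pool "rpool/data" are just examples):

```
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        blocksize 4k
```

New disks created on that storage then get volblocksize=4k; existing zvols keep the volblocksize they were created with.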


Yes, this is true, but it is not so good. The best way would be to have the possibility to choose the default value (with a check box) for the ZFS pool (as with a datastore), or to enter your own value (a multiple of 4k) when you create a new VM.
For this reason I use the zfs command line; it is faster to finish the same task than through the web interface.
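For example, checking what each existing zvol ended up with takes one command from the shell (the pool name is just an example):

```
# list the volblocksize of every zvol under the pool
zfs get -r -t volume volblocksize rpool
```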
 
Sorry to resurrect an old thread, but I am experiencing the very same behavior nowadays with the latest ZFS 0.8 and PVE 6.1.

Described here. Does anybody have a clue?
 
Yes, for me too. I installed Proxmox 6.3 and tried a single ZFS NVMe drive (970 Evo Plus), and the results were much much slower than on an LVM SSD drive (860 Evo). Why is that? How can we fix performance with ZFS?
 
Yes, for me too. I installed Proxmox 6.3 and tried a single ZFS NVMe drive (970 Evo Plus), and the results were much much slower than on an LVM SSD drive (860 Evo). Why is that? How can we fix performance with ZFS?

Hi,

Can you define/explain with some numbers what you get as "much much slower" (ZFS versus LVM)?
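For example, something along these lines run against both the ZFS zvol and the LVM volume would give directly comparable 4k random-read numbers (the device path is just an example, point it at your actual disk; random *read* on a raw device is safe, but a write test against a raw device would destroy its data):

```
# 4k random read, 60 s, directly against the block device
fio --name=zfs-randread --filename=/dev/zvol/rpool/data/vm-100-disk-0 \
    --rw=randread --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=32 --numjobs=1 --runtime=60 --time_based --group_reporting
```

Run the same command against the LVM logical volume and post the IOPS from both.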

Good luck / Bafta !