Results are worse than expected; IOPS and bandwidth only increase because more VDEVs are involved..
root@pve01:~# fio --ioengine=psync --filename=/dev/zvol/tank1/speedtest --size=9G --time_based --name=fio --group_reporting --runtime=600 --direct=1 --sync=1 --iodepth=1 --rw=write --bs=4k --numjobs=32...
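With --bs=4k, --sync=1 and --iodepth=1 per job, throughput is simply IOPS times block size, so the two numbers always move together. A quick sanity check (the IOPS figure below is hypothetical, substitute whatever fio reports):

```shell
# Hypothetical IOPS value; replace with the figure from fio's summary line.
iops=50000
bs=4096                          # bytes, from --bs=4k
bw_mib=$((iops * bs / 1048576))  # IOPS * block size -> MiB/s
echo "${bw_mib} MiB/s"
```

So 50000 IOPS at 4k works out to roughly 195 MiB/s; if the reported bandwidth deviates from this, fio is measuring something other than the sync 4k writes you asked for.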
Yes, I want to use the other slot for a fast NIC.
I striped the namespaces and mirrored the drives. I also use these striped VDEVs on TrueNAS. ZFS loves it!
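For anyone wanting to reproduce that layout: with two drives split into two namespaces each, the mirror-across-drives / stripe-across-namespaces topology would look roughly like this (device names are assumptions for illustration, not taken from the thread):

```shell
# Sketch only: each mirror pairs one namespace from each physical drive,
# and the two mirror VDEVs are striped by the pool.
zpool create tank1 \
  mirror /dev/nvme0n1 /dev/nvme1n1 \
  mirror /dev/nvme0n2 /dev/nvme1n2
```

The important detail is that each mirror spans both physical drives, so losing one drive only degrades the mirrors rather than destroying a stripe member.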
Yes, those results were done with 512 B. Please see my HW setup below.
Looks like ZFS doesn't benefit from more namespaces :(
root@pve01:~# nvme list
Node SN Model Namespace Usage Format FW Rev
----------------...
I did; I just formatted them back to 512 B to show that my second result is close to your 4k bandwidth benchmark.
I will format them back, test again, and also test them with more namespaces.
I did it with Storage Executive on CentOS, because Windows 10 is not supported for creating namespaces.
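For what it's worth, namespaces can also be created with plain nvme-cli on Linux, which avoids the vendor tool entirely. The device path, controller ID and sizes below are placeholders, not values from this setup:

```shell
# Sketch, assuming the controller is /dev/nvme0 with controller ID 0x1.
# --nsze/--ncap are in blocks (placeholder values); --flbas picks the
# LBA format index reported by `nvme id-ns`.
nvme create-ns /dev/nvme0 --nsze=0x1000000 --ncap=0x1000000 --flbas=0
nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=0x1
nvme ns-rescan /dev/nvme0
```

After the rescan the new namespace should show up in `nvme list` as /dev/nvme0n1.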
Hi,
here are my tests from a similar setup. My results don't differ if I change the LBA size to 4k.
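In case it helps anyone reproducing this: the supported LBA formats, and the index needed to switch between 512 B and 4k, can be checked and applied with nvme-cli. The format index 1 for 4k is drive-specific, so verify it against the id-ns output first:

```shell
# List supported LBA formats; the line marked "in use" is the current one.
nvme id-ns /dev/nvme0n1 -H | grep "LBA Format"
# WARNING: formatting destroys all data on the namespace.
# -l (--lbaf) selects the format index; 1 is only an example.
nvme format /dev/nvme0n1 -l 1
```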
Any ideas?
Many thanks!
Michael
root@pve01:~# nvme list
Node SN Model Namespace Usage Format...
Hi aaron,
I tried this because I've got exactly the same disks. The fio test on the ZVOL with the option --direct=1 stops with this error:
fio-3.12
Starting 32 processes
fio: Laying out IO file (1 file / 9216MiB)
fio: looks like your file system does not support direct=1/buffered=0
fio: looks like...