Is raidz just really slow? Should I give up and use mirrors? Various places quote that a raidz vdev goes at the speed of the slowest device, but the underlying IOPS aren't maxing out to what the SSDs can do...
Thanks @Dunuin appreciate your attention. I'm not using a zvol (I think?)
Also, why do I not see a similar slowdown when benchmarking on HDDs, in terms of underlying IOPS?
I don't believe that's true: the ZFS recordsize parameter is just a maximum, check zpool iostat -r [pool] to see the distribution of block sizes actually issued. If you mean volblocksize (is that the minimum?), mine doesn't show a value, is that weird? But I guess it's 4k:
# zfs get volblocksize...
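For what it's worth, volblocksize only exists on zvols; a filesystem dataset reports "-" for it, which would explain the missing value. A quick way to check both (pool/dataset names here are placeholders, not from the original post):

```shell
# recordsize applies to filesystem datasets; it is a ceiling, not a fixed block size
zfs get recordsize rpool/data

# volblocksize only exists on zvols; on a filesystem dataset this shows "-"
zfs get volblocksize rpool/data

# distribution of request sizes ZFS actually issues to the vdevs
zpool iostat -r rpool
```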
Hi @aaron,
Thanks for your reply, primarycache=none on the zpool in question. I will be using containers, so I guess the filesystem dataset is relevant? The system is idle, the benchmark duration appears not to have an impact.
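For anyone following along, this is roughly how ARC caching gets disabled for a benchmark dataset (the dataset name is just an example, not from the original post):

```shell
# disable ARC caching of both data and metadata on the benchmark dataset
zfs set primarycache=none rpool/fio

# confirm the property took effect
zfs get primarycache rpool/fio
```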
My concern is the per device IOPS, it appears to make sense for...
Hi all, I wonder if I could hijack with related SSD performance benchmarking - are my results within expectations? I have 2 identical PVE 7.0-11 machines, the only difference being the HDD / SSD arrangement. The SSDs are enterprise SATA3 Intel S4520, the HDDs are 7.2K SAS. Full post here...
OK, so a potentially 'fairer' test of the SSDs:
64 x random read/write:
fio --ioengine=libaio --filename=/rpool/fio/testx --size=4G --time_based --name=fio --group_reporting --runtime=10 --direct=1 --sync=1 --iodepth=1 --rw=randrw --bs=4K --numjobs=64
Also, I've moved to zpool iostat for IO...
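In case it helps anyone else, the per-device view I mean is something like this (pool name as an example):

```shell
# per-vdev and per-disk bandwidth/IOPS, refreshed every second
zpool iostat -v rpool 1

# per-device latency histograms, useful for spotting one slow disk
zpool iostat -w rpool
```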
OK, that makes sense, thank you Dunuin. However, I'm sure something is wrong. Compare the following:
Running 4 tests, 2 on each machine, the only difference being the target is either the raw device or a file on the non-cached ZFS pool, /dev/sdg or /rpool/fio/testx.
fio --ioengine=sync...
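The full flags are truncated above; assuming the same 4K sync, direct-IO pattern as the earlier libaio command (this is a sketch, not the exact commands run), the pair of tests would look something like:

```shell
# raw device test - flags assumed, and note this WRITES OVER /dev/sdg
fio --ioengine=sync --filename=/dev/sdg --direct=1 --sync=1 \
    --rw=randwrite --bs=4K --iodepth=1 --numjobs=1 \
    --runtime=10 --time_based --name=raw --group_reporting

# same pattern against a file on the non-cached ZFS pool
fio --ioengine=sync --filename=/rpool/fio/testx --size=4G --direct=1 --sync=1 \
    --rw=randwrite --bs=4K --iodepth=1 --numjobs=1 \
    --runtime=10 --time_based --name=zfs --group_reporting
```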
Hi Dunuin, thanks for your reply! The FIO command is in the OP. I'd expect the IOPS of one SSD to be in the order of tens of thousands for sequential reads/writes, as it is on my laptop with an SSD, but here it's nowhere near that...
Hi there!
I have two PVE 7.0 on ZFS, one with 12 x 4TB 7.2K SAS HDD in ZFS RAID 10, the other with 4 x 4TB SATA SSD in Z1 and they're coming out with near identical IO performance, which is suspicious! From benchmarking with FIO with caches and buffers disabled, on sequential read / writes, the...