It could be a few things.
Here are FIO benchmarks against a 12-disk SSD backplane in ZFS RAID 10 (striped mirrors): a 70%/30% read/write split with 16 concurrent jobs, each holding an IO depth of 16.
Code:
fio --filename=test --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=posixaio --bsrange=4k-128k --rwmixread=70 --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=test --size=8G
Results:
Code:
Run status group 0 (all jobs):
   READ: io=61893MB, aggrb=1031.4MB/s, minb=1031.4MB/s, maxb=1031.4MB/s, mint=60012msec, maxt=60012msec
  WRITE: io=26485MB, aggrb=451912KB/s, minb=451912KB/s, maxb=451912KB/s, mint=60012msec, maxt=60012msec
Summary: roughly 1 GB/s read and 452 MB/s write under heavy random IO load.
Well, that does not seem all that fast to me for 12 x SSD, but it depends on which drives you are using and what IO delay you are getting. Please share those details so we can put the results into context.
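If you want the numbers to go with that, something like the commands below will capture them. This is only a sketch, assuming smartmontools and sysstat are installed and using placeholder device and pool names (/dev/sda, tank), so adjust them to your setup:
Code:
# Drive model/firmware for one of the SSDs (repeat per device, names are examples)
smartctl -i /dev/sda

# Pool layout, to confirm the RAID 10 topology
zpool status tank

# CPU %iowait plus per-device latency/utilisation, sampled every 5s while fio runs
iostat -x 5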
Here is the same test as yours, run from inside a VM; the host pool is 10 x 7200 RPM disks plus a SLOG on an Intel DC S3500. I attached a clean disk to the VM and created ext4 on it. The results do not look good to me compared to software MDADM RAID with LVM on top; IO wait in particular is much higher than with MDADM (from memory).
Code:
Run status group 0 (all jobs):
READ: bw=319MiB/s (335MB/s), 319MiB/s-319MiB/s (335MB/s-335MB/s), io=18.8GiB (20.1GB), run=60108-60108msec
WRITE: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=8229MiB (8628MB), run=60108-60108msec
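A side note: to see how much of this actually hits the SLOG during the run, you can watch per-vdev activity alongside the benchmark. A minimal sketch, assuming the pool is named tank (substitute your own pool name):
Code:
# Per-vdev bandwidth/IOPS every 5 seconds; the log device (the S3500) is listed separately
zpool iostat -v tank 5
Keep in mind the SLOG only absorbs synchronous writes, so the buffered posixaio run above may barely touch it.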
Also, here is some more real-world testing with sync/flush, done by adding direct=1 to the fio run:
Code:
READ: bw=225MiB/s (236MB/s), 225MiB/s-225MiB/s (236MB/s-236MB/s), io=13.2GiB (14.2GB), run=60080-60080msec
WRITE: bw=96.5MiB/s (101MB/s), 96.5MiB/s-96.5MiB/s (101MB/s-101MB/s), io=5796MiB (6077MB), run=60080-60080msec
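For reference, the command here was presumably just your original one with direct=1 appended, i.e. something along these lines (same filename/size as above):
Code:
fio --filename=test --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=posixaio --bsrange=4k-128k --rwmixread=70 --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=test --size=8G --direct=1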
What do you get when you add direct=1?