Hi there!
I have two PVE 7.0 hosts on ZFS: one with 12 x 4TB 7.2K SAS HDDs in ZFS RAID 10, the other with 4 x 4TB SATA SSDs in RAIDZ1, and they're coming out with near-identical IO performance, which is suspicious! Benchmarking with fio (caches and buffers disabled, mixed reads/writes) suggests the SSDs are seriously underperforming. They are enterprise SATA drives (Intel S4520). The performance makes some sense for the HDDs, but not for the SSDs!
On both:
Code:
zfs create rpool/fio
zfs set primarycache=none rpool/fio
fio --ioengine=sync --direct=1 --gtod_reduce=1 --name=test --filename=/rpool/fio/test --bs=4k --iodepth=1 --size=4G --readwrite=readwrite --rwmixread=50
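For anyone reproducing this, these are the other dataset properties I'd check first, since they commonly skew 4k fio results on ZFS (just a sketch against the rpool/fio dataset above; the recordsize/compression changes are things to test, not recommendations):

```shell
# Properties that often affect small-block fio numbers on a ZFS dataset
zfs get -o property,value primarycache,secondarycache,recordsize,compression,sync rpool/fio

# recordsize=128k (the default) with bs=4k forces read-modify-write on
# every write; recordsize only applies to files written after the change,
# so recreate the test file afterwards.
zfs set recordsize=4k rpool/fio
zfs set compression=off rpool/fio
```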
SSD results:
Code:
test: (groupid=0, jobs=1): err= 0: pid=3778230: Thu Nov 18 17:14:32 2021
read: IOPS=1469, BW=5878KiB/s (6019kB/s)(1468MiB/255757msec)
bw ( KiB/s): min= 2328, max=66600, per=100.00%, avg=5882.09, stdev=8835.21, samples=511
iops : min= 582, max=16650, avg=1470.51, stdev=2208.83, samples=511
write: IOPS=1467, BW=5868KiB/s (6009kB/s)(1466MiB/255757msec); 0 zone resets
bw ( KiB/s): min= 2184, max=66840, per=100.00%, avg=5872.33, stdev=8840.91, samples=511
iops : min= 546, max=16710, avg=1468.07, stdev=2210.23, samples=511
cpu : usr=0.83%, sys=13.86%, ctx=270574, majf=0, minf=53
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=375824,375217,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=5878KiB/s (6019kB/s), 5878KiB/s-5878KiB/s (6019kB/s-6019kB/s), io=1468MiB (1539MB), run=255757-255757msec
WRITE: bw=5868KiB/s (6009kB/s), 5868KiB/s-5868KiB/s (6009kB/s-6009kB/s), io=1466MiB (1537MB), run=255757-255757msec
HDD results:
Code:
test: (groupid=0, jobs=1): err= 0: pid=3762085: Thu Nov 18 17:07:35 2021
read: IOPS=1449, BW=5797KiB/s (5936kB/s)(363MiB/64101msec)
bw ( KiB/s): min= 4040, max= 9960, per=100.00%, avg=5800.88, stdev=1603.29, samples=128
iops : min= 1010, max= 2490, avg=1450.22, stdev=400.82, samples=128
write: IOPS=483, BW=1933KiB/s (1980kB/s)(121MiB/64101msec); 0 zone resets
bw ( KiB/s): min= 1328, max= 3248, per=100.00%, avg=1934.87, stdev=536.80, samples=128
iops : min= 332, max= 812, avg=483.72, stdev=134.20, samples=128
cpu : usr=0.63%, sys=16.79%, ctx=156895, majf=0, minf=7
IO depths : 1=103.1%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=92896,30979,0,3815 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=5797KiB/s (5936kB/s), 5797KiB/s-5797KiB/s (5936kB/s-5936kB/s), io=363MiB (381MB), run=64101-64101msec
WRITE: bw=1933KiB/s (1980kB/s), 1933KiB/s-1933KiB/s (1980kB/s-1980kB/s), io=121MiB (127MB), run=64101-64101msec
I believe the important numbers are:
SSD:
read: IOPS=1469
write: IOPS=1467
HDD:
read: IOPS=1449
write: IOPS=483
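For context on why these numbers look wrong to me, here's my back-of-envelope using the usual rules of thumb: a RAIDZ vdev delivers roughly one disk's worth of small-block IOPS (every disk participates in each stripe), while a striped-mirror pool scales reads with the disk count and writes with the number of mirror pairs. The per-disk baselines below are assumptions, not measurements:

```python
# Rule-of-thumb small-block IOPS expectations (assumed baselines, not measured)
HDD_IOPS = 200       # assumed 4k IOPS for one 7.2K SAS HDD
SSD_IOPS = 20_000    # assumed 4k IOPS for one enterprise SATA SSD

# 12 HDDs in ZFS RAID 10 = 6 two-way mirror vdevs:
# reads can be served by any disk, writes hit each mirror pair once.
hdd_read  = 12 * HDD_IOPS   # ~2400
hdd_write = 6 * HDD_IOPS    # ~1200

# 4 SSDs in a single RAIDZ1 vdev: roughly one disk's worth of IOPS.
ssd_read  = 1 * SSD_IOPS    # ~20000
ssd_write = 1 * SSD_IOPS    # ~20000

print(f"HDD pool expected: ~{hdd_read} read / ~{hdd_write} write IOPS")
print(f"SSD pool expected: ~{ssd_read} read / ~{ssd_write} write IOPS")
```

On those assumptions the HDD numbers above are in the right ballpark, but the SSD pool is an order of magnitude short.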
E.g. I get ~10k IOPS on both read and write on my laptop with the same fio test!
Any advice appreciated.
Simon
PS I've just seen:
https://forum.proxmox.com/threads/bad-zfs-performance-with-sas3416-hba.96260/
which says to add log and cache devices... is the above really expected without separate SLOG and L2ARC devices?!
PPS I added log and cache with no change!
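For completeness, this is how I checked that the log and cache devices actually attached, and watched per-vdev activity while the fio job ran (sketch; device names will vary). Though if I understand right, with primarycache=none the L2ARC never gets populated anyway, since it's fed from the ARC:

```shell
# Confirm the log and cache vdevs appear under the pool
zpool status rpool

# Per-vdev bandwidth/IOPS, refreshed every second while fio runs
zpool iostat -v rpool 1
```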