I ran tests with the FIO utility to check performance (read/write speeds and IOPS) and got extremely poor write results, especially for random read/write workloads.
The testing methodology was as follows.
On the host system with Proxmox VE installed, I measured the read/write speed of the disk subsystem using the FIO utility.
I performed the measurements with the same block sizes and queue depths that the manufacturer specifies for these disks, namely Random Read (4KB, QD32), (4KB, QD1) and Random Write (4KB, QD32), (4KB, QD1).
My test options:
Code:
fio --name TEST --eta-newline=15s --filename=temp.file --rw=<write|read|randwrite|randread> --size=20g --io_size=10g --blocksize=<4K|1M> --ioengine=libaio --fsync=1 --iodepth=<1|32> --direct=1 --numjobs=<1|32> --runtime=300 --group_reporting
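For completeness, the four spec-matched 4K random combinations can be run back to back with a small wrapper script. This is just a sketch; the target path and job names are placeholders, not my actual setup:
Code:
#!/bin/bash
# Run the four spec-matched 4K random tests (QD1/QD32, read/write) in sequence.
# TARGET is a placeholder; point it at a file on the pool under test.
TARGET=temp.file

for rw in randread randwrite; do
    for qd in 1 32; do
        fio --name="${rw}-qd${qd}" \
            --filename="$TARGET" \
            --rw="$rw" --blocksize=4k \
            --iodepth="$qd" --numjobs=1 \
            --size=20g --io_size=10g \
            --ioengine=libaio --direct=1 --fsync=1 \
            --runtime=300 --eta-newline=15s --group_reporting
    done
done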
Here are the details on the server:
Code:
1. Dell PowerEdge R620
2. Xeon(R) CPU E5-2680 v2 @ 2.80GHz (2 sockets)
3. 380 GB memory
4. RAID controller: PERC H310 Mini (Embedded), passthrough (non-RAID) mode
5. 2x SSD Samsung 870 EVO 250GB SATA-3, RAID1 (Linux mdadm) (Proxmox host)
6. 6x SSD Samsung 870 EVO 250GB SATA-3, RAID-Z2 (VMs hosted here)
7. OS: Debian 11, Linux 5.15.104-1-pve #1 SMP PVE 5.15.104-2; Proxmox 7.4-3
8. RAID controller PERC H310 Mini (Embedded) (8-lane, PCI Express 2.0 compliant)
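In case it helps diagnose, the negotiated SATA link speed of each member disk behind the H310 can be checked with smartctl (a sketch; /dev/sda is a placeholder, repeat for each disk):
Code:
# Device name is a placeholder; run once per member disk
smartctl -i /dev/sda | grep -i 'SATA Version'
# A healthy link reports 6.0 Gb/s both supported and current, e.g.:
# SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)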
Code:
Controller Properties PERC H310 Mini (Embedded)
-Patrol Read Mode - Auto
-Manual Patrol Mode Action - Stopped
-Patrol Read Unconfigured Areas - Enabled
-Check Consistency Mode - Normal
-Copyback Mode - On
-Load Balance Mode - Auto
-Check Consistency Rate(%) - 30%
-Rebuild Rate(%) - 30%
-BGI Rate(%) - 30%
-Reconstruct Rate(%) - 30%
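Since the VMs sit on the RAID-Z2 pool from item 6 above, its layout and sector alignment can be confirmed like this (a sketch; "tank" is a placeholder for the real pool name):
Code:
# Show vdev layout and health of the RAID-Z2 pool ('tank' is a placeholder)
zpool status tank
# ashift=12 (4K sectors) is the expected alignment for these SSDs;
# ashift=9 would penalize 4K random writes
zpool get ashift tank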
According to the disk specs, their Sequential Read/Write speed = ~560/530 MB/s respectively
Random Read/Write IOPS (4KB, QD32) = 98K/88K
Random Read/Write IOPS (4KB, QD1) = 13K/36K
Specification: SATA SSD 870 EVO 2TB
Specification: SSD 870 EVO 250GB
In some cases (tests such as random read/random write, 4KB, iodepth 32) the random read/write results gave me about 10 MB/s and roughly ~1000 IOPS, or even less.
Tests performed with FIO
1) I would like to know why I am getting such low write results, since the disks are brand new and I bought them to turn a profit.
2) What is the bottleneck in my setup with regard to disk-array bandwidth?
3) What would be the optimal RAID configuration for me? (I don't want to lose the ZFS advantages of incremental replication, snapshots and compression.)