New Proxmox box w/ ZFS - R730xd w/ PERC H730 Mini in “HBA Mode” - BIG NO NO?

lol i ordered another hba330 just to rule out a bad one
However, I am not willing to pay the 100 EUR privately for the HBA. There are cheap offers from abroad, but I would have to order through my company. At least I got the H330 cheaply, which is quite similar.
I haven't had issues running the onboard
You mean the S110? Of course I could also test it, but I wouldn't want to use it voluntarily.
 
In the meantime I also tested the Samsung PM863 in conjunction with the H730. The tests with the H330 are still to come. In addition, I measured the latency of all disks again with ioping.

ioping RAW Device:
ioping -C -D -G -c 100 -WWW

ioping ZFS RAID 10:
ioping -C -D -G -c 100
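The targets are omitted above; spelled out with placeholder targets (the device node and mountpoint below are assumptions), and with the flags annotated, the two invocations look roughly like this:

Code:
# RAW device: -C cached I/O, -D direct I/O, -G alternate read/write requests,
# -c 100 stop after 100 requests, -WWW permit (destructive) writes to a block device
ioping -C -D -G -c 100 -WWW /dev/sdX

# ZFS RAID 10: same flags, pointed at the pool's mountpoint
ioping -C -D -G -c 100 /poolname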

Code:
# /sbin/sgdisk -n1 -t1:8300 /dev/sdb
Creating new GPT entries in memory.
The operation has completed successfully.
# /sbin/mkfs -t ext4 /dev/sdb1
mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 58607505 4k blocks and 14655488 inodes
Filesystem UUID: 87d713e7-da83-461c-9112-ae5cd19047e7
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables:    0/1789         done                          
Writing inode tables:    0/1789         done                          
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information:    0/1789         done

# /sbin/blkid /dev/sdb1 -o export
Created symlink /etc/systemd/system/multi-user.target.wants/mnt-pve-ssd\x2dpm863.mount -> /etc/systemd/system/mnt-pve-ssd\x2dpm863.mount.
TASK OK
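As an aside, the symlink line above is Proxmox creating the systemd mount unit when the directory storage is added via the GUI; a rough CLI equivalent (the storage name here is chosen only for illustration) would be:

Code:
# Hypothetical: register the ext4 mount as a Proxmox directory storage
pvesm add dir ssd-pm863-dir --path /mnt/pve/ssd-pm863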

Code:
# /sbin/zpool create -o ashift=12 ssd-pm863 mirror /dev/disk/by-id/ata-SAMSUNG_MZ7LM240HCGR-00003_S35PNX0H703696 /dev/disk/by-id/ata-SAMSUNG_MZ7LM240HCGR-00003_S35PNX0H703639 mirror /dev/disk/by-id/ata-SAMSUNG_MZ7LM240HCGR-00003_S35PNX0H703706 /dev/disk/by-id/ata-SAMSUNG_MZ7LM240HCGR-00003_S35PNX0H703690
# /sbin/zfs set compression=on ssd-pm863
# systemctl enable zfs-import@ssd\x2dpm863.service
Created symlink /etc/systemd/system/zfs-import.target.wants/zfs-import@ssd\x2dpm863.service -> /lib/systemd/system/zfs-import@.service.
TASK OK
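To sanity-check the resulting layout and properties, something like the following can be run (nothing here is specific beyond the pool name used above):

Code:
# Show the two mirror vdevs and pool health
zpool status ssd-pm863
# Confirm compression and the effective ashift
zfs get compression ssd-pm863
zpool get ashift ssd-pm863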

Random Write (ext4 directory, single PM863):
Code:
benchmark: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.33
Starting 1 process
benchmark: Laying out IO file (1 file / 4096MiB)

benchmark: (groupid=0, jobs=1): err= 0: pid=9168: Wed Dec 27 12:37:02 2023
  write: IOPS=24.2k, BW=94.4MiB/s (99.0MB/s)(4096MiB/43402msec); 0 zone resets
   bw (  KiB/s): min=50144, max=148384, per=99.70%, avg=96353.30, stdev=21176.04, samples=86
   iops        : min=12536, max=37096, avg=24088.33, stdev=5294.01, samples=86
  cpu          : usr=10.90%, sys=58.55%, ctx=1106406, majf=0, minf=145
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=0,1048576,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=94.4MiB/s (99.0MB/s), 94.4MiB/s-94.4MiB/s (99.0MB/s-99.0MB/s), io=4096MiB (4295MB), run=43402-43402msec

Disk stats (read/write):
  sdb: ios=0/1048541, merge=0/17766, ticks=0/86131, in_queue=86136, util=99.85%
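For reference, the fio command line itself isn't shown in the output above; an invocation consistent with the reported job parameters (4 KiB random writes, libaio, iodepth 64, one job, a 4096 MiB test file created by the `benchmark` job) might look like the sketch below. The target directory and `--direct=1` are assumptions on my part:

Code:
fio --name=benchmark --directory=/mnt/pve/ssd-pm863 \
    --rw=randwrite --bs=4k --ioengine=libaio --iodepth=64 \
    --numjobs=1 --size=4G --direct=1

The read test would be the same job with --rw=randread.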
Random Read (ext4 directory, single PM863):
Code:
benchmark: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.33
Starting 1 process

benchmark: (groupid=0, jobs=1): err= 0: pid=9516: Wed Dec 27 12:37:32 2023
  read: IOPS=77.2k, BW=301MiB/s (316MB/s)(4096MiB/13589msec)
   bw (  KiB/s): min=202496, max=324352, per=100.00%, avg=308735.41, stdev=29419.67, samples=27
   iops        : min=50624, max=81088, avg=77183.85, stdev=7354.92, samples=27
  cpu          : usr=16.65%, sys=56.88%, ctx=629600, majf=0, minf=290
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=1048576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=301MiB/s (316MB/s), 301MiB/s-301MiB/s (316MB/s-316MB/s), io=4096MiB (4295MB), run=13589-13589msec

Disk stats (read/write):
  sdb: ios=1030588/206, merge=238/2085, ticks=498909/567, in_queue=499477, util=99.33%

Random Write (ZFS RAID 10, pool ssd-pm863):
Code:
benchmark: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.33
Starting 1 process
benchmark: Laying out IO file (1 file / 4096MiB)

benchmark: (groupid=0, jobs=1): err= 0: pid=10370: Wed Dec 27 12:41:04 2023
  write: IOPS=8783, BW=34.3MiB/s (36.0MB/s)(4096MiB/119380msec); 0 zone resets
   bw (  KiB/s): min=22672, max=80040, per=99.25%, avg=34869.45, stdev=7285.41, samples=238
   iops        : min= 5668, max=20010, avg=8717.34, stdev=1821.35, samples=238
  cpu          : usr=4.78%, sys=76.37%, ctx=139921, majf=0, minf=762
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=0,1048576,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: bw=34.3MiB/s (36.0MB/s), 34.3MiB/s-34.3MiB/s (36.0MB/s-36.0MB/s), io=4096MiB (4295MB), run=119380-119380msec
Random Read (ZFS RAID 10, pool ssd-pm863):
Code:
benchmark: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.33
Starting 1 process

benchmark: (groupid=0, jobs=1): err= 0: pid=11201: Wed Dec 27 12:45:17 2023
  read: IOPS=17.7k, BW=69.2MiB/s (72.5MB/s)(4096MiB/59231msec)
   bw (  KiB/s): min=42264, max=133936, per=99.36%, avg=70357.83, stdev=7973.67, samples=118
   iops        : min=10566, max=33484, avg=17589.44, stdev=1993.42, samples=118
  cpu          : usr=4.38%, sys=95.61%, ctx=158, majf=0, minf=86
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=1048576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=69.2MiB/s (72.5MB/s), 69.2MiB/s-69.2MiB/s (72.5MB/s-72.5MB/s), io=4096MiB (4295MB), run=59231-59231msec
 
RAW (H730, Toshiba AL14SXB60ENY):
Code:
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=1 time=163.6 us (warmup)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=2 time=7.73 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=3 time=121.4 us
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=4 time=3.72 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=5 time=119.7 us
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=6 time=7.57 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=7 time=123.2 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=8 time=7.58 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=9 time=141.0 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=10 time=4.31 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=11 time=124.0 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=12 time=7.94 ms (slow)
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=13 time=123.4 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=14 time=7.02 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=15 time=126.4 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=16 time=7.52 ms (slow)
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=17 time=124.1 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=18 time=7.75 ms (slow)
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=19 time=132.2 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=20 time=5.13 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=21 time=126.4 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=22 time=4.67 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=23 time=122.7 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=24 time=8.29 ms (slow)
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=25 time=125.6 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=26 time=5.01 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=27 time=126.0 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=28 time=6.65 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=29 time=145.1 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=30 time=7.47 ms (slow)
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=31 time=126.9 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=32 time=3.57 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=33 time=124.7 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=34 time=4.75 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=35 time=120.3 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=36 time=6.70 ms (slow)
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=37 time=126.1 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=38 time=8.66 ms (slow)
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=39 time=132.0 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=40 time=6.51 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=41 time=123.8 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=42 time=5.50 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=43 time=124.4 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=44 time=7.42 ms (slow)
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=45 time=122.0 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=46 time=5.63 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=47 time=120.4 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=48 time=7.16 ms (slow)
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=49 time=130.9 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=50 time=8.82 ms (slow)
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=51 time=126.0 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=52 time=6.49 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=53 time=147.5 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=54 time=5.38 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=55 time=119.1 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=56 time=4.38 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=57 time=123.4 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=58 time=5.71 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=59 time=128.4 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=60 time=7.00 ms (slow)
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=61 time=124.4 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=62 time=6.23 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=63 time=137.2 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=64 time=4.34 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=65 time=120.8 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=66 time=6.58 ms (slow)
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=67 time=135.6 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=68 time=4.13 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=69 time=126.9 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=70 time=6.91 ms (slow)
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=71 time=124.4 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=72 time=5.75 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=73 time=123.5 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=74 time=6.16 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=75 time=119.8 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=76 time=6.78 ms (slow)
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=77 time=120.8 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=78 time=9.30 ms (slow)
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=79 time=121.2 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=80 time=5.86 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=81 time=122.8 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=82 time=7.11 ms (slow)
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=83 time=122.1 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=84 time=9.61 ms (slow)
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=85 time=122.8 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=86 time=8.16 ms (slow)
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=87 time=121.8 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=88 time=6.91 ms (slow)
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=89 time=128.2 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=90 time=3.99 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=91 time=122.7 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=92 time=8.16 ms (slow)
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=93 time=120.0 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=94 time=8.00 ms (slow)
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=95 time=118.4 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=96 time=6.23 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=97 time=120.4 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=98 time=4.33 ms
4 KiB >>> /dev/sdl (block device 558.9 GiB): request=99 time=138.9 us (fast)
4 KiB <<< /dev/sdl (block device 558.9 GiB): request=100 time=5.14 ms

--- /dev/sdl (block device 558.9 GiB) ioping statistics ---
99 requests completed in 327.9 ms, 396 KiB, 301 iops, 1.18 MiB/s
generated 100 requests in 1.65 min, 400 KiB, 1 iops, 4.04 KiB/s
min/avg/max/mdev = 118.4 us / 3.31 ms / 9.61 ms / 3.33 ms
ZFS RAID 10 (H730, Toshiba AL14SXB60ENY):
Code:
4 KiB >>> /sas (zfs sas 1.05 TiB): request=1 time=18.9 us (warmup)
4 KiB <<< /sas (zfs sas 1.05 TiB): request=2 time=33.5 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=3 time=33.8 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=4 time=32.0 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=5 time=58.0 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=6 time=56.9 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=7 time=52.0 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=8 time=37.8 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=9 time=42.7 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=10 time=37.0 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=11 time=58.9 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=12 time=35.7 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=13 time=43.3 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=14 time=38.4 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=15 time=34.4 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=16 time=69.7 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=17 time=71.1 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=18 time=39.7 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=19 time=45.3 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=20 time=32.2 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=21 time=58.8 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=22 time=35.6 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=23 time=44.3 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=24 time=36.3 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=25 time=42.4 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=26 time=60.7 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=27 time=78.8 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=28 time=40.6 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=29 time=56.1 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=30 time=42.4 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=31 time=65.6 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=32 time=36.6 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=33 time=44.6 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=34 time=32.7 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=35 time=44.9 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=36 time=86.7 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=37 time=80.6 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=38 time=42.1 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=39 time=44.7 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=40 time=33.3 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=41 time=62.3 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=42 time=36.9 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=43 time=44.8 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=44 time=41.5 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=45 time=47.5 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=46 time=54.7 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=47 time=86.8 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=48 time=44.3 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=49 time=47.8 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=50 time=36.3 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=51 time=60.7 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=52 time=37.4 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=53 time=43.3 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=54 time=35.0 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=55 time=42.3 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=56 time=34.9 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=57 time=84.4 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=58 time=41.7 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=59 time=46.2 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=60 time=36.9 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=61 time=44.6 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=62 time=52.4 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=63 time=45.6 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=64 time=35.9 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=65 time=41.8 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=66 time=60.9 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=67 time=84.9 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=68 time=42.2 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=69 time=36.0 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=70 time=36.9 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=71 time=47.8 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=72 time=52.2 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=73 time=43.9 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=74 time=38.1 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=75 time=34.8 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=76 time=46.9 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=77 time=80.9 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=78 time=38.2 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=79 time=44.5 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=80 time=37.7 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=81 time=42.6 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=82 time=50.8 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=83 time=44.0 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=84 time=37.4 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=85 time=43.8 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=86 time=45.6 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=87 time=81.6 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=88 time=43.4 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=89 time=78.6 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=90 time=42.3 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=91 time=46.8 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=92 time=49.2 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=93 time=47.3 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=94 time=32.3 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=95 time=45.6 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=96 time=73.1 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=97 time=67.6 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=98 time=56.5 us
4 KiB >>> /sas (zfs sas 1.05 TiB): request=99 time=46.2 us
4 KiB <<< /sas (zfs sas 1.05 TiB): request=100 time=34.4 us

--- /sas (zfs sas 1.05 TiB) ioping statistics ---
99 requests completed in 4.78 ms, 396 KiB, 20.7 k iops, 80.9 MiB/s
generated 100 requests in 1.65 min, 400 KiB, 1 iops, 4.04 KiB/s
min/avg/max/mdev = 32.0 us / 48.3 us / 86.8 us / 14.2 us
 
RAW (H730, Intel SSDSC2BB120G7R):
Code:
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=1 time=125.9 us (warmup)
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=2 time=248.8 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=3 time=123.9 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=4 time=233.8 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=5 time=117.5 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=6 time=194.8 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=7 time=113.4 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=8 time=226.6 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=9 time=108.3 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=10 time=192.9 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=11 time=109.9 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=12 time=225.6 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=13 time=113.0 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=14 time=199.9 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=15 time=107.1 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=16 time=223.6 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=17 time=108.6 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=18 time=225.5 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=19 time=101.6 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=20 time=197.1 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=21 time=102.4 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=22 time=223.1 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=23 time=117.0 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=24 time=198.6 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=25 time=107.2 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=26 time=224.3 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=27 time=96.9 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=28 time=199.0 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=29 time=100.4 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=30 time=221.6 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=31 time=99.7 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=32 time=225.2 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=33 time=113.1 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=34 time=197.0 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=35 time=105.8 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=36 time=226.5 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=37 time=101.2 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=38 time=195.6 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=39 time=98.9 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=40 time=196.8 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=41 time=107.1 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=42 time=230.9 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=43 time=119.9 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=44 time=198.2 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=45 time=111.4 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=46 time=199.1 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=47 time=107.2 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=48 time=227.3 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=49 time=109.6 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=50 time=197.2 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=51 time=100.1 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=52 time=203.0 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=53 time=109.7 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=54 time=193.8 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=55 time=105.7 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=56 time=200.2 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=57 time=102.7 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=58 time=197.3 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=59 time=101.0 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=60 time=224.0 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=61 time=99.5 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=62 time=196.2 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=63 time=111.2 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=64 time=228.2 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=65 time=104.2 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=66 time=227.8 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=67 time=102.6 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=68 time=201.2 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=69 time=102.0 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=70 time=197.8 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=71 time=103.8 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=72 time=229.4 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=73 time=111.7 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=74 time=226.8 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=75 time=103.8 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=76 time=198.8 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=77 time=103.9 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=78 time=227.2 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=79 time=103.9 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=80 time=224.2 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=81 time=104.3 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=82 time=201.5 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=83 time=125.2 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=84 time=200.2 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=85 time=115.8 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=86 time=199.2 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=87 time=106.8 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=88 time=226.6 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=89 time=109.3 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=90 time=196.4 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=91 time=102.0 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=92 time=200.1 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=93 time=109.9 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=94 time=227.5 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=95 time=106.2 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=96 time=227.9 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=97 time=102.8 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=98 time=199.5 us
4 KiB >>> /dev/sdn (block device 111.8 GiB): request=99 time=102.5 us
4 KiB <<< /dev/sdn (block device 111.8 GiB): request=100 time=196.4 us

--- /dev/sdn (block device 111.8 GiB) ioping statistics ---
99 requests completed in 15.8 ms, 396 KiB, 6.25 k iops, 24.4 MiB/s
generated 100 requests in 1.65 min, 400 KiB, 1 iops, 4.04 KiB/s
min/avg/max/mdev = 96.9 us / 159.9 us / 248.8 us / 53.5 us
ZFS RAID 10 (H730, Intel SSDSC2BB120G7R):
Code:
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=1 time=20.1 us (warmup)
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=2 time=33.9 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=3 time=33.4 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=4 time=32.1 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=5 time=52.7 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=6 time=56.6 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=7 time=46.4 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=8 time=34.2 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=9 time=44.9 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=10 time=37.3 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=11 time=59.0 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=12 time=36.4 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=13 time=44.7 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=14 time=55.9 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=15 time=52.2 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=16 time=57.4 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=17 time=46.7 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=18 time=32.6 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=19 time=42.9 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=20 time=33.0 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=21 time=60.1 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=22 time=35.4 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=23 time=42.9 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=24 time=64.2 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=25 time=56.6 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=26 time=56.6 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=27 time=48.4 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=28 time=34.2 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=29 time=43.6 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=30 time=36.4 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=31 time=58.8 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=32 time=34.8 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=33 time=43.0 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=34 time=44.7 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=35 time=66.5 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=36 time=58.0 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=37 time=46.0 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=38 time=54.5 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=39 time=51.3 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=40 time=35.1 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=41 time=58.8 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=42 time=37.6 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=43 time=45.6 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=44 time=59.2 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=45 time=54.7 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=46 time=54.1 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=47 time=46.3 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=48 time=36.9 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=49 time=43.0 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=50 time=36.0 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=51 time=44.3 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=52 time=52.3 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=53 time=43.4 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=54 time=61.7 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=55 time=60.0 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=56 time=38.6 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=57 time=61.6 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=58 time=36.6 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=59 time=44.4 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=60 time=33.8 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=61 time=34.4 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=62 time=51.4 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=63 time=46.5 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=64 time=36.5 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=65 time=64.6 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=66 time=40.3 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=67 time=64.5 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=68 time=36.8 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=69 time=34.8 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=70 time=35.0 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=71 time=46.0 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=72 time=51.1 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=73 time=43.4 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=74 time=62.8 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=75 time=61.7 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=76 time=39.1 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=77 time=62.2 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=78 time=36.3 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=79 time=42.3 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=80 time=35.1 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=81 time=45.7 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=82 time=52.8 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=83 time=46.3 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=84 time=64.5 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=85 time=61.8 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=86 time=37.9 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=87 time=62.1 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=88 time=36.1 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=89 time=45.0 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=90 time=33.3 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=91 time=35.0 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=92 time=36.9 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=93 time=59.5 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=94 time=43.6 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=95 time=66.1 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=96 time=38.7 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=97 time=35.3 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=98 time=72.5 us
4 KiB >>> /ssd (zfs ssd 215.1 GiB): request=99 time=48.9 us
4 KiB <<< /ssd (zfs ssd 215.1 GiB): request=100 time=36.2 us

--- /ssd (zfs ssd 215.1 GiB) ioping statistics ---
99 requests completed in 4.64 ms, 396 KiB, 21.4 k iops, 83.4 MiB/s
generated 100 requests in 1.65 min, 400 KiB, 1 iops, 4.04 KiB/s
min/avg/max/mdev = 32.1 us / 46.8 us / 72.5 us / 10.6 us
 
RAW (H730, Samsung PM863):
Code:
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=1 time=141.8 us (warmup)
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=2 time=148.9 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=3 time=132.4 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=4 time=134.0 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=5 time=129.4 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=6 time=132.1 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=7 time=126.2 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=8 time=136.7 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=9 time=126.8 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=10 time=136.1 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=11 time=139.7 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=12 time=139.0 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=13 time=129.2 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=14 time=131.3 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=15 time=129.4 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=16 time=135.2 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=17 time=128.9 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=18 time=132.6 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=19 time=132.1 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=20 time=136.5 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=21 time=131.9 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=22 time=139.6 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=23 time=131.5 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=24 time=129.5 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=25 time=125.6 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=26 time=132.8 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=27 time=126.4 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=28 time=132.3 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=29 time=129.2 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=30 time=130.7 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=31 time=129.5 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=32 time=152.9 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=33 time=127.1 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=34 time=130.7 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=35 time=126.9 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=36 time=215.2 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=37 time=127.5 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=38 time=130.3 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=39 time=130.9 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=40 time=253.0 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=41 time=133.1 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=42 time=151.8 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=43 time=130.7 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=44 time=132.3 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=45 time=126.3 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=46 time=131.8 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=47 time=129.1 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=48 time=254.4 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=49 time=127.9 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=50 time=134.1 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=51 time=129.2 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=52 time=143.7 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=53 time=127.5 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=54 time=129.1 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=55 time=130.6 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=56 time=129.9 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=57 time=127.2 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=58 time=133.3 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=59 time=129.3 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=60 time=129.6 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=61 time=126.6 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=62 time=221.1 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=63 time=128.4 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=64 time=132.3 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=65 time=127.0 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=66 time=128.6 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=67 time=130.4 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=68 time=132.1 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=69 time=126.2 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=70 time=136.0 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=71 time=135.1 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=72 time=168.3 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=73 time=133.9 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=74 time=132.0 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=75 time=129.4 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=76 time=128.8 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=77 time=129.3 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=78 time=133.0 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=79 time=127.0 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=80 time=129.2 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=81 time=126.4 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=82 time=149.7 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=83 time=132.6 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=84 time=132.1 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=85 time=127.7 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=86 time=130.5 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=87 time=127.4 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=88 time=213.8 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=89 time=126.6 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=90 time=131.4 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=91 time=135.4 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=92 time=140.6 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=93 time=130.8 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=94 time=212.5 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=95 time=128.7 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=96 time=133.3 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=97 time=126.6 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=98 time=130.0 us
4 KiB >>> /dev/sdb (block device 223.6 GiB): request=99 time=126.3 us
4 KiB <<< /dev/sdb (block device 223.6 GiB): request=100 time=132.7 us

--- /dev/sdb (block device 223.6 GiB) ioping statistics ---
99 requests completed in 13.7 ms, 396 KiB, 7.25 k iops, 28.3 MiB/s
generated 100 requests in 1.65 min, 400 KiB, 1 iops, 4.04 KiB/s
min/avg/max/mdev = 125.6 us / 138.0 us / 254.4 us / 24.2 us
ZFS RAID 10 (H730, Samsung PM863):
Code:
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=1 time=19.0 us (warmup)
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=2 time=33.4 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=3 time=33.1 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=4 time=32.7 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=5 time=33.2 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=6 time=51.7 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=7 time=43.6 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=8 time=34.8 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=9 time=44.6 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=10 time=60.9 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=11 time=80.2 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=12 time=39.3 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=13 time=34.5 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=14 time=37.7 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=15 time=34.4 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=16 time=37.4 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=17 time=61.2 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=18 time=35.6 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=19 time=45.0 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=20 time=35.2 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=21 time=74.5 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=22 time=54.9 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=23 time=44.5 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=24 time=39.2 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=25 time=44.0 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=26 time=33.5 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=27 time=61.8 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=28 time=33.0 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=29 time=33.3 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=30 time=59.4 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=31 time=54.7 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=32 time=56.1 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=33 time=44.0 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=34 time=38.8 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=35 time=45.6 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=36 time=35.4 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=37 time=61.3 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=38 time=45.1 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=39 time=48.0 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=40 time=55.0 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=41 time=75.0 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=42 time=57.7 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=43 time=46.8 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=44 time=37.0 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=45 time=45.8 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=46 time=36.0 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=47 time=57.7 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=48 time=35.8 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=49 time=41.9 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=50 time=46.3 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=51 time=84.1 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=52 time=58.9 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=53 time=46.3 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=54 time=38.0 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=55 time=46.1 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=56 time=33.2 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=57 time=59.8 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=58 time=35.7 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=59 time=44.2 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=60 time=60.3 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=61 time=55.2 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=62 time=33.5 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=63 time=62.7 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=64 time=35.5 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=65 time=42.3 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=66 time=34.1 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=67 time=33.9 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=68 time=52.3 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=69 time=46.0 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=70 time=64.9 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=71 time=44.5 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=72 time=38.8 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=73 time=65.2 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=74 time=35.9 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=75 time=48.6 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=76 time=35.3 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=77 time=34.9 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=78 time=54.1 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=79 time=45.7 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=80 time=36.0 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=81 time=65.5 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=82 time=43.0 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=83 time=63.7 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=84 time=62.5 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=85 time=39.9 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=86 time=37.5 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=87 time=34.8 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=88 time=55.1 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=89 time=44.2 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=90 time=57.5 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=91 time=62.3 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=92 time=39.3 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=93 time=65.4 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=94 time=36.7 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=95 time=35.0 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=96 time=37.7 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=97 time=43.7 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=98 time=63.8 us
4 KiB >>> /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=99 time=48.3 us
4 KiB <<< /ssd-pm863 (zfs ssd-pm863 430.2 GiB): request=100 time=61.9 us

--- /ssd-pm863 (zfs ssd-pm863 430.2 GiB) ioping statistics ---
99 requests completed in 4.65 ms, 396 KiB, 21.3 k iops, 83.1 MiB/s
generated 100 requests in 1.65 min, 400 KiB, 1 iops, 4.04 KiB/s
min/avg/max/mdev = 32.7 us / 47.0 us / 84.1 us / 12.1 us
 
A few values are missing from the table, but I don't see them as critical. All the results are included in the appendix, so if you have any doubts you can check and analyze them yourself.

My conclusion so far is that it makes no difference whether it is an H730P in HBA mode or an H330. I would attribute the remaining deviations to measurement tolerance, especially since I did not repeat the runs several times and average the results.

If anyone wants another test or wants to know anything, please just post it here in the thread. If it doesn't take too much effort and time, I'd be happy to do it again.

Code:
Controller  Disk                  Type       IOPS Rd  IOPS Wr  BW Rd MiB/s  BW Wr MiB/s  min µs  avg µs  max µs  mdev µs
H330        Intel SSDSC2BB120G7R  Directory    46300    36400          181          142   100.9   182.6   297.9     74.2
H330        Intel SSDSC2BB120G7R  Raw              -        -            -            -    97.9   161.7   252.5     56.3
H330        Intel SSDSC2BB120G7R  ZFS          18100     3807         70.9         14.9      32    48.2    96.9     13.1
H330        Samsung PM863         Directory    77200    23900          302         93.5   128.6   163.6   264.9     35.9
H330        Samsung PM863         Raw              -        -            -            -     128   179.8   408.8     53.9
H330        Samsung PM863         ZFS          20800     5021         81.2         19.6    33.7    47.7    84.4     12.5
H330        Toshiba AL14SXB60ENY  Directory      993      718         3.88          2.8   131.9    5010   27500     5820
H330        Toshiba AL14SXB60ENY  Raw              -        -            -            -   113.2    3150   15100     3340
H330        Toshiba AL14SXB60ENY  ZFS          17600     6997         68.7         27.3    33.9      48    76.1     10.2
H730        Intel SSDSC2BB120G7R  Raw              -        -            -            -    96.9   159.9   248.8     53.5
H730        Intel SSDSC2BB120G7R  ZFS          17900     7375         69.9         28.8    32.1    46.8    72.5     10.6
H730        Intel SSDSC2BB120G7R  Directory    49600    38400          194          150       -       -       -        -
H730        Samsung PM863         Raw              -        -            -            -   125.6     138   254.4     24.2
H730        Samsung PM863         ZFS          17700     8783         69.2         34.3    32.7      47    84.1     12.1
H730        Samsung PM863         Directory    77200    24200          301         94.4       -       -       -        -
H730        Toshiba AL14SXB60ENY  Raw              -        -            -            -   118.4    3310    9610     3330
H730        Toshiba AL14SXB60ENY  ZFS          16100     6607         62.9         25.8      32    48.3    86.8     14.2
H730        Toshiba AL14SXB60ENY  Directory      988      858         3.86         3.35       -       -       -        -
 

Attachments

  • h330.zip (17.4 KB)
  • h730p.zip (14.5 KB)
Hello, I'm new to Proxmox as a whole, but I might be able to help here. I recently purchased a Dell PowerEdge R730 E5-2697 which came with a PERC H330, and alongside it I purchased a PERC H730. I'm still waiting for some hardware to arrive (including the H730), but I intend to build a multi-role server, including a NAS. Given that I want my NAS to use ZFS, I need one or the other to work well in HBA mode. Once I receive the H730, I can also benchmark the cards and see which works better.

I do have a question for @sb-jw: did you use the Dell firmware for your tests, or did you flash the H730P to HBA/IT mode? Based on the wording it seems you merely set it to HBA mode, which is what I intend to do. Apologies if my jargon is off; I'm a cybersecurity student with a homelab, not a sysadmin.
 
did you use the Dell firmware for your tests, or did you flash the H730P to HBA/IT mode? Based on the wording it seems you merely set it to HBA mode, which is what I intend to do.
You understood that correctly. I only used the original Dell firmware here.

The newer controllers (e.g. H740P, H745 and H745P MX) have an enhanced HBA mode; there you can, for example, create a RAID 1 for the OS via the controller while passing the other eight disks directly through.
 
Well, it turns out the H330 was swapped out for the H730 I paid for, so I won't be able to test the H330. Based on the research I've been doing these past few days, the main issues with using an H730 for Proxmox/ZFS are:
  1. HBA mode may not be a true HBA mode and may instead rely on some proprietary layer that doesn't behave as expected.
  2. The H730 has a 1 GB cache that could cause issues, either by being overwhelmed and hurting IOPS, or by trying to manage data that should be passed straight through.
I have seen conflicting reports on whether these are real issues, and to what degree. So, before I shoot myself in the foot and use ZFS with a controller that would make it difficult to recover my data, I intend to do some testing and help get to the bottom of this. These are the two questions I intend to answer:
  • Does the H730 pass proper SMART info from the drives to ZFS?
  • Does the cache cause unexpected write issues?
Obviously @sb-jw has shown that IOPS on an H730 shouldn't be a problem. Based on recent reports, as long as I update to the most recent firmware, the H730 should report SMART info without issue. So the biggest issue facing the H730 (in my mind) with ZFS (where it's explicitly stated that RAID controllers with a cache should not be used) is that the cache may cause odd behavior when writing data. In all honesty, I have no clue how to test this, but I will figure out a way. I noticed that earlier in this thread someone mentioned an option to disable drive caching on some PERC models. Once I update everything to the latest firmware, if I see this option I will consider the second question a non-issue. Otherwise, I am very open to suggestions, as the sooner I can settle this, the sooner I can start building my NAS.
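For what it's worth, a minimal way to check both points from a shell might look like the sketch below; the device names are placeholders, and on some PERC setups smartctl needs the -d megaraid,N device type to reach the drives:

Code:
# List the disks the controller exposes to the OS
lsblk -o NAME,MODEL,SERIAL,SIZE
# SMART passthrough check (placeholder device)
smartctl -a /dev/sda
# Inspect / clear the drive's volatile write cache (WCE) bit
sdparm --get=WCE /dev/sda
sdparm --clear=WCE /dev/sda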
 
While updating the firmware on the H730, I found a setting in the Lifecycle Controller that allows the H730's cache to be disabled. I believe the firmware updates are still running, so I will wait to investigate this option in the BIOS, but if I get accurate SMART information (which multiple recent reports say the H730 provides), then I see no reason why the H730 should have the reputation it does. It simply seems to require a bit more configuration to make it as robust for ZFS as an actual HBA, which shouldn't come as a surprise given that it's a higher-end RAID controller.

Again, this depends on both accurate SMART information and the cache actually being disabled, but I see no reason why either of these would fail or cause issues. I understand why the Proxmox and ZFS communities advocate so heavily against using RAID controllers, but it seems the majority of the information specific to the H730 is outdated or wrong. And as shown earlier in this thread, performance should not be an issue either. I see no reason why the H730 will not function perfectly fine with Proxmox, and once I test my aforementioned assumptions, I will share my results. I expect I will be sharing them with Proxmox installed and happily running on my server.
 
Hi!
Sorry to step in, but maybe I can also help here.

I want to repurpose three Dells:

- VRTX M630 (2 blades) with shared PERC8 FW 23.14.06.0013, which presents itself to each blade as an H730 Mini embedded, FW 25.5.6.0009
- T630 with H730P FW 25.5.9.0001
(both report driver version 7.708.07.00 in iDRAC)
- T620 with H710P D1, reflashed to IT mode, now reported by lspci as Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev. 05)

The gen3 boxes are currently running ESXi 6.7 in production and SHOULD work as boxes for anything Linux ;-) The gen2 box runs Debian Trixie for testing purposes and could, I hope, readily serve as an OpenZFS/NAS box.

Since I consider VMware a no-go going forward, I am racking my brain over how to repurpose those nodes; at the very least I will use them as compute nodes under some oVirt/KVM hypervisor. OL9 is my current favorite.

For the T630 I currently have a premium support ticket open to get directions from Dell regarding HCLs for the different options. The other boxes are out of support, so what remains is the question of how to get their odd disk controllers up and running with a contemporary Linux. Nobody is happy when the alternative is installing something like Oracle Linux 6, or scrapping those machines...

Since I've read this thread with great interest, I am curious whether anyone has questions for Dell support regarding the H730P as found in a T630. If so, please contact me and I can forward them.

My ultimate fallback for redesigning our virtualization around oVirt/KVM is to buy some commercially supported central storage and recycle the Dells as hypervisor compute nodes, but maybe there are more elegant solutions in between. I feel sorry for all the local storage that would go to waste.

And of course, I know this is a Proxmox forum, but the key point is that ANY reasonable solution is based on modern Linux kernels and the support therein ;-)

Thanks for reading patiently,
w.
 
@erev

Any updates on your findings? We are dealing with several of these controllers and have not been affected by data loss or other trouble so far (several years in production). We recently found out that one of them might still have the cache enabled, which might be the cause of some specific performance issues. We are disabling it tonight.

If you have a screenshot of your iDRAC config for your test H730P controller, that would be helpful too!

Thanks
 
My drives are recognized with SMART info without issue, and I trust that setting "Use cache for non-RAID disks" to "Disabled" does what it says. Given the various reports that newer firmware has improved HBA mode and ironed out the earlier issues, I am comfortable running the H730 with ZFS. By disabling the cache, using HBA mode, and running the most recent firmware, I believe most, if not all, of the concerns about using a RAID controller with ZFS are addressed. And as sb-jw showed, the H730 is plenty performant for the job. I'm no expert in this, so I would appreciate it if anyone with more ZFS experience or knowledge could raise any concerns about this setup. Once I'm back at my computer I'll check whether the cache setting is in iDRAC; otherwise you may need to access it from System Setup or the Lifecycle Controller.
 
Hey all,

Considering building a large Proxmox Backup Server on an R740 with the H730P controller. Does anyone have long-term experience with this, or can anyone recommend a different controller to support 12 disks?
 
Thanks @alexskysilk - I am seeing a lot of mixed messages regarding this controller for a ZFS solution. I was considering getting 2x AOC-S3008L-L8E controllers instead, as many have raised concerns about the validity of the H730P's HBA mode.
 
/shrug. I actually have this model in production (HBA to Ceph, but close enough). Looking at the data @sb-jw posted, there is no appreciable difference between the numbers observed on an H330 and an H730.

I'd also tell you to consider your use case: backup is not a latency-sensitive application. It's probably not going to make any difference to you whether there IS a difference between a 730 and a pure HBA.
 
Yeah @alexskysilk, I am more concerned with integrity than with performance; in an 8-disk RAID 10 it will still be better than my current NAS solution running off a 4x1Gb LAGG via NFS (the I/O delays on this are killing me).
My primary concern was the caching on the adapter, but considering it can be disabled, that makes me more comfortable.
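For reference, the ZFS equivalent of an 8-disk RAID 10 is a pool made of four mirror vdevs; a minimal sketch with placeholder pool and disk names:

Code:
zpool create -o ashift=12 backup \
  mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 \
  mirror /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4 \
  mirror /dev/disk/by-id/DISK5 /dev/disk/by-id/DISK6 \
  mirror /dev/disk/by-id/DISK7 /dev/disk/by-id/DISK8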
 
Hi, long-time Proxmox user here and soon-to-be licensed Proxmox user for our company, which I have now fully migrated to Proxmox. I also have a bunch of Dells that have been repurposed and also showed some erratic disk behavior. I am currently investigating higher (than other hosts) IO wait times on a Dell PowerEdge 530, but I recently solved some weird disk behavior on an R610 with a PERC H710, which I run in the "unpreferred / dangerous" config of 2x RAID 0 disks surfaced to ZFS as a mirror (RAID 1). The poor behavior turned out to be caused by the BIOS having NUMA configured incorrectly, which manifested as weird hanging and general slowness, including on my disks. The fix in the BIOS was to disable "Node Interleaving", which might be worth a shot if you are just shotgun-trying things. It might be worth noting that I also physically disconnected the RAID controller battery to force write-through mode.
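If anyone wants to check for the same thing, whether Node Interleaving is off (i.e. the NUMA topology is actually exposed to Linux) can be verified with, for example:

Code:
# With Node Interleaving disabled, more than one NUMA node should be listed
numactl --hardware
lscpu | grep -i numa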
 
@erev, after over 4 months, have you encountered any issues? I'm pretty much going through the exact same process with my R730XD. As the RAID card won't interact properly with some of my drives, I'm going to be enabling HBA mode (and disabling the cache) and installing Proxmox. Any updates would be greatly appreciated.
 
