Hi people!
This is my second attempt at using ZFS. I've spent a lot of time digging through Google and this forum, and the results are not bad, but the high I/O wait bothers me.
I'll try to keep it short. This is a test configuration; the goal is to learn.
I have 80 GB ECC RAM (ARC capped at 32 GB, see below)
2x 1 TB WD SATA drives (Advanced Format)
120 GB NVMe (ZIL/SLOG)
512 GB SATA SSD (L2ARC cache)
.... and a 500 GB HDD for Proxmox itself
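The ARC is capped with the usual zfs_arc_max module option (32 GiB in bytes), roughly like this:
# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=34359738368
# rebuild the initramfs so the limit also applies at boot
update-initramfs -u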
Proxmox 5.4-3, installed from the Proxmox ISO (not on top of Debian).
zpool create -o ashift=12 rpool mirror /dev/sdb /dev/sdc
zpool add rpool log /dev/nvme0n1
zpool add rpool cache /dev/sdd
zfs create -V 50G -o compression=lz4 -o volblocksize=64k rpool/64k
dedup is off, checksum is on
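For reference, the layout and the zvol properties can be double-checked with plain zpool/zfs commands:
zpool status -v rpool
zfs get compression,volblocksize,dedup,checksum rpool/64k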
and ran a simple sequential fio write test:
[writetest]
blocksize=64k
filename=/dev/zvol/rpool/vm-100-disk-1
rw=write
direct=1
buffered=0
ioengine=libaio
iodepth=1
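(The total size isn't shown in the job file; it was 100 GiB, matching io=102400MB in the output below, so assume an extra size=100g line. Launched with plain fio; the job file name here is just a guess.)
fio writetest.fio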
fio-2.16
Starting 1 process
Jobs: 1 (f=1): [f(1)] [100.0% done] [0KB/0KB/0KB /s] [0/0/0 iops] [eta 00m:00s]
writetest: (groupid=0, jobs=1): err= 0: pid=17204: Fri Jun 21 20:58:05 2019
write: io=102400MB, bw=211308KB/s, iops=3301, runt=496231msec
slat (usec): min=5, max=3670, avg=12.23, stdev=12.54
clat (usec): min=3, max=522139, avg=282.41, stdev=1785.75
lat (usec): min=28, max=522152, avg=296.41, stdev=1785.83
clat percentiles (usec):
| 1.00th=[ 24], 5.00th=[ 195], 10.00th=[ 203], 20.00th=[ 213],
| 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 239],
| 70.00th=[ 255], 80.00th=[ 302], 90.00th=[ 386], 95.00th=[ 402],
| 99.00th=[ 434], 99.50th=[ 454], 99.90th=[ 1896], 99.95th=[15168],
| 99.99th=[49408]
lat (usec) : 4=0.01%, 10=0.01%, 20=0.01%, 50=2.57%, 100=0.25%
lat (usec) : 250=64.98%, 500=31.91%, 750=0.08%, 1000=0.04%
lat (msec) : 2=0.09%, 4=0.01%, 10=0.02%, 20=0.01%, 50=0.04%
lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%
cpu : usr=4.00%, sys=5.41%, ctx=1648427, majf=0, minf=27
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=1638400/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: io=102400MB, aggrb=211308KB/s, minb=211308KB/s, maxb=211308KB/s, mint=496231msec, maxt=496231msec
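To see where the writes actually land and where the wait comes from, the pool and the raw disks can be watched during the run with the usual tools:
zpool iostat -v rpool 1   # per-vdev ops and bandwidth, shows whether the log/cache devices are hit
iostat -x 1               # per-device await and %util, where the iowait shows up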
I tried playing with:
options zfs zfs_txg_timeout=30
options zfs zfs_vdev_sync_read_min_active=1
options zfs zfs_vdev_sync_read_max_active=1
options zfs zfs_vdev_sync_write_min_active=1
options zfs zfs_vdev_sync_write_max_active=1
options zfs zfs_vdev_async_read_min_active=1
options zfs zfs_vdev_async_read_max_active=1
options zfs zfs_vdev_async_write_min_active=1
options zfs zfs_vdev_async_write_max_active=1
options zfs zfs_vdev_scrub_min_active=1
options zfs zfs_vdev_scrub_max_active=1
and
options zfs zfs_vdev_max_active=1
and disabled NCQ:
echo 1 > /sys/block/sdc/device/queue_depth
echo 1 > /sys/block/sdb/device/queue_depth
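(The zfs options above go into /etc/modprobe.d/zfs.conf; for a quick test the same parameters can also be changed at runtime under /sys/module/zfs/parameters, e.g.:)
# runtime equivalents of the modprobe.d lines above
echo 30 > /sys/module/zfs/parameters/zfs_txg_timeout
echo 1 > /sys/module/zfs/parameters/zfs_vdev_max_active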
The iowait jumped to 8%.
Is this normal for ZFS? Why does a simple sequential write test generate such high iowait?
Thanx!