Aggregated disk speed with KVMs on ZVols < 100MB/s

Hi guys,

I have Proxmox VE 4.2, fully up to date, with a ZFS mirror on two SSD disks.
Disk read speed is acceptable everywhere.
Disk write speed is acceptable on the hypervisor as well as in LXC containers:

In PM:
dd if=/dev/zero of=brisi bs=10M count=200 oflag=dsync
200+0 records in
200+0 records out
2097152000 bytes (2.1 GB) copied, 3.77828 s, 555 MB/s

in LXC:
dd if=/dev/zero of=brisi bs=10M count=100 oflag=dsync
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB) copied, 2.04414 s, 513 MB/s

However, in a KVM CentOS 7 VM installed from the minimal ISO, I can never go above 99 MB/s:
dd if=/dev/zero of=brisi bs=10M count=100 oflag=dsync
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB) copied, 11.2772 s, 93.0 MB/s

I tried every cache mode (OK, the unsafe cache mode gives me up to 200 MB/s, but that's not really an option).
I tried every disk bus: IDE, SATA, VIRTIO, and got about the same results.

What could I try in order to increase write speed of VMs on ZVOLs?
What sequential write speeds do you guys get with KVM instances on ZVOLs?


The worst thing is that concurrent writes to multiple ZVols by multiple instances just share this 100 MB/s limit. So if 20 KVM VMs wrote to disk at the same time, each would get just 5 MB/s!
 
Never, ever test ZFS by writing zeros, and never test disk throughput with dd. Both are terrible ideas: please use fio for every read/write test of any kind. The reason is very simple:

Code:
root@proxmox4 ~ > dd if=/dev/zero bs=1M of=/ZERO count=16384
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 12.8724 s, 1.3 GB/s
root@proxmox4 ~ > du /ZERO
1       /ZERO

You actually write almost nothing.
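
If you just want a quick sequential-write number from fio, something along these lines works (a minimal sketch; the file name and size are placeholders). fio fills its I/O buffers with random data by default, so ZFS compression cannot collapse the writes the way it collapses the zeros from dd:

Code:
# Sketch only: run it in the dataset you want to test; file name and size are placeholders
fio --name=seqwrite --filename=fio-testfile --rw=write --bs=1M --size=4G --sync=1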

The write speed of your system depends on your SSDs. I suppose they are not enterprise grade SSDs?
 
Thank you for your answer LnxBil.

Will do tests with fio, without zeros :-) and repost.

I see you tested RAM speed with dd in your example, because you did not flush to disk. I, however, did test sequential disk write speed with dd.

Hopefully you will be able to help me after I post the fio results, as you requested.

Disks are Samsung PRO SSDs.
 
I see you tested RAM speed with dd in your example, because you did not flush to disk. I, however, did test sequential disk write speed with dd.

The point was that there is actually nothing written to disk, so the dd-test is not a write throughput test:

Code:
root@proxmox4 ~ > zfs create -o quota=1G rpool/dd-nonsense-test

root@proxmox4 /rpool/dd-nonsense-test > df -h .
Filesystem              Size  Used Avail Use% Mounted on
rpool/dd-nonsense-test  1.0G  128K  1.0G   1% /rpool/dd-nonsense-test

root@proxmox4 /rpool/dd-nonsense-test > dd if=/dev/zero bs=1M of=ZERO count=4096 oflag=dsync
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 6.55939 s, 655 MB/s

root@proxmox4 /rpool/dd-nonsense-test > ls -l
total 1
-rw-r--r-- 1 root root 4294967296 Sep 23 11:06 ZERO

root@proxmox4 /rpool/dd-nonsense-test > df -h .
Filesystem              Size  Used Avail Use% Mounted on
rpool/dd-nonsense-test  1.0G  128K  1.0G   1% /rpool/dd-nonsense-test

Disks are Samsung PRO SSDs.

So, prosumer and not enterprise SSDs. Which model exactly, an 850? (On a 950 you get more speed, since it's NVMe.)
 
I understand. Thank you for the explanation. I redid the sequential write tests in KVM, LXC and the hypervisor, all on the same SATA Samsung 850 Pro disks in ZFS RAID 1.

It took me a while to find the appropriate fio switches. I finally decided on:
fio --filename=brisi --sync=1 --rw=write --bs=10M --numjobs=1 --iodepth=1 --size=3000MB --name=test

Here are the results. They are in line with my dd tests, even though the speeds are a bit lower now. LXC and PM are at 219 MB/s, while KVM ranges from 69 MB/s to 92 MB/s, still below 100 MB/s.

PM/Hypervisor: 219 MB/s
Code:
root@pp:~# fio --filename=brisi --sync=1 --rw=write --bs=10M --numjobs=1 --iodepth=1 --size=3000MB --name=test
test: (g=0): rw=write, bs=10M-10M/10M-10M/10M-10M, ioengine=sync, iodepth=1
fio-2.1.11
Starting 1 process
Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/240.0MB/0KB /s] [0/24/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=41207: Fri Sep 23 12:28:22 2016
  write: io=3000.0MB, bw=219178KB/s, iops=21, runt= 14016msec
    clat (msec): min=36, max=66, avg=45.46, stdev= 6.48
    lat (msec): min=37, max=67, avg=46.70, stdev= 6.46
    clat percentiles (usec):
    |  1.00th=[37120],  5.00th=[38144], 10.00th=[38656], 20.00th=[40192],
    | 30.00th=[41216], 40.00th=[42240], 50.00th=[43264], 60.00th=[45312],
    | 70.00th=[47360], 80.00th=[50944], 90.00th=[55552], 95.00th=[59648],
    | 99.00th=[64768], 99.50th=[66048], 99.90th=[67072], 99.95th=[67072],
    | 99.99th=[67072]
    bw (KB  /s): min=166054, max=248358, per=99.98%, avg=219131.73, stdev=25084.18
    lat (msec) : 50=79.00%, 100=21.00%
  cpu          : usr=2.68%, sys=21.89%, ctx=790, majf=0, minf=5147
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued    : total=r=0/w=300/d=0, short=r=0/w=0/d=0
    latency   : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
  WRITE: io=3000.0MB, aggrb=219178KB/s, minb=219178KB/s, maxb=219178KB/s, mint=14016msec, maxt=14016msec

LXC: 218 MB/s
Code:
test: (g=0): rw=write, bs=10M-10M/10M-10M/10M-10M, ioengine=sync, iodepth=1
fio-2.2.10
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 3000MB)
Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/240.0MB/0KB /s] [0/24/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=603: Fri Sep 23 10:31:38 2016
  write: io=3000.0MB, bw=218617KB/s, iops=21, runt= 14052msec
    clat (msec): min=37, max=68, avg=45.70, stdev= 6.55
    lat (msec): min=37, max=70, avg=46.82, stdev= 6.57
    clat percentiles (usec):
    |  1.00th=[37632],  5.00th=[38144], 10.00th=[38656], 20.00th=[39680],
    | 30.00th=[41728], 40.00th=[42752], 50.00th=[44288], 60.00th=[45824],
    | 70.00th=[47872], 80.00th=[50432], 90.00th=[55552], 95.00th=[59136],
    | 99.00th=[64256], 99.50th=[66048], 99.90th=[69120], 99.95th=[69120],
    | 99.99th=[69120]
    bw (KB  /s): min=162862, max=244780, per=99.78%, avg=218137.00, stdev=25211.40
    lat (msec) : 50=77.67%, 100=22.33%
  cpu          : usr=2.45%, sys=21.55%, ctx=1065, majf=8, minf=5153
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued    : total=r=0/w=300/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency   : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
  WRITE: io=3000.0MB, aggrb=218616KB/s, minb=218616KB/s, maxb=218616KB/s, mint=14052msec, maxt=14052msec

KVM write back (safe): only 69 MB/s
Code:
test: (g=0): rw=write, bs=10M-10M/10M-10M/10M-10M, ioengine=sync, iodepth=1
fio-2.2.10
Starting 1 process
Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/71536KB/0KB /s] [0/6/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=1071: Fri Sep 23 13:35:01 2016
  write: io=3000.0MB, bw=69534KB/s, iops=6, runt= 44180msec
    clat (msec): min=127, max=186, avg=145.93, stdev= 9.46
    lat (msec): min=128, max=187, avg=147.24, stdev= 9.50
    clat percentiles (msec):
    |  1.00th=[  129],  5.00th=[  133], 10.00th=[  133], 20.00th=[  137],
    | 30.00th=[  141], 40.00th=[  145], 50.00th=[  147], 60.00th=[  149],
    | 70.00th=[  151], 80.00th=[  153], 90.00th=[  159], 95.00th=[  161],
    | 99.00th=[  174], 99.50th=[  176], 99.90th=[  188], 99.95th=[  188],
    | 99.99th=[  188]
    bw (KB  /s): min=65119, max=74068, per=100.00%, avg=69651.92, stdev=1921.22
    lat (msec) : 250=100.00%
  cpu          : usr=0.93%, sys=17.76%, ctx=1360, majf=0, minf=29
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued    : total=r=0/w=300/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency   : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
  WRITE: io=3000.0MB, aggrb=69533KB/s, minb=69533KB/s, maxb=69533KB/s, mint=44180msec, maxt=44180msec
Disk stats (read/write):
    dm-0: ios=0/6588, merge=0/0, ticks=0/90516, in_queue=90516, util=81.28%, aggrios=0/6910, aggrmerge=0/0, aggrticks=0/90811, aggrin_queue=90793, aggrutil=81.31%
  vda: ios=0/6910, merge=0/0, ticks=0/90811, in_queue=90793, util=81.31%

KVM default, no cache: 88 MB/s

KVM direct sync: 92 MB/s

KVM write through: 70 MB/s
 
I normally don't care about MB/sec, I'm only interested in IOPS.

The fio package includes various test scenarios, and I suggest you try iometer-file-access-server.fio, but be aware that running it against a raw device is destructive. You have to point it at a file instead of a raw device if you want to keep your data.
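
A sketch of what I mean (the file path is just a placeholder); passing --filename on the command line points the job at a regular file, so no real device gets overwritten:

Code:
# placeholder path -- fio lays out the test file itself
fio --filename=/root/fio-iometer-test /usr/share/doc/fio/examples/iometer-file-access-server.fio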

Here are my results on a freshly created test ZVOL:

Code:
root@proxmox4 ~ > zfs create -V $(( 256 * 1073741824 )) -o volblocksize=4K rpool/testdisk

root@proxmox4 ~ > fio --filename=/dev/zvol/rpool/testdisk /usr/share/doc/fio/examples/iometer-file-access-server.fio
iometer: (g=0): rw=randrw, bs=512-64K/512-64K/512-64K, ioengine=libaio, iodepth=64
fio-2.1.11
Starting 1 process
Jobs: 1 (f=1): [m(1)] [100.0% done] [126.7MB/33393KB/0KB /s] [29.1K/7488/0 iops] [eta 00m:00s]
iometer: (groupid=0, jobs=1): err= 0: pid=30434: Fri Sep 23 15:00:11 2016
  Description  : [Emulation of Intel IOmeter File Server Access Pattern]
  read : io=3274.5MB, bw=148906KB/s, iops=24435, runt= 22515msec
    slat (usec): min=8, max=678, avg=25.22, stdev=20.30
    clat (usec): min=3, max=6785, avg=2061.15, stdev=476.11
     lat (usec): min=28, max=6831, avg=2086.63, stdev=481.56
    clat percentiles (usec):
     |  1.00th=[ 1448],  5.00th=[ 1544], 10.00th=[ 1592], 20.00th=[ 1688],
     | 30.00th=[ 1752], 40.00th=[ 1832], 50.00th=[ 1928], 60.00th=[ 2040],
     | 70.00th=[ 2224], 80.00th=[ 2416], 90.00th=[ 2704], 95.00th=[ 2992],
     | 99.00th=[ 3632], 99.50th=[ 3888], 99.90th=[ 4512], 99.95th=[ 4832],
     | 99.99th=[ 5472]
    bw (KB  /s): min=125915, max=183202, per=100.00%, avg=148919.00, stdev=17636.33
  write: io=841688KB, bw=37383KB/s, iops=6113, runt= 22515msec
    slat (usec): min=15, max=609, avg=48.48, stdev=30.02
    clat (usec): min=543, max=6609, avg=2068.27, stdev=477.24
     lat (usec): min=565, max=6802, avg=2117.04, stdev=489.29
    clat percentiles (usec):
     |  1.00th=[ 1448],  5.00th=[ 1544], 10.00th=[ 1608], 20.00th=[ 1688],
     | 30.00th=[ 1752], 40.00th=[ 1832], 50.00th=[ 1928], 60.00th=[ 2064],
     | 70.00th=[ 2224], 80.00th=[ 2416], 90.00th=[ 2704], 95.00th=[ 2992],
     | 99.00th=[ 3632], 99.50th=[ 3888], 99.90th=[ 4576], 99.95th=[ 4832],
     | 99.99th=[ 5536]
    bw (KB  /s): min=30439, max=45679, per=100.00%, avg=37386.71, stdev=4546.60
    lat (usec) : 4=0.01%, 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%
    lat (usec) : 750=0.01%, 1000=0.01%
    lat (msec) : 2=56.79%, 4=42.84%, 10=0.36%
  cpu          : usr=7.78%, sys=92.21%, ctx=34, majf=0, minf=4088
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=550156/w=137644/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=3274.5MB, aggrb=148906KB/s, minb=148906KB/s, maxb=148906KB/s, mint=22515msec, maxt=22515msec
  WRITE: io=841688KB, aggrb=37383KB/s, minb=37383KB/s, maxb=37383KB/s, mint=22515msec, maxt=22515msec

root@proxmox4 ~ > zfs destroy rpool/testdisk
 
Thank you for your time and responses LnxBil.

Would you be so kind as to run:
Code:
fio --filename=brisi --sync=1 --rw=write --bs=10M --numjobs=1 --iodepth=1 --size=3000MB --name=test
inside a Linux KVM VM on a ZVol? Preferably in CentOS 7 with XFS, but any Linux KVM instance will do, as long as you run the test inside it.
For comparison, you can also run it outside, as you did in your last post, or even directly on the root filesystem of the Proxmox host.

In my current test I'm interested in sequential write throughput, because it relates to a specific use case.
However, if you are interested in my results with the iometer-file-access-server.fio template, I can run it next week and post them for you.
 
Running your test is kind of strange for SSDs, because it is more suited to hard disks. I get really lousy performance there with numjobs=1.

Code:
root@proxmox4 ~ > fio --filename=/dev/zvol/rpool/testdisk --sync=1 --rw=write --bs=10M --numjobs=1 --iodepth=1 --size=3000MB --name=test
test: (g=0): rw=write, bs=10M-10M/10M-10M/10M-10M, ioengine=sync, iodepth=1
fio-2.1.11
Starting 1 process
Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/0KB/0KB /s] [0/0/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=6372: Fri Sep 23 15:38:30 2016
  write: io=3000.0MB, bw=93721KB/s, iops=9, runt= 32778msec
    clat (msec): min=90, max=182, avg=108.31, stdev=20.39
     lat (msec): min=91, max=183, avg=109.23, stdev=20.73
    clat percentiles (msec):
     |  1.00th=[   92],  5.00th=[   92], 10.00th=[   93], 20.00th=[   93],
     | 30.00th=[   94], 40.00th=[   95], 50.00th=[   96], 60.00th=[  105],
     | 70.00th=[  117], 80.00th=[  126], 90.00th=[  137], 95.00th=[  151],
     | 99.00th=[  178], 99.50th=[  182], 99.90th=[  184], 99.95th=[  184],
     | 99.99th=[  184]
    bw (KB  /s): min=88888, max=109518, per=100.00%, avg=93968.17, stdev=4143.25
    lat (msec) : 100=56.00%, 250=44.00%
  cpu          : usr=0.77%, sys=80.34%, ctx=1262, majf=0, minf=10280
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=300/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: io=3000.0MB, aggrb=93721KB/s, minb=93721KB/s, maxb=93721KB/s, mint=32778msec, maxt=32778msec

SSDs, and virtualization environments in general, operate on multiple I/O streams, so it is best to test exactly that:

Code:
root@proxmox4 ~ > fio --filename=/dev/zvol/rpool/testdisk --sync=1 --rw=write --bs=10M --numjobs=32 --iodepth=1 --runtime=30 --time_based --group_reporting --name test
test: (g=0): rw=write, bs=10M-10M/10M-10M/10M-10M, ioengine=sync, iodepth=1
...
fio-2.1.11
Starting 32 processes
Jobs: 32 (f=32): [W(32)] [100.0% done] [0KB/980.0MB/0KB /s] [0/98/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=32): err= 0: pid=5824: Fri Sep 23 15:36:27 2016
  write: io=35010MB, bw=1162.4MB/s, iops=116, runt= 30121msec
    clat (msec): min=96, max=508, avg=270.92, stdev=76.65
     lat (msec): min=98, max=510, avg=275.18, stdev=76.86
    clat percentiles (msec):
     |  1.00th=[  116],  5.00th=[  137], 10.00th=[  161], 20.00th=[  202],
     | 30.00th=[  235], 40.00th=[  255], 50.00th=[  273], 60.00th=[  293],
     | 70.00th=[  310], 80.00th=[  330], 90.00th=[  367], 95.00th=[  404],
     | 99.00th=[  461], 99.50th=[  474], 99.90th=[  494], 99.95th=[  506],
     | 99.99th=[  510]
    bw (KB  /s): min=20480, max=65536, per=3.14%, avg=37360.49, stdev=7518.28
    lat (msec) : 100=0.09%, 250=36.82%, 500=63.04%, 750=0.06%
  cpu          : usr=1.06%, sys=19.70%, ctx=2121333, majf=0, minf=336888
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=3501/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: io=35010MB, aggrb=1162.4MB/s, minb=1162.4MB/s, maxb=1162.4MB/s, mint=30121msec, maxt=30121msec

I have only RHEL 6 machines here (kernel 2.6.32-573); the test looks like this:

Code:
[root@rhel6-deployment ~]# fio --filename=brisi --sync=1 --rw=write --bs=10M --numjobs=1 --iodepth=1 --size=3000MB --name=test
test: (g=0): rw=write, bs=10M-10M/10M-10M/10M-10M, ioengine=sync, iodepth=1
fio-2.0.13
Starting 1 process
Jobs: 1 (f=1): [W] [100.0% done] [0K/109.8M/0K /s] [0 /10 /0  iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2116: Fri Sep 23 16:06:57 2016
  write: io=3000.0MB, bw=109472KB/s, iops=10 , runt= 28062msec
    clat (msec): min=64 , max=218 , avg=92.38, stdev=18.00
     lat (msec): min=65 , max=219 , avg=93.52, stdev=18.17
    clat percentiles (msec):
     |  1.00th=[   71],  5.00th=[   75], 10.00th=[   76], 20.00th=[   80],
     | 30.00th=[   82], 40.00th=[   85], 50.00th=[   89], 60.00th=[   92],
     | 70.00th=[   96], 80.00th=[  103], 90.00th=[  117], 95.00th=[  127],
     | 99.00th=[  147], 99.50th=[  176], 99.90th=[  219], 99.95th=[  219],
     | 99.99th=[  219]
    bw (KB/s)  : min=80000, max=129620, per=100.00%, avg=109649.10, stdev=9153.19
    lat (msec) : 100=76.00%, 250=24.00%
  cpu          : usr=1.23%, sys=38.54%, ctx=997, majf=1, minf=24
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=300/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=3000.0MB, aggrb=109471KB/s, minb=109471KB/s, maxb=109471KB/s, mint=28062msec, maxt=28062msec

Disk stats (read/write):
    dm-0: ios=71/763338, merge=0/0, ticks=14/11809165, in_queue=11809199, util=65.99%, aggrios=83/8491, aggrmerge=0/759975, aggrticks=18/137787, aggrin_queue=137798, aggrutil=65.97%
  vda: ios=83/8491, merge=0/759975, ticks=18/137787, in_queue=137798, util=65.97%

The VM has only 4 processors, with cache=none and iothread activated (see the config sketch at the end of this post), so multiple streams increase the throughput as well:

Code:
[root@rhel6-deployment ~]# fio --filename=brisi --sync=1 --rw=write --bs=10M --numjobs=32 --iodepth=1 --runtime=30 --time_based --group_reporting --name test
test: (g=0): rw=write, bs=10M-10M/10M-10M/10M-10M, ioengine=sync, iodepth=1
...
test: (g=0): rw=write, bs=10M-10M/10M-10M/10M-10M, ioengine=sync, iodepth=1
fio-2.0.13
Starting 32 processes
Jobs: 15 (f=15): [_W_WWWWW_WWWW_____W_W__WW___W___] [5.9% done] [0K/249.8M/0K /s] [0 /24 /0  iops] [eta 08m:29s]
test: (groupid=0, jobs=32): err= 0: pid=2119: Fri Sep 23 16:07:42 2016
  write: io=6160.0MB, bw=203919KB/s, iops=19 , runt= 30933msec
    clat (msec): min=81 , max=2758 , avg=1581.37, stdev=609.35
     lat (msec): min=82 , max=2759 , avg=1584.55, stdev=609.58
    clat percentiles (msec):
     |  1.00th=[   99],  5.00th=[  799], 10.00th=[  947], 20.00th=[ 1057],
     | 30.00th=[ 1172], 40.00th=[ 1254], 50.00th=[ 1467], 60.00th=[ 1926],
     | 70.00th=[ 2073], 80.00th=[ 2180], 90.00th=[ 2311], 95.00th=[ 2442],
     | 99.00th=[ 2573], 99.50th=[ 2638], 99.90th=[ 2769], 99.95th=[ 2769],
     | 99.99th=[ 2769]
    bw (KB/s)  : min= 3711, max=22555, per=3.52%, avg=7184.43, stdev=2714.09
    lat (msec) : 100=1.46%, 250=2.76%, 500=0.16%, 750=0.16%, 1000=10.71%
    lat (msec) : 2000=49.03%, >=2000=35.71%
  cpu          : usr=0.22%, sys=2.03%, ctx=64731, majf=0, minf=937
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=616/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=6160.0MB, aggrb=203919KB/s, minb=203919KB/s, maxb=203919KB/s, mint=30933msec, maxt=30933msec

Disk stats (read/write):
    dm-0: ios=16/1565568, merge=0/0, ticks=11/23002095, in_queue=23015505, util=88.08%, aggrios=16/18841, aggrmerge=0/1554428, aggrticks=11/278525, aggrin_queue=278508, aggrutil=87.98%
  vda: ios=16/18841, merge=0/1554428, ticks=11/278525, in_queue=278508, util=87.98%
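
For reference, cache mode and the iothread flag are set per disk in the VM configuration; a sketch only, with placeholder VM ID, storage and volume names:

Code:
# /etc/pve/qemu-server/<vmid>.conf -- storage and volume names below are placeholders
virtio0: local-zfs:vm-100-disk-1,cache=none,iothread=1,size=32G

The same options can also be set from the GUI or with qm set.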
 
