FSYNCS/SECOND: 5.19

losta87

Member
Aug 4, 2020
Hello, first of all, thank you in advance to anyone who can help me solve this problem.

I have read that running on software RAID is not ideal, but that is how my system is configured.

The specific test:

root@proxmox01 ~ # pveperf /var/lib/vz/
CPU BOGOMIPS:      54404.16
REGEX/SECOND:      3304415
HD SIZE:           3633.81 GB (/dev/md2)
BUFFERED READS:    64.48 MB/sec
AVERAGE SEEK TIME: 36.69 ms
FSYNCS/SECOND:     5.19
DNS EXT:           15.41 ms


RAID:

root@proxmox01 ~ # mdadm --detail /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Wed Jan 8 15:50:03 2020
Raid Level : raid0
Array Size : 3872157696 (3692.78 GiB 3965.09 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Wed Jan 8 15:50:03 2020
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Chunk Size : 512K

Consistency Policy : none

Name : rescue:2
UUID : d09da0ee:6413061f:ddc4876d:9f375754
Events : 0

Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3
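As an aside on the layout above: with raid0 and a 512K chunk, consecutive 512K chunks of the array alternate round-robin between the two members. A sketch of that mapping (my own illustration, not mdadm code):

```python
CHUNK = 512 * 1024   # "Chunk Size : 512K" from mdadm --detail
DEVICES = 2          # "Raid Devices : 2" (sda3, sdb3)

def raid0_member(offset, chunk=CHUNK, ndev=DEVICES):
    """Which member device a byte offset of a RAID0 array lands on:
    chunks are striped round-robin across the members."""
    return (offset // chunk) % ndev

# chunk 0 -> member 0, chunk 1 -> member 1, chunk 2 -> member 0, ...
```

This is why raid0 roughly doubles sequential throughput but does nothing for the latency of a single small synchronous write, which still lands on exactly one disk.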
 
Well, mdraid is not supported by us. But it's best to run a storage benchmark to see what you get out of it.
https://pve.proxmox.com/wiki/Benchmarking_Storage



This is the result. Is it correct? Will Proxmox work well this way?



Code:
root@proxmox01 ~ # fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name seq_read --filename=/dev/md2
seq_read: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.12
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=74.0MiB/s][r=18.9k IOPS][eta 00m:00s]
seq_read: (groupid=0, jobs=1): err= 0: pid=29268: Wed Aug  5 10:21:10 2020
  read: IOPS=17.7k, BW=69.2MiB/s (72.5MB/s)(4150MiB/60001msec)
    slat (usec): min=2, max=233, avg= 3.65, stdev= 2.09
    clat (nsec): min=1124, max=79172k, avg=51668.47, stdev=551235.38
     lat (usec): min=33, max=79203, avg=55.41, stdev=551.35
    clat percentiles (usec):
     |  1.00th=[   32],  5.00th=[   33], 10.00th=[   33], 20.00th=[   34],
     | 30.00th=[   35], 40.00th=[   36], 50.00th=[   37], 60.00th=[   39],
     | 70.00th=[   40], 80.00th=[   41], 90.00th=[   43], 95.00th=[   47],
     | 99.00th=[   64], 99.50th=[   97], 99.90th=[  412], 99.95th=[11207],
     | 99.99th=[26084]
   bw (  KiB/s): min=  656, max=94912, per=100.00%, avg=71408.54, stdev=16317.65, samples=119
   iops        : min=  164, max=23728, avg=17852.10, stdev=4079.41, samples=119
  lat (usec)   : 2=0.01%, 10=0.01%, 20=0.01%, 50=96.98%, 100=2.54%
  lat (usec)   : 250=0.38%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.03%, 20=0.03%, 50=0.02%
  lat (msec)   : 100=0.01%
  cpu          : usr=3.30%, sys=9.32%, ctx=1062413, majf=0, minf=12
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1062273,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=69.2MiB/s (72.5MB/s), 69.2MiB/s-69.2MiB/s (72.5MB/s-72.5MB/s), io=4150MiB (4351MB), run=60001-60001msec

Disk stats (read/write):
    md2: ios=1062067/2401, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=531292/1086, aggrmerge=182/409, aggrticks=29092/11802, aggrin_queue=18760, aggrutil=70.58%
  sdb: ios=531221/975, merge=92/406, ticks=27989/12417, in_queue=18468, util=69.70%
  sda: ios=531363/1197, merge=272/413, ticks=30195/11188, in_queue=19052, util=70.58%




Code:
root@proxmox01 ~ # fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=1M --numjobs=1 --iodepth=1 --runtime=60 --time_based --name seq_read --filename=/dev/md2
seq_read: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1
fio-3.12
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=202MiB/s][r=202 IOPS][eta 00m:00s]
seq_read: (groupid=0, jobs=1): err= 0: pid=29871: Wed Aug  5 10:23:05 2020
  read: IOPS=214, BW=214MiB/s (225MB/s)(12.6GiB/60088msec)
    slat (usec): min=25, max=23212, avg=49.68, stdev=204.50
    clat (usec): min=952, max=223072, avg=4594.92, stdev=12852.71
     lat (usec): min=993, max=223108, avg=4644.86, stdev=12857.21
    clat percentiles (usec):
     |  1.00th=[   996],  5.00th=[  1004], 10.00th=[  1012], 20.00th=[  1037],
     | 30.00th=[  2769], 40.00th=[  2900], 50.00th=[  2999], 60.00th=[  3032],
     | 70.00th=[  3392], 80.00th=[  3654], 90.00th=[  3720], 95.00th=[  8291],
     | 99.00th=[ 63177], 99.50th=[122160], 99.90th=[143655], 99.95th=[152044],
     | 99.99th=[204473]
   bw (  KiB/s): min=49152, max=307200, per=100.00%, avg=219679.23, stdev=56072.67, samples=120
   iops        : min=   48, max=  300, avg=214.53, stdev=54.76, samples=120
  lat (usec)   : 1000=2.59%
  lat (msec)   : 2=22.02%, 4=68.78%, 10=2.05%, 20=2.11%, 50=1.41%
  lat (msec)   : 100=0.09%, 250=0.96%
  cpu          : usr=0.10%, sys=1.36%, ctx=12899, majf=0, minf=267
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=12873,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=214MiB/s (225MB/s), 214MiB/s-214MiB/s (225MB/s-225MB/s), io=12.6GiB (13.5GB), run=60088-60088msec

Disk stats (read/write):
    md2: ios=26309/1767, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=13016/970, aggrmerge=157/202, aggrticks=45721/20139, aggrin_queue=40554, aggrutil=61.85%
  sdb: ios=13006/927, merge=92/198, ticks=42981/19489, in_queue=38904, util=61.85%
  sda: ios=13027/1013, merge=222/206, ticks=48461/20790, in_queue=42204, util=61.84%
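A quick sanity check on the two runs above (plain arithmetic, nothing fio-specific): bandwidth is just IOPS times block size, which is why the 4K job reports ~69 MiB/s at ~17.7k IOPS while the 1M job reports ~214 MiB/s at only ~214 IOPS:

```python
def bw_mib_per_s(iops, block_bytes):
    """Bandwidth implied by an IOPS figure at a given block size."""
    return iops * block_bytes / (1024 * 1024)

# 4K job:  ~17.7k IOPS * 4096 B  -> ~69 MiB/s  (matches BW=69.2MiB/s)
# 1M job:  ~214 IOPS   * 1 MiB   -> ~214 MiB/s (matches BW=214MiB/s)
```

Note that both jobs are sequential reads, so neither really exercises the fsync path that pveperf flagged.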
 
This is the result. Is it correct? Will Proxmox work well this way?
That's a very broad question, and one you will need to answer for yourselves.
 
