ZFS Performance

spaxxilein

Hi!

We have the following setup:

EPYC 7282 server
128 GB ECC RAM
6x Exos 7E8 6 TB in 2x RAIDZ1, 3 disks per vdev
3x 1 TB consumer SSDs in RAIDZ1
10 Gbit Intel X520-DA2

When I write to or read from our externally attached NAS (which also has plenty of performance), I only get about 150 MB/s over the 10 Gbit link, regardless of whether the SSD RAID or the HDD RAID is involved. At times the throughput drops to just 50 MB/s and the transfer stalls for several seconds.

I tested the network throughput with iperf; that gives 9.4 Gbit/s.
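
(The iperf test was nothing special, roughly along these lines; iperf3 shown here, and the IP is just a placeholder.)

Code:
# on the Proxmox host
iperf3 -s
# from the NAS side; 192.168.1.10 stands in for the host's address
iperf3 -c 192.168.1.10 -t 30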

Can anyone explain what might be causing this?

Best regards,

spaxxilein
 
Have you already run benchmarks directly on the pools? https://pve.proxmox.com/wiki/Benchmarking_Storage

Code:
fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name seq_read --filename=/hddpool/test

Can I run the command as-is if my HDD pool is mounted under /hddpool?

I have now tested it like this:

Code:
root@pmx:/hddpool# fio --filename=test --sync=1 --rw=read --bs=4k --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300 && rm test
test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=4
fio-3.12
Starting 1 process
test: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [R(1)][100.0%][r=1538MiB/s][r=394k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=10029: Fri Jul 24 11:12:38 2020
  read: IOPS=200k, BW=782MiB/s (820MB/s)(10.0GiB/13097msec)
    clat (nsec): min=1490, max=314201k, avg=4674.13, stdev=300059.77
     lat (nsec): min=1530, max=314201k, avg=4720.09, stdev=300060.46
    clat percentiles (nsec):
     |  1.00th=[   1528],  5.00th=[   1544], 10.00th=[   1544],
     | 20.00th=[   1560], 30.00th=[   1576], 40.00th=[   1576],
     | 50.00th=[   1576], 60.00th=[   1592], 70.00th=[   1608],
     | 80.00th=[   1736], 90.00th=[   1848], 95.00th=[   2288],
     | 99.00th=[  21632], 99.50th=[  24448], 99.90th=[  58112],
     | 99.95th=[ 872448], 99.99th=[4079616]
   bw (  KiB/s): min=70656, max=1578840, per=99.29%, avg=794965.50, stdev=512318.38, samples=26
   iops        : min=17664, max=394710, avg=198741.42, stdev=128079.64, samples=26
  lat (usec)   : 2=92.57%, 4=4.13%, 10=0.14%, 20=1.34%, 50=1.67%
  lat (usec)   : 100=0.06%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.02%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
  lat (msec)   : 100=0.01%, 250=0.01%, 500=0.01%
  cpu          : usr=8.33%, sys=44.42%, ctx=2237, majf=0, minf=12
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2621440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: bw=782MiB/s (820MB/s), 782MiB/s-782MiB/s (820MB/s-820MB/s), io=10.0GiB (10.7GB), run=13097-13097msec
 
Should work; maybe also try it a second time with 8k block size.
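Something like this, only with the block size changed (assuming the test file again lives on /hddpool):

Code:
fio --filename=/hddpool/test --sync=1 --rw=read --bs=8k --numjobs=1 --iodepth=4 --group_reporting --name=seq_read_8k --filesize=10G --runtime=300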
 
First test, sequential read, 4K block size:

Code:
root@pmx:/hddpool# fio --filename=test --sync=1 --rw=read --bs=4k --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300 && rm test
test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=4
fio-3.12
Starting 1 process
test: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [R(1)][100.0%][r=1538MiB/s][r=394k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=10029: Fri Jul 24 11:12:38 2020
  read: IOPS=200k, BW=782MiB/s (820MB/s)(10.0GiB/13097msec)
    clat (nsec): min=1490, max=314201k, avg=4674.13, stdev=300059.77
     lat (nsec): min=1530, max=314201k, avg=4720.09, stdev=300060.46
    clat percentiles (nsec):
     |  1.00th=[   1528],  5.00th=[   1544], 10.00th=[   1544],
     | 20.00th=[   1560], 30.00th=[   1576], 40.00th=[   1576],
     | 50.00th=[   1576], 60.00th=[   1592], 70.00th=[   1608],
     | 80.00th=[   1736], 90.00th=[   1848], 95.00th=[   2288],
     | 99.00th=[  21632], 99.50th=[  24448], 99.90th=[  58112],
     | 99.95th=[ 872448], 99.99th=[4079616]
   bw (  KiB/s): min=70656, max=1578840, per=99.29%, avg=794965.50, stdev=512318.38, samples=26
   iops        : min=17664, max=394710, avg=198741.42, stdev=128079.64, samples=26
  lat (usec)   : 2=92.57%, 4=4.13%, 10=0.14%, 20=1.34%, 50=1.67%
  lat (usec)   : 100=0.06%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.02%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
  lat (msec)   : 100=0.01%, 250=0.01%, 500=0.01%
  cpu          : usr=8.33%, sys=44.42%, ctx=2237, majf=0, minf=12
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2621440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: bw=782MiB/s (820MB/s), 782MiB/s-782MiB/s (820MB/s-820MB/s), io=10.0GiB (10.7GB), run=13097-13097msec

Second test, sequential read, again 4K block size:
Code:
root@pmx:/hddpool# fio --filename=test --sync=1 --rw=read --bs=4k --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300 && rm test
test: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=4
fio-3.12
Starting 1 process
test: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [R(1)][100.0%][r=1530MiB/s][r=392k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=32599: Fri Jul 24 11:13:59 2020
  read: IOPS=391k, BW=1526MiB/s (1600MB/s)(10.0GiB/6710msec)
    clat (nsec): min=1500, max=161098, avg=2241.68, stdev=3502.15
     lat (nsec): min=1530, max=244007, avg=2283.77, stdev=3505.92
    clat percentiles (nsec):
     |  1.00th=[ 1528],  5.00th=[ 1544], 10.00th=[ 1544], 20.00th=[ 1544],
     | 30.00th=[ 1560], 40.00th=[ 1576], 50.00th=[ 1576], 60.00th=[ 1576],
     | 70.00th=[ 1592], 80.00th=[ 1672], 90.00th=[ 1784], 95.00th=[ 2256],
     | 99.00th=[21376], 99.50th=[22400], 99.90th=[25984], 99.95th=[30080],
     | 99.99th=[50944]
   bw (  MiB/s): min= 1474, max= 1565, per=99.99%, avg=1525.97, stdev=29.03, samples=13
   iops        : min=377498, max=400672, avg=390648.85, stdev=7431.41, samples=13
  lat (usec)   : 2=94.03%, 4=2.72%, 10=0.10%, 20=1.03%, 50=2.11%
  lat (usec)   : 100=0.01%, 250=0.01%
  cpu          : usr=18.17%, sys=81.73%, ctx=203, majf=0, minf=11
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2621440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: bw=1526MiB/s (1600MB/s), 1526MiB/s-1526MiB/s (1600MB/s-1600MB/s), io=10.0GiB (10.7GB), run=6710-6710msec

First test, 8K block size:
Code:
root@pmx:/hddpool# fio --filename=test --sync=1 --rw=read --bs=8k --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300 && rm test
test: (g=0): rw=read, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=psync, iodepth=4
fio-3.12
Starting 1 process
test: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [R(1)][100.0%][r=2417MiB/s][r=309k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=20051: Fri Jul 24 11:14:37 2020
  read: IOPS=308k, BW=2403MiB/s (2519MB/s)(10.0GiB/4262msec)
    clat (nsec): min=1600, max=197136, avg=2932.52, stdev=4638.31
     lat (nsec): min=1640, max=197176, avg=2977.28, stdev=4638.54
    clat percentiles (nsec):
     |  1.00th=[ 1640],  5.00th=[ 1656], 10.00th=[ 1656], 20.00th=[ 1672],
     | 30.00th=[ 1688], 40.00th=[ 1688], 50.00th=[ 1704], 60.00th=[ 1720],
     | 70.00th=[ 1768], 80.00th=[ 1800], 90.00th=[ 2192], 95.00th=[19072],
     | 99.00th=[22144], 99.50th=[23168], 99.90th=[25728], 99.95th=[27776],
     | 99.99th=[44288]
   bw (  MiB/s): min= 2347, max= 2419, per=100.00%, avg=2404.28, stdev=23.92, samples=8
   iops        : min=300512, max=309722, avg=307748.50, stdev=3062.18, samples=8
  lat (usec)   : 2=88.35%, 4=5.33%, 10=0.05%, 20=2.93%, 50=3.34%
  lat (usec)   : 100=0.01%, 250=0.01%
  cpu          : usr=15.00%, sys=84.91%, ctx=27, majf=0, minf=12
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1310720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: bw=2403MiB/s (2519MB/s), 2403MiB/s-2403MiB/s (2519MB/s-2519MB/s), io=10.0GiB (10.7GB), run=4262-4262msec

Second test, sequential read, 8K block size:
Code:
root@pmx:/hddpool# fio --filename=test --sync=1 --rw=read --bs=8k --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300 && rm test
test: (g=0): rw=read, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=psync, iodepth=4
fio-3.12
Starting 1 process
test: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [R(1)][100.0%][r=2392MiB/s][r=306k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=13369: Fri Jul 24 11:16:14 2020
  read: IOPS=300k, BW=2340MiB/s (2454MB/s)(10.0GiB/4376msec)
    clat (nsec): min=1580, max=3847.7k, avg=3020.20, stdev=5985.64
     lat (nsec): min=1630, max=3847.8k, avg=3065.89, stdev=5985.86
    clat percentiles (nsec):
     |  1.00th=[ 1624],  5.00th=[ 1640], 10.00th=[ 1640], 20.00th=[ 1656],
     | 30.00th=[ 1656], 40.00th=[ 1672], 50.00th=[ 1688], 60.00th=[ 1704],
     | 70.00th=[ 1784], 80.00th=[ 1880], 90.00th=[ 2384], 95.00th=[19840],
     | 99.00th=[23168], 99.50th=[24192], 99.90th=[29056], 99.95th=[34560],
     | 99.99th=[68096]
   bw (  MiB/s): min= 2178, max= 2389, per=100.00%, avg=2340.06, stdev=69.93, samples=8
   iops        : min=278892, max=305882, avg=299528.00, stdev=8951.72, samples=8
  lat (usec)   : 2=84.66%, 4=8.96%, 10=0.10%, 20=1.61%, 50=4.65%
  lat (usec)   : 100=0.02%, 250=0.01%, 500=0.01%, 750=0.01%
  lat (msec)   : 4=0.01%
  cpu          : usr=15.61%, sys=84.18%, ctx=95, majf=0, minf=12
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1310720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: bw=2340MiB/s (2454MB/s), 2340MiB/s-2340MiB/s (2454MB/s-2454MB/s), io=10.0GiB (10.7GB), run=4376-4376msec

These read numbers are coming from the ARC, right?
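
(To check that, I suppose one could watch the ARC counters while the test runs, or put the test file on a dataset that only caches metadata so the reads have to hit the disks; the dataset name below is just an example.)

Code:
# current ARC size plus hit/miss counters (standard OpenZFS statistics)
grep -E '^(size|hits|misses) ' /proc/spl/kstat/zfs/arcstats
# or live, once per second, if the arcstat helper is installed
arcstat 1

# throwaway dataset that caches metadata only, so file data is read from disk
zfs create -o primarycache=metadata hddpool/fiotest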
 
Write results, sequential, 4K block size:
Code:
root@pmx:/hddpool# fio --filename=test --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=10G --runtime=300 && rm test
test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=4
fio-3.12
Starting 1 process
test: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [W(1)][100.0%][w=448KiB/s][w=112 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=25277: Fri Jul 24 11:22:02 2020
  write: IOPS=89, BW=357KiB/s (365kB/s)(105MiB/300007msec); 0 zone resets
    clat (msec): min=3, max=811, avg=11.21, stdev=11.34
     lat (msec): min=3, max=811, avg=11.21, stdev=11.34
    clat percentiles (msec):
     |  1.00th=[    7],  5.00th=[    8], 10.00th=[    9], 20.00th=[    9],
     | 30.00th=[    9], 40.00th=[    9], 50.00th=[    9], 60.00th=[    9],
     | 70.00th=[    9], 80.00th=[   10], 90.00th=[   17], 95.00th=[   26],
     | 99.00th=[   59], 99.50th=[   79], 99.90th=[  120], 99.95th=[  142],
     | 99.99th=[  275]
   bw (  KiB/s): min=   32, max=  480, per=100.00%, avg=357.30, stdev=108.20, samples=599
   iops        : min=    8, max=  120, avg=89.28, stdev=27.04, samples=599
  lat (msec)   : 4=0.01%, 10=83.25%, 20=8.07%, 50=7.26%, 100=1.17%
  lat (msec)   : 250=0.22%, 500=0.01%, 1000=0.01%
  cpu          : usr=0.05%, sys=0.46%, ctx=53523, majf=0, minf=11
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,26758,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
  WRITE: bw=357KiB/s (365kB/s), 357KiB/s-357KiB/s (365kB/s-365kB/s), io=105MiB (110MB), run=300007-300007msec

Looks very odd, doesn't it?
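
(As a comparison I could run the same 4k write once without --sync=1, so the writes can go through the normal async path; just a sketch, same throwaway file as before.)

Code:
# identical job, but without forcing a sync per IO, to see what the sync requirement costs
fio --filename=test --rw=write --bs=4k --numjobs=1 --iodepth=4 --group_reporting --name=test_async --filesize=10G --runtime=300 && rm test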
 
Does anyone here have an idea what could be causing this? What strikes me is that during copies over the LAN the iostats fluctuate heavily. At times the HDDs write at 600-800 MB/s for a second, then everything drops to 1 MB/s for about 10 seconds.

What could be causing this behavior?
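
(I'm watching this with something along these lines, per-vdev bandwidth from the pool and per-disk utilization from the OS, both at one-second intervals:)

Code:
# per-vdev bandwidth and ops, refreshed every second
zpool iostat -v hddpool 1
# per-disk latency/utilization view from the OS side
iostat -xm 1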

Best regards
 
Interesting, I'll follow along here. We had similarly odd behavior; that was the reason we now only use RAID10. Somehow I never warmed up to the RAIDZ stuff. ZFS RAID10 just works. An explanation would be nice.
 
