Yes, 100% sure, I am in the mount point of the RAID 10.
Are you sure you are running fio on the RAID 10? Specify filename=/mount/point/file explicitly.
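To make sure the test file really lands on the RAID 10, the mount point can be checked first. A minimal sketch, assuming the datastore path /hdd-storage used in the runs below:

```shell
# Show which block device backs the target directory
# (/hdd-storage is the path from the fio runs in this thread)
findmnt -T /hdd-storage
# The SOURCE column should show the md device of the RAID 10, e.g. /dev/md1
```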
For throughput there is, for example, the following test:
fio --rw=randrw --name=rndrw --bs=1024k --size=200M --direct=1 --filename=/mount/point/file --numjobs=4 --ioengine=libaio --iodepth=32
fio --rw=randrw --name=rndrw --bs=1024k --size=200M --direct=1 --filename=/hdd-storage/test --numjobs=4 --ioengine=libaio --iodepth=32
rndrw: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
...
fio-3.33
Starting 4 processes
rndrw: Laying out IO file (1 file / 200MiB)
rndrw: (groupid=0, jobs=1): err= 0: pid=3083981: Wed Jun 19 19:54:32 2024
read: IOPS=115, BW=116MiB/s (122MB/s)(99.0MiB/854msec)
slat (usec): min=41, max=239, avg=75.25, stdev=30.61
clat (msec): min=14, max=327, avg=123.87, stdev=81.85
lat (msec): min=14, max=327, avg=123.94, stdev=81.85
clat percentiles (msec):
| 1.00th=[ 15], 5.00th=[ 15], 10.00th=[ 40], 20.00th=[ 89],
| 30.00th=[ 95], 40.00th=[ 99], 50.00th=[ 100], 60.00th=[ 102],
| 70.00th=[ 116], 80.00th=[ 125], 90.00th=[ 300], 95.00th=[ 300],
| 99.00th=[ 330], 99.50th=[ 330], 99.90th=[ 330], 99.95th=[ 330],
| 99.99th=[ 330]
bw ( KiB/s): min=63488, max=63488, per=13.82%, avg=63488.00, stdev= 0.00, samples=1
iops : min= 62, max= 62, avg=62.00, stdev= 0.00, samples=1
write: IOPS=118, BW=118MiB/s (124MB/s)(101MiB/854msec); 0 zone resets
slat (usec): min=122, max=227520, avg=8231.77, stdev=24260.81
clat (msec): min=14, max=327, avg=130.19, stdev=83.97
lat (msec): min=14, max=350, avg=138.42, stdev=86.39
clat percentiles (msec):
| 1.00th=[ 15], 5.00th=[ 39], 10.00th=[ 66], 20.00th=[ 90],
| 30.00th=[ 95], 40.00th=[ 100], 50.00th=[ 102], 60.00th=[ 116],
| 70.00th=[ 122], 80.00th=[ 144], 90.00th=[ 300], 95.00th=[ 321],
| 99.00th=[ 330], 99.50th=[ 330], 99.90th=[ 330], 99.95th=[ 330],
| 99.99th=[ 330]
bw ( KiB/s): min=75776, max=75776, per=15.15%, avg=75776.00, stdev= 0.00, samples=1
iops : min= 74, max= 74, avg=74.00, stdev= 0.00, samples=1
lat (msec) : 20=5.00%, 50=5.50%, 100=39.50%, 250=34.50%, 500=15.50%
cpu : usr=1.99%, sys=1.88%, ctx=409, majf=0, minf=11
IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=84.5%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.6%, 64=0.0%, >=64=0.0%
issued rwts: total=99,101,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
rndrw: (groupid=0, jobs=1): err= 0: pid=3083982: Wed Jun 19 19:54:32 2024
read: IOPS=109, BW=110MiB/s (115MB/s)(93.0MiB/848msec)
slat (usec): min=41, max=137, avg=67.45, stdev=21.71
clat (msec): min=8, max=327, avg=129.31, stdev=88.97
lat (msec): min=8, max=327, avg=129.38, stdev=88.96
clat percentiles (msec):
| 1.00th=[ 9], 5.00th=[ 60], 10.00th=[ 72], 20.00th=[ 78],
| 30.00th=[ 78], 40.00th=[ 94], 50.00th=[ 99], 60.00th=[ 101],
| 70.00th=[ 107], 80.00th=[ 126], 90.00th=[ 305], 95.00th=[ 326],
| 99.00th=[ 330], 99.50th=[ 330], 99.90th=[ 330], 99.95th=[ 330],
| 99.99th=[ 330]
bw ( KiB/s): min=69632, max=69632, per=15.16%, avg=69632.00, stdev= 0.00, samples=1
iops : min= 68, max= 68, avg=68.00, stdev= 0.00, samples=1
write: IOPS=126, BW=126MiB/s (132MB/s)(107MiB/848msec); 0 zone resets
slat (usec): min=141, max=227120, avg=7774.02, stdev=23627.32
clat (msec): min=8, max=344, avg=119.70, stdev=83.08
lat (msec): min=8, max=345, avg=127.47, stdev=84.89
clat percentiles (msec):
| 1.00th=[ 9], 5.00th=[ 33], 10.00th=[ 69], 20.00th=[ 77],
| 30.00th=[ 83], 40.00th=[ 94], 50.00th=[ 95], 60.00th=[ 101],
| 70.00th=[ 106], 80.00th=[ 121], 90.00th=[ 321], 95.00th=[ 330],
| 99.00th=[ 347], 99.50th=[ 347], 99.90th=[ 347], 99.95th=[ 347],
| 99.99th=[ 347]
bw ( KiB/s): min=86016, max=86016, per=17.20%, avg=86016.00, stdev= 0.00, samples=1
iops : min= 84, max= 84, avg=84.00, stdev= 0.00, samples=1
lat (msec) : 10=1.50%, 50=3.00%, 100=54.00%, 250=26.00%, 500=15.50%
cpu : usr=2.24%, sys=1.65%, ctx=423, majf=0, minf=12
IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=84.5%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.6%, 64=0.0%, >=64=0.0%
issued rwts: total=93,107,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
rndrw: (groupid=0, jobs=1): err= 0: pid=3083983: Wed Jun 19 19:54:32 2024
read: IOPS=112, BW=112MiB/s (118MB/s)(94.0MiB/838msec)
slat (usec): min=40, max=227, avg=70.24, stdev=28.36
clat (msec): min=21, max=328, avg=118.29, stdev=82.81
lat (msec): min=21, max=328, avg=118.36, stdev=82.81
clat percentiles (msec):
| 1.00th=[ 22], 5.00th=[ 50], 10.00th=[ 72], 20.00th=[ 75],
| 30.00th=[ 78], 40.00th=[ 94], 50.00th=[ 95], 60.00th=[ 99],
| 70.00th=[ 101], 80.00th=[ 116], 90.00th=[ 305], 95.00th=[ 326],
| 99.00th=[ 330], 99.50th=[ 330], 99.90th=[ 330], 99.95th=[ 330],
| 99.99th=[ 330]
bw ( KiB/s): min=67584, max=67584, per=14.72%, avg=67584.00, stdev= 0.00, samples=1
iops : min= 66, max= 66, avg=66.00, stdev= 0.00, samples=1
write: IOPS=126, BW=126MiB/s (133MB/s)(106MiB/838msec); 0 zone resets
slat (usec): min=134, max=227184, avg=7625.16, stdev=23603.92
clat (msec): min=21, max=350, avg=129.30, stdev=90.27
lat (msec): min=21, max=351, avg=136.93, stdev=92.07
clat percentiles (msec):
| 1.00th=[ 22], 5.00th=[ 50], 10.00th=[ 71], 20.00th=[ 75],
| 30.00th=[ 79], 40.00th=[ 95], 50.00th=[ 100], 60.00th=[ 102],
| 70.00th=[ 116], 80.00th=[ 127], 90.00th=[ 321], 95.00th=[ 326],
| 99.00th=[ 330], 99.50th=[ 351], 99.90th=[ 351], 99.95th=[ 351],
| 99.99th=[ 351]
bw ( KiB/s): min=79872, max=79872, per=15.97%, avg=79872.00, stdev= 0.00, samples=1
iops : min= 78, max= 78, avg=78.00, stdev= 0.00, samples=1
lat (msec) : 50=9.00%, 100=51.00%, 250=24.50%, 500=15.50%
cpu : usr=2.51%, sys=1.43%, ctx=420, majf=0, minf=11
IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=84.5%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.6%, 64=0.0%, >=64=0.0%
issued rwts: total=94,106,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
rndrw: (groupid=0, jobs=1): err= 0: pid=3083984: Wed Jun 19 19:54:32 2024
read: IOPS=113, BW=114MiB/s (120MB/s)(97.0MiB/851msec)
slat (usec): min=40, max=196, avg=71.47, stdev=27.59
clat (msec): min=14, max=305, avg=126.46, stdev=73.90
lat (msec): min=14, max=305, avg=126.53, stdev=73.90
clat percentiles (msec):
| 1.00th=[ 15], 5.00th=[ 36], 10.00th=[ 70], 20.00th=[ 87],
| 30.00th=[ 94], 40.00th=[ 99], 50.00th=[ 102], 60.00th=[ 104],
| 70.00th=[ 116], 80.00th=[ 144], 90.00th=[ 284], 95.00th=[ 300],
| 99.00th=[ 305], 99.50th=[ 305], 99.90th=[ 305], 99.95th=[ 305],
| 99.99th=[ 305]
bw ( KiB/s): min=63615, max=63615, per=13.85%, avg=63615.00, stdev= 0.00, samples=1
iops : min= 62, max= 62, avg=62.00, stdev= 0.00, samples=1
write: IOPS=121, BW=121MiB/s (127MB/s)(103MiB/851msec); 0 zone resets
slat (usec): min=129, max=223806, avg=8067.89, stdev=23735.10
clat (msec): min=12, max=324, avg=125.83, stdev=76.45
lat (msec): min=12, max=324, avg=133.90, stdev=78.36
clat percentiles (msec):
| 1.00th=[ 13], 5.00th=[ 37], 10.00th=[ 64], 20.00th=[ 86],
| 30.00th=[ 95], 40.00th=[ 97], 50.00th=[ 100], 60.00th=[ 103],
| 70.00th=[ 125], 80.00th=[ 144], 90.00th=[ 284], 95.00th=[ 296],
| 99.00th=[ 305], 99.50th=[ 326], 99.90th=[ 326], 99.95th=[ 326],
| 99.99th=[ 326]
bw ( KiB/s): min=73875, max=73875, per=14.77%, avg=73875.00, stdev= 0.00, samples=1
iops : min= 72, max= 72, avg=72.00, stdev= 0.00, samples=1
lat (msec) : 20=2.50%, 50=3.50%, 100=45.00%, 250=33.50%, 500=15.50%
cpu : usr=2.35%, sys=1.53%, ctx=403, majf=0, minf=11
IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=84.5%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.6%, 64=0.0%, >=64=0.0%
issued rwts: total=97,103,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
READ: bw=448MiB/s (470MB/s), 110MiB/s-116MiB/s (115MB/s-122MB/s), io=383MiB (402MB), run=838-854msec
WRITE: bw=488MiB/s (512MB/s), 118MiB/s-126MiB/s (124MB/s-133MB/s), io=417MiB (437MB), run=838-854msec
Disk stats (read/write):
md1: ios=1077/1229, merge=0/0, ticks=24588/47892, in_queue=72480, util=84.61%, aggrios=255/564, aggrmerge=0/9, aggrticks=4904/9921, aggrin_queue=15019, aggrutil=81.23%
sdh: ios=215/559, merge=0/15, ticks=4676/9293, in_queue=14170, util=75.38%
sdg: ios=297/559, merge=0/15, ticks=3573/8916, in_queue=12698, util=78.85%
sdf: ios=189/560, merge=0/3, ticks=2276/8956, in_queue=11403, util=72.69%
sde: ios=331/560, merge=0/3, ticks=8356/11660, in_queue=20215, util=81.23%
sdd: ios=245/574, merge=0/10, ticks=6327/10906, in_queue=17436, util=77.38%
sdc: ios=255/576, merge=0/8, ticks=4220/9800, in_queue=14197, util=75.07%
fio --rw=randrw --name=rndrw --bs=1024k --size=200M --direct=1 --filename=/hdd-storage/test --numjobs=1 --ioengine=libaio --iodepth=32
rndrw: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
fio-3.33
Starting 1 process
rndrw: Laying out IO file (1 file / 200MiB)
rndrw: (groupid=0, jobs=1): err= 0: pid=3719598: Thu Jun 20 12:33:53 2024
read: IOPS=529, BW=529MiB/s (555MB/s)(99.0MiB/187msec)
slat (usec): min=25, max=181, avg=49.10, stdev=29.47
clat (usec): min=2503, max=78039, avg=26422.71, stdev=16427.15
lat (usec): min=2639, max=78092, avg=26471.81, stdev=16422.63
clat percentiles (usec):
| 1.00th=[ 2507], 5.00th=[ 4752], 10.00th=[ 9241], 20.00th=[13304],
| 30.00th=[20841], 40.00th=[22152], 50.00th=[22676], 60.00th=[23725],
| 70.00th=[25560], 80.00th=[33817], 90.00th=[54789], 95.00th=[63177],
| 99.00th=[78119], 99.50th=[78119], 99.90th=[78119], 99.95th=[78119],
| 99.99th=[78119]
write: IOPS=540, BW=540MiB/s (566MB/s)(101MiB/187msec); 0 zone resets
slat (usec): min=64, max=44533, avg=1368.45, stdev=5894.53
clat (usec): min=2478, max=78699, avg=30648.41, stdev=17894.93
lat (usec): min=2624, max=78813, avg=32016.86, stdev=17503.11
clat percentiles (usec):
| 1.00th=[ 2540], 5.00th=[ 4293], 10.00th=[13304], 20.00th=[21890],
| 30.00th=[23200], 40.00th=[23462], 50.00th=[25560], 60.00th=[26346],
| 70.00th=[33817], 80.00th=[43254], 90.00th=[60556], 95.00th=[71828],
| 99.00th=[77071], 99.50th=[79168], 99.90th=[79168], 99.95th=[79168],
| 99.99th=[79168]
lat (msec) : 4=3.50%, 10=6.00%, 20=12.50%, 50=64.50%, 100=13.50%
cpu : usr=4.30%, sys=6.45%, ctx=234, majf=0, minf=11
IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=84.5%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.6%, 64=0.0%, >=64=0.0%
issued rwts: total=99,101,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
READ: bw=529MiB/s (555MB/s), 529MiB/s-529MiB/s (555MB/s-555MB/s), io=99.0MiB (104MB), run=187-187msec
WRITE: bw=540MiB/s (566MB/s), 540MiB/s-540MiB/s (566MB/s-566MB/s), io=101MiB (106MB), run=187-187msec
Disk stats (read/write):
md1: ios=323/326, merge=0/0, ticks=3084/5204, in_queue=8288, util=57.76%, aggrios=66/134, aggrmerge=0/0, aggrticks=648/1524, aggrin_queue=2172, aggrutil=52.89%
sdh: ios=36/144, merge=0/0, ticks=144/1264, in_queue=1409, util=39.67%
sdg: ios=90/144, merge=0/0, ticks=555/1675, in_queue=2230, util=40.77%
sdf: ios=28/138, merge=0/0, ticks=257/1086, in_queue=1343, util=37.47%
sde: ios=96/138, merge=0/0, ticks=905/1804, in_queue=2709, util=44.08%
sdd: ios=36/122, merge=0/0, ticks=229/1016, in_queue=1245, util=36.36%
sdc: ios=110/122, merge=0/0, ticks=1799/2300, in_queue=4100, util=52.89%
Exactly the same configuration, same hardware, but with PBS on a separate server the result is exactly the same. PVE and PBS are connected via 10 Gbit fiber.
So from these values I cannot see why the backup is so slow. What is definitely problematic is that in your setup the source and the target of the backup are the same. If you could change that, it should definitely be faster. Maybe you can run a test with a USB disk attached (USB 3, please). Then set up a datastore on it and run a test backup.
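The suggested USB test could be set up roughly like this. This is only a sketch: the device name /dev/sdX1, the mount point, and the datastore name are assumptions and must be adapted.

```shell
# Format and mount the USB 3 disk
# (replace /dev/sdX1 with the real device, this wipes it!)
mkfs.ext4 /dev/sdX1
mkdir -p /mnt/usb-backup
mount /dev/sdX1 /mnt/usb-backup

# Register a datastore on it with the PBS CLI
proxmox-backup-manager datastore create usb-test /mnt/usb-backup
```

Afterwards the datastore can be added as a backup target in PVE and a test backup run against it.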
Completely identical.
Well, comparing apples with oranges is of course pointless. A reference measurement would certainly not hurt.
Tell us a bit more about your new PBS. How is it set up? Hardware RAID controller, filesystem, ...?
It would also be interesting to see the CPU utilization on the source during a backup.
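To watch the CPU load on the source while a backup runs, something like this works; vmstat is just one option:

```shell
# Print CPU/memory statistics once per second while the backup runs
# (stop with Ctrl+C; the us/sy/wa columns show user, system and I/O wait)
vmstat 1
```

If I/O wait ("wa") is high while overall CPU usage stays low, the disks rather than the CPU are likely the bottleneck.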