Hardware and performance review.

Discussion in 'Proxmox VE (Deutsch)' started by franz78, Oct 11, 2018.

  1. franz78 (New Member)

    Hello community!

    I am currently setting up a server based on Proxmox. Its tasks are mostly limited to various labs.

    Hardware:

    MB: ASUS X99 WS/USB 3.1
    CPU: Xeon E5-2630 v4
    Net: Intel X550-T2
    RAM: 98GB ECC Kingston
    Disk: 6x Samsung SM863a SSD (240GB)
    SATA controller: Intel onboard, AHCI mode
    NVMe boot disk: Optane 16GB

    Current pool config:

    zpool create -f -o ashift=12 rpool raidz1 /dev/disk/by-id/Disk1 ... raidz1 /dev/disk/by-id/Disk3 ...

    pool: rpool
    state: ONLINE
    scan: scrub repaired 0B in 0h2m with 0 errors on Thu Oct 11 11:15:03 2018
    config:

    NAME                                 STATE     READ WRITE CKSUM
    rpool                                ONLINE       0     0     0
      raidz1-0                           ONLINE       0     0     0
        ata-SAMSUNG_MZ7KM240HMHQ-00005_  ONLINE       0     0     0
        ata-SAMSUNG_MZ7KM240HMHQ-00005_  ONLINE       0     0     0
        ata-SAMSUNG_MZ7KM240HMHQ-00005_  ONLINE       0     0     0
      raidz1-1                           ONLINE       0     0     0
        ata-SAMSUNG_MZ7KM240HMHQ-00005_  ONLINE       0     0     0
        ata-SAMSUNG_MZ7KM240HMHQ-00005_  ONLINE       0     0     0
        ata-SAMSUNG_MZ7KM240HMHQ-00005_  ONLINE       0     0     0
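
    If someone wants to double-check that ashift=12 actually ended up on both raidz1 vdevs, zdb can print it for the imported pool. A quick sketch (not from the original post; the grep just filters the relevant lines):

    Code:
    # one "ashift: 12" line should show up per vdev
    zdb -C rpool | grep ashift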


    Disk Settings:

    /etc/hdparm.conf

    /dev/sda {
    write_cache = off
    }
    /dev/sdb {
    write_cache = off
    }
    /dev/sdc {
    write_cache = off
    }
    /dev/sdd {
    write_cache = off
    }
    /dev/sde {
    write_cache = off
    }
    /dev/sdf {
    write_cache = off
    }


    ZFS Settings:

    /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_max=34359738368   # 32 GB
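
    The module option in zfs.conf only takes effect when the module is loaded; since this box boots from ZFS, the option presumably also has to land in the initramfs. A hedged sketch of applying and checking the 32GB limit (standard ZoL paths, not from the post):

    Code:
    update-initramfs -u                                          # so the option is picked up at boot
    echo 34359738368 > /sys/module/zfs/parameters/zfs_arc_max    # apply immediately at runtime
    cat /sys/module/zfs/parameters/zfs_arc_max                   # verify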

    zfs set compression=lz4 rpool
    zfs set atime=off rpool
    zfs set logbias=throughput rpool
    zfs set xattr=sa rpool
    zfs set sync=standard rpool
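
    To confirm the properties were really set (and are inherited by the child datasets), a simple zfs get works:

    Code:
    zfs get compression,atime,logbias,xattr,sync rpool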

    Dataset:
    zfs create rpool/SSD-vmdata-KVM
    zfs create rpool/SSD-vmdata-LXC
    zfs set canmount=off rpool/SSD-vmdata-KVM
    zfs set canmount=off rpool/SSD-vmdata-LXC
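
    For completeness: to actually use those two datasets from Proxmox, they would still have to be registered as ZFS storages. A sketch with made-up storage IDs (not part of the original config):

    Code:
    pvesm add zfspool SSD-vmdata-KVM --pool rpool/SSD-vmdata-KVM --content images
    pvesm add zfspool SSD-vmdata-LXC --pool rpool/SSD-vmdata-LXC --content rootdir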

    pveperf /rpool (with hdparm -W 0 /dev/sd[a-f], write cache off):
    CPU BOGOMIPS: 87942.00
    REGEX/SECOND: 2674683
    HD SIZE: 774.57 GB (rpool)
    FSYNCS/SECOND: 6673.97

    pveperf /rpool (with hdparm -W 1 /dev/sd[a-f], write cache on):
    CPU BOGOMIPS: 87942.00
    REGEX/SECOND: 2865395
    HD SIZE: 774.57 GB (rpool)
    FSYNCS/SECOND: 5178.79


    fio test: hdparm -W 0 /dev/sdb (write cache off)

    WRITE:
    Test Command: fio --filename=/dev/sdb --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test

    journal-test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
    fio-2.16
    Starting 1 process
    Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/146.8MB/0KB /s] [0/37.6K/0 iops] [eta 00m:00s]
    journal-test: (groupid=0, jobs=1): err= 0: pid=14492: Wed Oct 10 19:11:26 2018
    write: io=8744.6MB, bw=149229KB/s, iops=37307, runt= 60001msec
    clat (usec): min=24, max=9145, avg=26.38, stdev=11.21
    lat (usec): min=24, max=9145, avg=26.45, stdev=11.22
    clat percentiles (usec):
    | 1.00th=[ 25], 5.00th=[ 25], 10.00th=[ 25], 20.00th=[ 26],
    | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26],
    | 70.00th=[ 26], 80.00th=[ 26], 90.00th=[ 27], 95.00th=[ 27],
    | 99.00th=[ 37], 99.50th=[ 41], 99.90th=[ 61], 99.95th=[ 101],
    | 99.99th=[ 106]
    lat (usec) : 50=99.82%, 100=0.10%, 250=0.07%, 500=0.01%, 750=0.01%
    lat (usec) : 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%
    cpu : usr=3.46%, sys=13.25%, ctx=2238490, majf=0, minf=9
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=0/w=2238478/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1

    Run status group 0 (all jobs):
    WRITE: io=8744.6MB, aggrb=149229KB/s, minb=149229KB/s, maxb=149229KB/s, mint=60001msec, maxt=60001msec

    fio test: hdparm -W 1 /dev/sdb (write cache on)

    Test Command: fio --filename=/dev/sdb --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test
    journal-test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
    fio-2.16
    Starting 1 process
    Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/59456KB/0KB /s] [0/14.9K/0 iops] [eta 00m:00s]
    journal-test: (groupid=0, jobs=1): err= 0: pid=20931: Wed Oct 10 19:14:57 2018
    write: io=3448.3MB, bw=58850KB/s, iops=14712, runt= 60001msec
    clat (usec): min=62, max=4135, avg=67.52, stdev=15.30
    lat (usec): min=62, max=4135, avg=67.60, stdev=15.31
    clat percentiles (usec):
    | 1.00th=[ 64], 5.00th=[ 64], 10.00th=[ 65], 20.00th=[ 65],
    | 30.00th=[ 65], 40.00th=[ 66], 50.00th=[ 66], 60.00th=[ 66],
    | 70.00th=[ 67], 80.00th=[ 70], 90.00th=[ 73], 95.00th=[ 75],
    | 99.00th=[ 84], 99.50th=[ 90], 99.90th=[ 107], 99.95th=[ 117],
    | 99.99th=[ 167]
    lat (usec) : 100=99.80%, 250=0.19%, 500=0.01%, 750=0.01%, 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%
    cpu : usr=0.88%, sys=5.80%, ctx=1765523, majf=0, minf=10
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=0/w=882759/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1

    Run status group 0 (all jobs):
    WRITE: io=3448.3MB, aggrb=58849KB/s, minb=58849KB/s, maxb=58849KB/s, mint=60001msec, maxt=60001msec

    Disk stats (read/write):
    sdb: ios=134/1762781, merge=0/0, ticks=88/50208, in_queue=50196, util=83.52%

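    The zvol used in the tests below isn't shown being created; presumably it was something along these lines (the size is a guess, the "-8k" suffix suggests an 8K volblocksize):

    Code:
    zfs create -V 32G -o volblocksize=8k rpool/test-8k
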
    zvol test: hdparm -W 0 /dev/sd[a-f] (write cache off)

    fio --filename=/dev/zvol/rpool/test-8k --direct=1 --sync=1 --rw=write --bs=4k --numjobs=4 --iodepth=32 --runtime=60 --time_based --group_reporting --name=journal-test
    journal-test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=32
    ...
    fio-2.16
    Starting 4 processes
    Jobs: 4 (f=4): [W(4)] [10.0% done] [0KB/60304KB/0KB /s] [0/15.8K/0 iops] [eta 00m:54s]
    fio: terminating on signal 2

    journal-test: (groupid=0, jobs=4): err= 0: pid=26151: Wed Oct 10 19:52:04 2018
    write: io=401092KB, bw=60206KB/s, iops=15051, runt= 6662msec
    clat (usec): min=156, max=37649, avg=264.61, stdev=241.67
    lat (usec): min=156, max=37650, avg=264.76, stdev=241.67
    clat percentiles (usec):
    | 1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 249],
    | 30.00th=[ 255], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 266],
    | 70.00th=[ 266], 80.00th=[ 274], 90.00th=[ 278], 95.00th=[ 286],
    | 99.00th=[ 366], 99.50th=[ 446], 99.90th=[ 474], 99.95th=[ 524],
    | 99.99th=[ 956]
    lat (usec) : 250=20.86%, 500=79.07%, 750=0.04%, 1000=0.02%
    lat (msec) : 2=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
    cpu : usr=0.84%, sys=13.96%, ctx=234219, majf=0, minf=36
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=0/w=100273/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=32

    Run status group 0 (all jobs):
    WRITE: io=401092KB, aggrb=60205KB/s, minb=60205KB/s, maxb=60205KB/s, mint=6662msec, maxt=6662msec

    zvol test: hdparm -W 1 /dev/sd[a-f] (write cache on)

    Starting 4 processes
    Jobs: 4 (f=4): [W(4)] [100.0% done] [0KB/42612KB/0KB /s] [0/10.7K/0 iops] [eta 00m:00s]
    journal-test: (groupid=0, jobs=4): err= 0: pid=3201: Wed Oct 10 19:54:09 2018
    write: io=2544.9MB, bw=43432KB/s, iops=10857, runt= 60001msec
    clat (usec): min=156, max=36846, avg=367.08, stdev=299.37
    lat (usec): min=156, max=36846, avg=367.25, stdev=299.37
    clat percentiles (usec):
    | 1.00th=[ 314], 5.00th=[ 334], 10.00th=[ 338], 20.00th=[ 346],
    | 30.00th=[ 350], 40.00th=[ 354], 50.00th=[ 358], 60.00th=[ 362],
    | 70.00th=[ 366], 80.00th=[ 374], 90.00th=[ 386], 95.00th=[ 398],
    | 99.00th=[ 628], 99.50th=[ 684], 99.90th=[ 748], 99.95th=[ 828],
    | 99.99th=[ 9408]
    lat (usec) : 250=0.41%, 500=97.99%, 750=1.51%, 1000=0.06%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
    cpu : usr=0.57%, sys=10.72%, ctx=1530383, majf=0, minf=34
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=0/w=651485/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=1

    Run status group 0 (all jobs):
    WRITE: io=2544.9MB, aggrb=43431KB/s, minb=43431KB/s, maxb=43431KB/s, mint=60001msec, maxt=60001msec

    fio --filename=/dev/zvol/rpool/test-8k --direct=1 --sync=1 --rw=read --bs=4k --numjobs=4 --iodepth=32 --runtime=60 --time_based --group_reporting --name=journal-test
    journal-test: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=32
    ...
    fio-2.16
    Starting 4 processes
    Jobs: 4 (f=4): [R(4)] [100.0% done] [472.6MB/0KB/0KB /s] [121K/0/0 iops] [eta 00m:00s]
    journal-test: (groupid=0, jobs=4): err= 0: pid=29643: Thu Oct 11 14:16:43 2018
    read : io=33842MB, bw=577566KB/s, iops=144391, runt= 60001msec
    clat (usec): min=7, max=1985, avg=27.19, stdev= 4.49
    lat (usec): min=7, max=1985, avg=27.27, stdev= 4.49
    clat percentiles (usec):
    | 1.00th=[ 16], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 25],
    | 30.00th=[ 25], 40.00th=[ 26], 50.00th=[ 27], 60.00th=[ 27],
    | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 31], 95.00th=[ 34],
    | 99.00th=[ 41], 99.50th=[ 44], 99.90th=[ 50], 99.95th=[ 52],
    | 99.99th=[ 70]
    lat (usec) : 10=0.03%, 20=1.60%, 50=98.25%, 100=0.11%, 250=0.01%
    lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01%
    lat (msec) : 2=0.01%
    cpu : usr=4.56%, sys=20.50%, ctx=8663770, majf=0, minf=37
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued : total=r=8663640/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=32

    Run status group 0 (all jobs):
    READ: io=33842MB, aggrb=577566KB/s, minb=577566KB/s, maxb=577566KB/s, mint=60001msec, maxt=60001msec

    Now here are my two questions and one request!

    1. Why is performance significantly better with disk cache = off, and would there be any downside if I just leave it that way?
    2. How do experienced Proxmox users rate this performance? Does it match the setup, or am I only reaching a fraction of the possible performance?
    3. Do you perhaps have any further suggestions ;)

    Thanks for your help!
    Best regards, Franz
     
    #1 franz78, Oct 11, 2018
    Last edited: Oct 13, 2018
  2. franz78 (New Member)

    Good evening,

    does really nobody have an opinion on this?
    Question number 1 in particular would be really interesting.

    Best regards
     
    #2 franz78, Oct 13, 2018
    Last edited: Oct 13, 2018
  3. LnxBil (Well-Known Member)

    The downside is consistency :p

    In the old days [tm] (i.e. with spinning disks) you always switched the cache off, because you wanted to be sure that the data really got written persistently. With modern enterprise SSDs you no longer have to take care of that explicitly, because internally the cache is not as volatile as it used to be. What modern SSDs actually do there usually differs from product to product, so I can't tell you what is really going on inside.
     
  4. franz78 (New Member)

    Thanks for your answer, but what exactly do you mean by "the downside is consistency" - where would I lose it?

    If I switch the write cache off directly on the disk with hdparm -W0, I shouldn't really have a consistency problem.

    Or, in the case of the Samsung SM863a, would it mean that the PLP protection is gone?

    Questions over questions ;)

    Best regards
     
  5. fpausp (Member)

    #5 fpausp, Oct 15, 2018
    Last edited: Oct 15, 2018
  6. LnxBil (Well-Known Member)

    No, you don't lose it, you gain it.
    A cache is fundamentally there to increase performance and is counterproductive for consistency. With SSDs, however, my measurements also give me the impression that without the cache you get better throughput for random I/O - that is somewhat counter-intuitive, but it matches both your measurements and mine.
     
  7. LnxBil (Well-Known Member)

  8. franz78 (New Member)

    OK, so no flaw in my reasoning after all ;)
    Did you ever see a loss in your tests? I, for example, couldn't detect any.
    For me it is consistently a gain of 18-22%, and that is actually quite a lot.
     
  9. LnxBil (Well-Known Member)

    Here is a test of mine; it should be comparable though:

    With the disk cache switched off:

    Code:
    Job                                                |      R  E  A  D     |    W  R  I  T  E    |
    Description                                        |     IOPS |  Latency |     IOPS |  Latency |
    ================================================================================================
    8K Blocks, 1 Thread(s), random RW, 100 % Read      |    37192 |      215 |        0 |        0 |
    8K Blocks, 1 Thread(s), random RW, 95 % Read       |    28001 |      276 |     1473 |      174 |
    8K Blocks, 1 Thread(s), random RW, 50 % Read       |     7614 |      877 |     7592 |      171 |
    8K Blocks, 1 Thread(s), random RW, 20 % Read       |     2899 |     1808 |    11585 |      235 |
    8K Blocks, 1 Thread(s), random RW, 0 % Read        |        0 |        0 |    11812 |      674 |
    ------------------------------------------------------------------------------------------------
    8K Blocks, 2 Thread(s), random RW, 100 % Read      |    42924 |      372 |        0 |        0 |
    8K Blocks, 2 Thread(s), random RW, 95 % Read       |    33564 |      462 |     1774 |      277 |
    8K Blocks, 2 Thread(s), random RW, 50 % Read       |     9563 |     1433 |     9546 |      237 |
    8K Blocks, 2 Thread(s), random RW, 20 % Read       |     2952 |     2039 |    11816 |      842 |
    8K Blocks, 2 Thread(s), random RW, 0 % Read        |        0 |        0 |    10747 |     1486 |
    ------------------------------------------------------------------------------------------------
    8K Blocks, 4 Thread(s), random RW, 100 % Read      |    49839 |      642 |        0 |        0 |
    8K Blocks, 4 Thread(s), random RW, 95 % Read       |    34990 |      879 |     1867 |      655 |
    8K Blocks, 4 Thread(s), random RW, 50 % Read       |     8483 |     2434 |     8458 |     1339 |
    8K Blocks, 4 Thread(s), random RW, 20 % Read       |     1715 |     2682 |     6833 |     4006 |
    8K Blocks, 4 Thread(s), random RW, 0 % Read        |        0 |        0 |     5935 |     5388 |
    ------------------------------------------------------------------------------------------------
    8K Blocks, 8 Thread(s), random RW, 100 % Read      |    48714 |     1313 |        0 |        0 |
    8K Blocks, 8 Thread(s), random RW, 95 % Read       |    33870 |     1816 |     1826 |     1350 |
    8K Blocks, 8 Thread(s), random RW, 50 % Read       |     6839 |     3257 |     6828 |     6106 |
    8K Blocks, 8 Thread(s), random RW, 20 % Read       |     1816 |     3473 |     7258 |     7945 |
    8K Blocks, 8 Thread(s), random RW, 0 % Read        |        0 |        0 |     7370 |     8681 |
    ------------------------------------------------------------------------------------------------
    64K Blocks, 1 Thread, sequential RW, 100% Read     |     5644 |     1417 |        0 |        0 |
    64K Blocks, 1 Thread, sequential RW, 0% Read       |        0 |        0 |     2580 |     3075 |
    With the disk cache switched on:

    Code:
    Job                                                |      R  E  A  D     |    W  R  I  T  E    |
    Description                                        |     IOPS |  Latency |     IOPS |  Latency |
    ================================================================================================
    8K Blocks, 1 Thread(s), random RW, 100 % Read      |    37633 |      212 |        0 |        0 |
    8K Blocks, 1 Thread(s), random RW, 95 % Read       |    26631 |      291 |     1402 |      168 |
    8K Blocks, 1 Thread(s), random RW, 50 % Read       |     9296 |      687 |     9285 |      172 |
    8K Blocks, 1 Thread(s), random RW, 20 % Read       |     3960 |     1217 |    15814 |      199 |
    8K Blocks, 1 Thread(s), random RW, 0 % Read        |        0 |        0 |    18055 |      441 |
    ------------------------------------------------------------------------------------------------
    8K Blocks, 2 Thread(s), random RW, 100 % Read      |    42655 |      375 |        0 |        0 |
    8K Blocks, 2 Thread(s), random RW, 95 % Read       |    31081 |      500 |     1646 |      266 |
    8K Blocks, 2 Thread(s), random RW, 50 % Read       |    12299 |     1026 |    12266 |      273 |
    8K Blocks, 2 Thread(s), random RW, 20 % Read       |     4572 |     1769 |    18255 |      431 |
    8K Blocks, 2 Thread(s), random RW, 0 % Read        |        0 |        0 |    16287 |      980 |
    ------------------------------------------------------------------------------------------------
    8K Blocks, 4 Thread(s), random RW, 100 % Read      |    50317 |      636 |        0 |        0 |
    8K Blocks, 4 Thread(s), random RW, 95 % Read       |    38085 |      805 |     2029 |      647 |
    8K Blocks, 4 Thread(s), random RW, 50 % Read       |    14203 |     1577 |    14217 |      673 |
    8K Blocks, 4 Thread(s), random RW, 20 % Read       |     3721 |     2303 |    14857 |     1575 |
    8K Blocks, 4 Thread(s), random RW, 0 % Read        |        0 |        0 |    13667 |     2339 |
    ------------------------------------------------------------------------------------------------
    8K Blocks, 8 Thread(s), random RW, 100 % Read      |    48149 |     1328 |        0 |        0 |
    8K Blocks, 8 Thread(s), random RW, 95 % Read       |    38745 |     1583 |     2080 |     1259 |
    8K Blocks, 8 Thread(s), random RW, 50 % Read       |    13601 |     1994 |    13567 |     2715 |
    8K Blocks, 8 Thread(s), random RW, 20 % Read       |     3644 |     2605 |    14533 |     3748 |
    8K Blocks, 8 Thread(s), random RW, 0 % Read        |        0 |        0 |    13428 |     4763 |
    ------------------------------------------------------------------------------------------------
    64K Blocks, 1 Thread, sequential RW, 100% Read     |     5457 |     1466 |        0 |        0 |
    64K Blocks, 1 Thread, sequential RW, 0% Read       |        0 |        0 |     2576 |     3082 |
     
  10. franz78 (New Member)

    Cool, thanks!

    Which tool did you use and with which invocation? I'd like to run it myself for comparison.

    Is that a pure SSD pool, and what kind of disks are they?

    Thanks
    Best regards
     
  11. fpausp (Member)

    Thanks, I came across it by chance...
     
  12. LnxBil (Well-Known Member)

    fio (one run per line), randrw with the corresponding rwmixread, and the output parsed with a small tool of my own.

    Yes, those are two SSDs, 100 GB each, in a Fujitsu RX200 server (the test is a bit older by now). I don't have the exact specifications anymore, but they were Intel enterprise, read-intensive drives, i.e. a DWPD somewhere between 1 and 5.
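
    The exact fio call wasn't posted; based on that description, a single row such as "8K Blocks, 1 Thread(s), random RW, 95 % Read" would roughly correspond to something like this (my reconstruction - device path, job name and runtime are assumptions):

    Code:
    # rwmixread, bs and numjobs varied per row; /dev/sdX is a placeholder
    fio --name=randrw-8k-95r --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=randrw --rwmixread=95 --bs=8k --numjobs=1 --iodepth=1 \
        --runtime=60 --time_based --group_reporting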
     