Hi everyone
So, over the weekend I did some more testing on the server.
First, I disabled SmartPath on the HW RAID controller and enabled the controller cache.
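In case anyone wants to reproduce this: assuming an HPE Smart Array controller, both settings can also be changed from the OS with ssacli (the slot and logical drive numbers below are placeholders, not necessarily mine):
Code:
# Disable HPE SSD Smart Path for the logical drive
ssacli ctrl slot=0 logicaldrive 1 modify ssdsmartpath=disable

# Enable the array accelerator (controller cache) for the logical drive
ssacli ctrl slot=0 logicaldrive 1 modify arrayaccelerator=enable

# Check cache and capacitor status afterwards
ssacli ctrl slot=0 show detail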
Then I re-ran the test with the already-installed WD SSDs, and the numbers are now worlds better.
Certainly not at your top-tier level, but much better than before.
Previously the 99th percentile was at around 9000 ms; now it's down to about 33 ms (see the latency table below).
The RAID controller has a backup capacitor, so data loss in the event of a power failure should be limited.
There is also a UPS for the entire server.
SSDs on the HW controller
Code:
actual test time: 180.00s
thread count: 8
CPU | Usage | User | Kernel | Idle
----------------------------------------
0| 70.21%| 6.64%| 63.57%| 29.79%
1| 67.67%| 6.98%| 60.69%| 32.33%
2| 66.26%| 6.33%| 59.93%| 33.74%
3| 68.30%| 7.10%| 61.20%| 31.70%
4| 65.67%| 5.83%| 59.84%| 34.33%
5| 65.07%| 5.64%| 59.43%| 34.93%
6| 64.41%| 5.59%| 58.82%| 35.59%
7| 68.78%| 6.61%| 62.17%| 31.22%
----------------------------------------
avg.| 67.05%| 6.34%| 60.71%| 32.95%
Total IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 2311389184 | 564304 | 12.25 | 3135.02 | 19.897 | 40.619 | testfile.dat (10GiB)
1 | 2316943360 | 565660 | 12.28 | 3142.56 | 19.908 | 40.511 | testfile.dat (10GiB)
2 | 2325364736 | 567716 | 12.32 | 3153.98 | 19.881 | 40.710 | testfile.dat (10GiB)
3 | 2337681408 | 570723 | 12.39 | 3170.69 | 19.794 | 40.576 | testfile.dat (10GiB)
4 | 2363965440 | 577140 | 12.52 | 3206.34 | 19.616 | 40.350 | testfile.dat (10GiB)
5 | 2366550016 | 577771 | 12.54 | 3209.84 | 19.618 | 40.237 | testfile.dat (10GiB)
6 | 2374979584 | 579829 | 12.58 | 3221.27 | 19.573 | 40.174 | testfile.dat (10GiB)
7 | 2371223552 | 578912 | 12.56 | 3216.18 | 19.597 | 40.065 | testfile.dat (10GiB)
-----------------------------------------------------------------------------------------------------
total: 18768097280 | 4582055 | 99.44 | 25455.88 | 19.734 | 40.404
Read IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 1388630016 | 339021 | 7.36 | 1883.45 | 18.409 | 38.172 | testfile.dat (10GiB)
1 | 1392107520 | 339870 | 7.38 | 1888.17 | 18.347 | 36.527 | testfile.dat (10GiB)
2 | 1394880512 | 340547 | 7.39 | 1891.93 | 18.443 | 38.399 | testfile.dat (10GiB)
3 | 1401798656 | 342236 | 7.43 | 1901.31 | 18.305 | 38.415 | testfile.dat (10GiB)
4 | 1418280960 | 346260 | 7.51 | 1923.67 | 18.136 | 35.464 | testfile.dat (10GiB)
5 | 1419714560 | 346610 | 7.52 | 1925.61 | 18.136 | 38.207 | testfile.dat (10GiB)
6 | 1424769024 | 347844 | 7.55 | 1932.47 | 18.033 | 36.340 | testfile.dat (10GiB)
7 | 1420431360 | 346785 | 7.53 | 1926.58 | 18.165 | 37.968 | testfile.dat (10GiB)
-----------------------------------------------------------------------------------------------------
total: 11260612608 | 2749173 | 59.66 | 15273.20 | 18.246 | 37.448
Write IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 922759168 | 225283 | 4.89 | 1251.57 | 22.135 | 43.951 | testfile.dat (10GiB)
1 | 924835840 | 225790 | 4.90 | 1254.39 | 22.256 | 45.760 | testfile.dat (10GiB)
2 | 930484224 | 227169 | 4.93 | 1262.05 | 22.036 | 43.859 | testfile.dat (10GiB)
3 | 935882752 | 228487 | 4.96 | 1269.37 | 22.023 | 43.519 | testfile.dat (10GiB)
4 | 945684480 | 230880 | 5.01 | 1282.67 | 21.837 | 46.642 | testfile.dat (10GiB)
5 | 946835456 | 231161 | 5.02 | 1284.23 | 21.840 | 43.007 | testfile.dat (10GiB)
6 | 950210560 | 231985 | 5.03 | 1288.81 | 21.883 | 45.222 | testfile.dat (10GiB)
7 | 950792192 | 232127 | 5.04 | 1289.60 | 21.736 | 42.920 | testfile.dat (10GiB)
-----------------------------------------------------------------------------------------------------
total: 7507484672 | 1832882 | 39.78 | 10182.69 | 21.966 | 44.377
Total latency distribution:
%-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
min | 0.237 | 0.189 | 0.189
25th | 13.273 | 17.596 | 15.428
50th | 16.676 | 19.494 | 18.035
75th | 19.690 | 21.108 | 20.521
90th | 21.542 | 22.249 | 21.899
95th | 22.409 | 23.052 | 22.719
99th | 30.504 | 37.548 | 33.272
3-nines | 348.248 | 361.651 | 350.606
4-nines | 1304.714 | 2849.313 | 2839.102
5-nines | 2850.376 | 2850.477 | 2850.442
6-nines | 2851.338 | 2850.907 | 2851.070
7-nines | 2852.045 | 2851.000 | 2852.045
8-nines | 2852.045 | 2851.000 | 2852.045
9-nines | 2852.045 | 2851.000 | 2852.045
max | 2852.045 | 2851.000 | 2852.045
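For context, the output above is in diskspd format. A hypothetical invocation matching what's visible in the results (8 threads, 180 s runtime, 4K blocks at roughly a 60/40 read/write mix, a 10 GiB test file, latency capture enabled) would look something like the following; the queue depth of 64 is only a guess derived from IOPS x average latency:
Code:
diskspd.exe -c10G -b4K -d180 -t8 -o64 -w40 -r -Sh -L testfile.dat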
Next, I installed the two NVMe drives directly on the mainboard (with heatsinks) and created a ZFS RAID (mirror) in Proxmox.
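For reference, a minimal sketch of creating such a mirror from the shell (Proxmox can also do this via the GUI); the pool name and device paths are placeholders:
Code:
# Create a ZFS mirror across the two NVMe drives (ashift=12 for 4K sectors)
zpool create -o ashift=12 nvme-mirror mirror \
    /dev/disk/by-id/nvme-DRIVE1 /dev/disk/by-id/nvme-DRIVE2

# Verify pool layout and health
zpool status nvme-mirror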
In the subsequent test, however, performance drops off quite a bit in a direct comparison with the installed SSDs on the HW controller, at least if I'm interpreting this correctly.
They are enterprise NVMe drives, but without PLP according to the specs.
It's actually quite surprising that directly attached storage in this case delivers about 60% less throughput (37.85 vs. 99.44 MiB/s total) than drives behind a HW controller, even though on paper they should manage 6x the read and write rates; see the fio sketch after the results below for one way to check whether the missing PLP explains this.
NVMe directly on the mainboard
Code:
actual test time: 180.00s
thread count: 8
CPU | Usage | User | Kernel | Idle
----------------------------------------
0| 52.09%| 4.31%| 47.79%| 47.91%
1| 41.52%| 5.61%| 35.91%| 58.48%
2| 44.12%| 11.88%| 32.25%| 55.88%
3| 33.82%| 6.57%| 27.25%| 66.18%
4| 29.47%| 5.36%| 24.11%| 70.53%
5| 42.55%| 3.19%| 39.37%| 57.45%
6| 48.04%| 4.59%| 43.45%| 51.96%
7| 34.18%| 8.03%| 26.15%| 65.82%
----------------------------------------
avg.| 40.72%| 6.19%| 34.53%| 59.28%
Total IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 1879236608 | 458798 | 9.96 | 2548.88 | 12.955 | 106.924 | testfile.dat (10GiB)
1 | 989732864 | 241634 | 5.24 | 1342.41 | 47.533 | 1417.086 | testfile.dat (10GiB)
2 | 38584320 | 9420 | 0.20 | 52.33 | 103.502 | 478.564 | testfile.dat (10GiB)
3 | 351563776 | 85831 | 1.86 | 476.84 | 85.550 | 2674.254 | testfile.dat (10GiB)
4 | 281952256 | 68836 | 1.49 | 382.42 | 99.363 | 2913.338 | testfile.dat (10GiB)
5 | 1635368960 | 399260 | 8.66 | 2218.11 | 18.518 | 299.237 | testfile.dat (10GiB)
6 | 1636773888 | 399603 | 8.67 | 2220.02 | 25.666 | 620.102 | testfile.dat (10GiB)
7 | 330625024 | 80719 | 1.75 | 448.44 | 142.807 | 3876.762 | testfile.dat (10GiB)
-----------------------------------------------------------------------------------------------------
total: 7143837696 | 1744101 | 37.85 | 9689.45 | 35.413 | 1332.159
Read IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 1128914944 | 275614 | 5.98 | 1531.19 | 10.905 | 124.842 | testfile.dat (10GiB)
1 | 595714048 | 145438 | 3.16 | 807.99 | 47.558 | 1486.692 | testfile.dat (10GiB)
2 | 23146496 | 5651 | 0.12 | 31.39 | 104.101 | 587.213 | testfile.dat (10GiB)
3 | 211042304 | 51524 | 1.12 | 286.24 | 89.122 | 2782.634 | testfile.dat (10GiB)
4 | 168640512 | 41172 | 0.89 | 228.73 | 80.992 | 2568.111 | testfile.dat (10GiB)
5 | 980836352 | 239462 | 5.20 | 1330.34 | 16.707 | 335.418 | testfile.dat (10GiB)
6 | 981925888 | 239728 | 5.20 | 1331.82 | 23.169 | 625.237 | testfile.dat (10GiB)
7 | 197918720 | 48320 | 1.05 | 268.44 | 141.475 | 3903.848 | testfile.dat (10GiB)
-----------------------------------------------------------------------------------------------------
total: 4288139264 | 1046909 | 22.72 | 5816.16 | 33.267 | 1331.605
Write IO
thread | bytes | I/Os | MiB/s | I/O per s | AvgLat | LatStdDev | file
-----------------------------------------------------------------------------------------------------
0 | 750321664 | 183184 | 3.98 | 1017.69 | 16.041 | 71.893 | testfile.dat (10GiB)
1 | 394018816 | 96196 | 2.09 | 534.42 | 47.495 | 1304.818 | testfile.dat (10GiB)
2 | 15437824 | 3769 | 0.08 | 20.94 | 102.604 | 235.387 | testfile.dat (10GiB)
3 | 140521472 | 34307 | 0.74 | 190.59 | 80.185 | 2502.672 | testfile.dat (10GiB)
4 | 113311744 | 27664 | 0.60 | 153.69 | 126.703 | 3361.939 | testfile.dat (10GiB)
5 | 654532608 | 159798 | 3.47 | 887.77 | 21.231 | 234.780 | testfile.dat (10GiB)
6 | 654848000 | 159875 | 3.47 | 888.19 | 29.410 | 612.301 | testfile.dat (10GiB)
7 | 132706304 | 32399 | 0.70 | 179.99 | 144.794 | 3836.009 | testfile.dat (10GiB)
-----------------------------------------------------------------------------------------------------
total: 2855698432 | 697192 | 15.13 | 3873.29 | 38.635 | 1332.984
Total latency distribution:
%-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
min | 0.000 | 1.840 | 0.000
25th | 0.546 | 5.892 | 1.353
50th | 2.802 | 9.547 | 6.703
75th | 16.971 | 24.025 | 20.176
90th | 46.075 | 51.978 | 48.600
95th | 62.846 | 69.098 | 65.492
99th | 118.741 | 132.920 | 125.404
3-nines | 806.102 | 810.405 | 807.523
4-nines | 79215.547 | 83246.234 | 79215.547
5-nines | 159118.312 | 157061.344 | 158597.453
6-nines | 165560.000 | 167091.641 | 166580.719
7-nines | 166580.719 | 167091.641 | 167091.641
8-nines | 166580.719 | 167091.641 | 167091.641
9-nines | 166580.719 | 167091.641 | 167091.641
max | 166580.719 | 167091.641 | 167091.641
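If the missing PLP is the culprit, it should show up most clearly in synchronous writes: without power-loss protection the drives have to flush to NAND on every sync instead of acknowledging from their volatile cache, and ZFS generates plenty of sync writes for VM workloads. A minimal fio sketch to compare, run on the Proxmox host (the mountpoint /nvme-mirror is a placeholder):
Code:
# Sync-heavy case: fsync after every 4K write (worst case without PLP)
fio --name=sync-writes --filename=/nvme-mirror/fio-test --size=4G \
    --bs=4k --rw=randwrite --iodepth=1 --numjobs=1 \
    --fsync=1 --runtime=60 --time_based --group_reporting

# Same test without per-write fsync, for comparison
fio --name=async-writes --filename=/nvme-mirror/fio-test --size=4G \
    --bs=4k --rw=randwrite --iodepth=1 --numjobs=1 \
    --runtime=60 --time_based --group_reporting

A large gap between the two runs would point at sync-write handling (and thus PLP) rather than the drives' raw speed.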