TrueNAS Storage Plugin

That's a good point. NVMe native multipathing seems to default to the "numa" I/O policy.
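For anyone who wants to check or flip this themselves, the native NVMe multipath I/O policy is exposed in sysfs. A quick sketch (the subsystem number below is illustrative, and the available policy names depend on your kernel):

Code:
  # Show the current policy per NVMe subsystem (usually "numa" by default)
  grep -H . /sys/class/nvme-subsystem/nvme-subsys*/iopolicy

  # Switch one subsystem to round-robin at runtime (does not survive a reboot)
  echo round-robin > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy

  # Persistent alternative via the module parameter
  echo "options nvme_core iopolicy=round-robin" > /etc/modprobe.d/nvme-iopolicy.conf
  # (may also need update-initramfs -u if nvme_core is loaded from the initramfs)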

Code:
  FIO Storage Benchmark

  Running benchmark on storage: tn-nvme

FIO installation:              ✓ fio-3.39
Storage configuration:         ✓ Valid (nvme-tcp mode)
Finding available VM ID:       ✓ Using VM ID 990
Allocating 10GB test volume:   ✓ tn-nvme:vol-fio-bench-1762630899-nsf29e7fb7-dea1-44d3-bae8-bed0a2dd9244
Waiting for device (5s):       ✓ Ready
Detecting device path:         ✓ /dev/nvme3n24
Validating device is unused:   ✓ Device is safe to test

  Starting FIO benchmarks (30 tests, 25-30 minutes total)...

  Transport mode: nvme-tcp (testing QD=1, 16, 32, 64, 128)

  Sequential Read Bandwidth Tests: [1-5/30]
Queue Depth = 1:               ✓ 761.42 MB/s
Queue Depth = 16:              ✓ 3.48 GB/s
Queue Depth = 32:              ✓ 4.13 GB/s
Queue Depth = 64:              ✓ 4.34 GB/s
Queue Depth = 128:             ✓ 4.07 GB/s

  Sequential Write Bandwidth Tests: [6-10/30]
Queue Depth = 1:               ✓ 507.10 MB/s
Queue Depth = 16:              ✓ 397.86 MB/s
Queue Depth = 32:              ✓ 380.13 MB/s
Queue Depth = 64:              ✓ 381.88 MB/s
Queue Depth = 128:             ✓ 375.21 MB/s

  Random Read IOPS Tests: [11-15/30]
Queue Depth = 1:               ✓ 6,095 IOPS
Queue Depth = 16:              ✓ 63,947 IOPS
Queue Depth = 32:              ✓ 67,264 IOPS
Queue Depth = 64:              ✓ 68,497 IOPS
Queue Depth = 128:             ✓ 67,753 IOPS

  Random Write IOPS Tests: [16-20/30]
Queue Depth = 1:               ✓ 3,808 IOPS
Queue Depth = 16:              ✓ 3,730 IOPS
Queue Depth = 32:              ✓ 3,571 IOPS
Queue Depth = 64:              ✓ 3,527 IOPS
Queue Depth = 128:             ✓ 3,549 IOPS

  Random Read Latency Tests: [21-25/30]
Queue Depth = 1:               ✓ 135.18 µs
Queue Depth = 16:              ✓ 249.72 µs
Queue Depth = 32:              ✓ 479.23 µs
Queue Depth = 64:              ✓ 954.86 µs
Queue Depth = 128:             ✓ 1.87 ms

  Mixed 70/30 Workload Tests: [26-30/30]
Queue Depth = 1:               ✓ R: 4,204 / W: 1,799 IOPS
Queue Depth = 16:              ✓ R: 9,917 / W: 4,260 IOPS
Queue Depth = 32:              ✓ R: 8,228 / W: 3,531 IOPS
Queue Depth = 64:              ✓ R: 8,300 / W: 3,562 IOPS
Queue Depth = 128:             ✓ R: 8,022 / W: 3,444 IOPS

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  Benchmark Summary
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

  Total tests run: 30
  Completed: 30

  Top Performers:

  Sequential Read:                  4.34 GB/s   (QD=64 )
  Sequential Write:               507.10 MB/s   (QD=1  )
  Random Read IOPS:               68,497 IOPS   (QD=64 )
  Random Write IOPS:               3,808 IOPS   (QD=1  )
  Lowest Latency:                  135.18 µs   (QD=1  )

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━



Here is with round-robin:
Code:
 FIO Storage Benchmark

  Running benchmark on storage: tn-nvme

FIO installation:              ✓ fio-3.39
Storage configuration:         ✓ Valid (nvme-tcp mode)
Finding available VM ID:       ✓ Using VM ID 990
Allocating 10GB test volume:   ✓ tn-nvme:vol-fio-bench-1762631789-ns6edcf3c3-724c-4c25-b12b-6b35d5543f1a
Waiting for device (5s):       ✓ Ready
Detecting device path:         ✓ /dev/nvme3n25
Validating device is unused:   ✓ Device is safe to test

  Starting FIO benchmarks (30 tests, 25-30 minutes total)...

  Transport mode: nvme-tcp (testing QD=1, 16, 32, 64, 128)

  Sequential Read Bandwidth Tests: [1-5/30]
Queue Depth = 1:               ✓ 528.17 MB/s
Queue Depth = 16:              ✓ 2.88 GB/s
Queue Depth = 32:              ✓ 1.86 GB/s
Queue Depth = 64:              ✓ 2.08 GB/s
Queue Depth = 128:             ✓ 2.73 GB/s

  Sequential Write Bandwidth Tests: [6-10/30]
Queue Depth = 1:               ✓ 360.35 MB/s
Queue Depth = 16:              ✓ 340.29 MB/s
Queue Depth = 32:              ✓ 377.58 MB/s
Queue Depth = 64:              ✓ 378.55 MB/s
Queue Depth = 128:             ✓ 352.34 MB/s

  Random Read IOPS Tests: [11-15/30]
Queue Depth = 1:               ✓ 5,613 IOPS
Queue Depth = 16:              ✓ 64,160 IOPS
Queue Depth = 32:              ✓ 66,288 IOPS
Queue Depth = 64:              ✓ 66,266 IOPS
Queue Depth = 128:             ✓ 68,716 IOPS

  Random Write IOPS Tests: [16-20/30]
Queue Depth = 1:               ✓ 2,738 IOPS
Queue Depth = 16:              ✓ 4,429 IOPS
Queue Depth = 32:              ✓ 4,448 IOPS
Queue Depth = 64:              ✓ 2,855 IOPS
Queue Depth = 128:             ✓ 2,804 IOPS

  Random Read Latency Tests: [21-25/30]
Queue Depth = 1:               ✓ 187.71 µs
Queue Depth = 16:              ✓ 251.39 µs
Queue Depth = 32:              ✓ 477.93 µs
Queue Depth = 64:              ✓ 977.68 µs
Queue Depth = 128:             ✓ 1.88 ms

  Mixed 70/30 Workload Tests: [26-30/30]
Queue Depth = 1:               ✓ R: 3,228 / W: 1,380 IOPS
Queue Depth = 16:              ✓ R: 11,856 / W: 5,097 IOPS
Queue Depth = 32:              ✓ R: 10,289 / W: 4,417 IOPS
Queue Depth = 64:              ✓ R: 10,191 / W: 4,377 IOPS
Queue Depth = 128:             ✓ R: 10,558 / W: 4,534 IOPS

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  Benchmark Summary
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

  Total tests run: 30
  Completed: 30

  Top Performers:

  Sequential Read:                  2.88 GB/s   (QD=16 )
  Sequential Write:               378.55 MB/s   (QD=64 )
  Random Read IOPS:               68,716 IOPS   (QD=128)
  Random Write IOPS:               4,448 IOPS   (QD=32 )
  Lowest Latency:                  187.71 µs   (QD=1  )

So overall, it's worse with round-robin. The only thing that's a tiny bit better is the write IOPS.

I'll see if there's some simple tuning that needs to be done on PVE.
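A few things worth a look on the PVE side, sketched below; the NIC check is generic, and the nvme-cli flags are only shown for orientation since the plugin issues the connect itself (portal/NQN are placeholders):

Code:
  # Storage NICs: MTU/jumbo frames often matter for NVMe/TCP and iSCSI on 10G
  ip -br link show

  # Knobs nvme-cli exposes for nvme-tcp connections
  nvme connect -t tcp -a <portal-ip> -s 4420 -n <subsystem-nqn> \
      --nr-io-queues=8 --queue-size=128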
 
Added a secret option to the Diagnostics menu. Instead of choosing option 4 for FIO, enter 4+ and it'll run the test with multiple jobs (numjobs) enabled.
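For context, the multi-job mode just layers numjobs on top of the existing queue-depth sweep. By hand that looks roughly like this (illustrative flags, not the tool's exact job file; device path taken from the run below):

Code:
  # 4 parallel jobs at QD=32 each, 4k random read
  fio --name=randread-mj --filename=/dev/nvme3n27 --ioengine=libaio --direct=1 \
      --rw=randread --bs=4k --iodepth=32 --numjobs=4 --group_reporting \
      --time_based --runtime=30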

Here's what that looks like:

NVMe:
Code:
    FIO Storage Benchmark

  Running benchmark on storage: tn-nvme

FIO installation:              ✓ fio-3.39
Storage configuration:         ✓ Valid (nvme-tcp mode)
Finding available VM ID:       ✓ Using VM ID 990
Allocating 10GB test volume:   ✓ tn-nvme:vol-fio-bench-1762633721-ns934eedd0-4722-4172-a9e0-3c0b4182ec6e
Waiting for device (5s):       ✓ Ready
Detecting device path:         ✓ /dev/nvme3n27
Validating device is unused:   ✓ Device is safe to test

  Starting FIO benchmarks (90 tests, 75-90 minutes total)...

  Transport mode: nvme-tcp (testing QD=1, 16, 32, 64, 128)
  Extended mode: Testing each QD with numjobs=1, 4, 8

  Sequential Read Bandwidth Tests: [1-15/90]
Queue Depth = 1 (jobs=1):      ✓ 501.95 MB/s
Queue Depth = 16 (jobs=1):     ✓ 2.88 GB/s
Queue Depth = 32 (jobs=1):     ✓ 2.08 GB/s
Queue Depth = 64 (jobs=1):     ✓ 2.04 GB/s
Queue Depth = 128 (jobs=1):    ✓ 2.62 GB/s
Queue Depth = 1 (jobs=4):      ✓ 2.32 GB/s
Queue Depth = 16 (jobs=4):     ✓ 1.80 GB/s
Queue Depth = 32 (jobs=4):     ✓ 2.46 GB/s
Queue Depth = 64 (jobs=4):     ✓ 2.77 GB/s
Queue Depth = 128 (jobs=4):    ✓ 3.11 GB/s
Queue Depth = 1 (jobs=8):      ✓ 3.66 GB/s
Queue Depth = 16 (jobs=8):     ✓ 2.64 GB/s
Queue Depth = 32 (jobs=8):     ✓ 2.71 GB/s
Queue Depth = 64 (jobs=8):     ✓ 3.09 GB/s
Queue Depth = 128 (jobs=8):    ✓ 3.55 GB/s

  Sequential Write Bandwidth Tests: [16-30/90]
Queue Depth = 1 (jobs=1):      ✓ 489.00 MB/s
Queue Depth = 16 (jobs=1):     ✓ 408.85 MB/s
Queue Depth = 32 (jobs=1):     ✓ 380.53 MB/s
Queue Depth = 64 (jobs=1):     ✓ 370.86 MB/s
Queue Depth = 128 (jobs=1):    ✓ 382.67 MB/s
Queue Depth = 1 (jobs=4):      ✓ 1.40 GB/s
Queue Depth = 16 (jobs=4):     ✓ 908.16 MB/s
Queue Depth = 32 (jobs=4):     ✓ 785.22 MB/s
Queue Depth = 64 (jobs=4):     ✓ 809.60 MB/s
Queue Depth = 128 (jobs=4):    ✓ 863.03 MB/s
Queue Depth = 1 (jobs=8):      ✓ 2.60 GB/s
Queue Depth = 16 (jobs=8):     ✓ 1.55 GB/s
Queue Depth = 32 (jobs=8):     ✓ 964.94 MB/s
Queue Depth = 64 (jobs=8):     ✓ 1.09 GB/s
Queue Depth = 128 (jobs=8):    ✓ 921.51 MB/s

  Random Read IOPS Tests: [31-45/90]
Queue Depth = 1 (jobs=1):      ✓ 5,595 IOPS
Queue Depth = 16 (jobs=1):     ✓ 62,547 IOPS
Queue Depth = 32 (jobs=1):     ✓ 65,903 IOPS
Queue Depth = 64 (jobs=1):     ✓ 69,019 IOPS
Queue Depth = 128 (jobs=1):    ✓ 68,284 IOPS
Queue Depth = 1 (jobs=4):      ✓ 29,528 IOPS
Queue Depth = 16 (jobs=4):     ✓ 173,083 IOPS
Queue Depth = 32 (jobs=4):     ✓ 182,997 IOPS
Queue Depth = 64 (jobs=4):     ✓ 181,422 IOPS
Queue Depth = 128 (jobs=4):    ✓ 185,151 IOPS
Queue Depth = 1 (jobs=8):      ✓ 58,366 IOPS
Queue Depth = 16 (jobs=8):     ✓ 157,542 IOPS
Queue Depth = 32 (jobs=8):     ✓ 176,590 IOPS
Queue Depth = 64 (jobs=8):     ✓ 189,980 IOPS
Queue Depth = 128 (jobs=8):    ✓ 203,380 IOPS

  Random Write IOPS Tests: [46-60/90]
Queue Depth = 1 (jobs=1):      ✓ 3,777 IOPS
Queue Depth = 16 (jobs=1):     ✓ 2,373 IOPS
Queue Depth = 32 (jobs=1):     ✓ 3,268 IOPS
Queue Depth = 64 (jobs=1):     ✓ 3,628 IOPS
Queue Depth = 128 (jobs=1):    ✓ 3,655 IOPS
Queue Depth = 1 (jobs=4):      ✓ 3,649 IOPS
Queue Depth = 16 (jobs=4):     ✓ 3,667 IOPS
Queue Depth = 32 (jobs=4):     ✓ 3,615 IOPS
Queue Depth = 64 (jobs=4):     ✓ 3,278 IOPS
Queue Depth = 128 (jobs=4):    ✓ 3,571 IOPS
Queue Depth = 1 (jobs=8):      ✓ 3,568 IOPS
Queue Depth = 16 (jobs=8):     ✓ 3,532 IOPS
Queue Depth = 32 (jobs=8):     ✓ 3,546 IOPS
Queue Depth = 64 (jobs=8):     ✓ 3,425 IOPS
Queue Depth = 128 (jobs=8):    ✓ 3,430 IOPS

  Random Read Latency Tests: [61-75/90]
Queue Depth = 1 (jobs=1):      ✓ 180.82 µs
Queue Depth = 16 (jobs=1):     ✓ 252.90 µs
Queue Depth = 32 (jobs=1):     ✓ 480.20 µs
Queue Depth = 64 (jobs=1):     ✓ 952.86 µs
Queue Depth = 128 (jobs=1):    ✓ 1.87 ms
Queue Depth = 1 (jobs=4):      ✓ 137.91 µs
Queue Depth = 16 (jobs=4):     ✓ 370.55 µs
Queue Depth = 32 (jobs=4):     ✓ 735.75 µs
Queue Depth = 64 (jobs=4):     ✓ 1.41 ms
Queue Depth = 128 (jobs=4):    ✓ 2.68 ms
Queue Depth = 1 (jobs=8):      ✓ 134.96 µs
Queue Depth = 16 (jobs=8):     ✓ 810.71 µs
Queue Depth = 32 (jobs=8):     ✓ 1.46 ms
Queue Depth = 64 (jobs=8):     ✓ 2.67 ms
Queue Depth = 128 (jobs=8):    ✓ 5.03 ms

  Mixed 70/30 Workload Tests: [76-90/90]
Queue Depth = 1 (jobs=1):      ✓ R: 3,779 / W: 1,617 IOPS
Queue Depth = 16 (jobs=1):     ✓ R: 9,769 / W: 4,196 IOPS
Queue Depth = 32 (jobs=1):     ✓ R: 8,397 / W: 3,604 IOPS
Queue Depth = 64 (jobs=1):     ✓ R: 8,452 / W: 3,627 IOPS
Queue Depth = 128 (jobs=1):    ✓ R: 8,165 / W: 3,503 IOPS
Queue Depth = 1 (jobs=4):      ✓ R: 8,267 / W: 3,546 IOPS
Queue Depth = 16 (jobs=4):     ✓ R: 8,617 / W: 3,698 IOPS
Queue Depth = 32 (jobs=4):     ✓ R: 8,441 / W: 3,623 IOPS
Queue Depth = 64 (jobs=4):     ✓ R: 8,424 / W: 3,616 IOPS
Queue Depth = 128 (jobs=4):    ✓ R: 8,201 / W: 3,518 IOPS
Queue Depth = 1 (jobs=8):      ✓ R: 8,284 / W: 3,559 IOPS
Queue Depth = 16 (jobs=8):     ✓ R: 8,128 / W: 3,492 IOPS
Queue Depth = 32 (jobs=8):     ✓ R: 7,337 / W: 3,154 IOPS
Queue Depth = 64 (jobs=8):     ✓ R: 5,548 / W: 2,382 IOPS
Queue Depth = 128 (jobs=8):    ✓ R: 5,704 / W: 2,449 IOPS

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  Benchmark Summary
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

  Total tests run: 90
  Completed: 90


  Top Performers (numjobs=1):

  Sequential Read:                  2.88 GB/s   (QD=16 )
  Sequential Write:               489.00 MB/s   (QD=1  )
  Random Read IOPS:               69,019 IOPS   (QD=64 )
  Random Write IOPS:               3,777 IOPS   (QD=1  )
  Lowest Latency:                  180.82 µs   (QD=1  )

  Top Performers (numjobs=4):

  Sequential Read:                  3.11 GB/s   (QD=128)
  Sequential Write:                 1.40 GB/s   (QD=1  )
  Random Read IOPS:              185,151 IOPS   (QD=128)
  Random Write IOPS:               3,667 IOPS   (QD=16 )
  Lowest Latency:                  137.91 µs   (QD=1  )

  Top Performers (numjobs=8):

  Sequential Read:                  3.66 GB/s   (QD=1  )
  Sequential Write:                 2.60 GB/s   (QD=1  )
  Random Read IOPS:              203,380 IOPS   (QD=128)
  Random Write IOPS:               3,568 IOPS   (QD=1  )
  Lowest Latency:                  134.96 µs   (QD=1  )

iSCSI:
Code:
 FIO Storage Benchmark

  Running benchmark on storage: tn-iscsi

FIO installation:              ✓ fio-3.39
Storage configuration:         ✓ Valid (iscsi mode)
Finding available VM ID:       ✓ Using VM ID 990
Allocating 10GB test volume:   ✓ tn-iscsi:vol-fio-bench-1762636114-lun7
Waiting for device (5s):       ✓ Ready
Detecting device path:         ✓ /dev/mapper/mpathh
Validating device is unused:   ✓ Device is safe to test

  Starting FIO benchmarks (90 tests, 75-90 minutes total)...

  Transport mode: iscsi (testing QD=1, 16, 32, 64, 128)
  Extended mode: Testing each QD with numjobs=1, 4, 8

  Sequential Read Bandwidth Tests: [1-15/90]
Queue Depth = 1 (jobs=1):      ✓ 704.06 MB/s
Queue Depth = 16 (jobs=1):     ✓ 2.30 GB/s
Queue Depth = 32 (jobs=1):     ✓ 1.94 GB/s
Queue Depth = 64 (jobs=1):     ✓ 896.26 MB/s
Queue Depth = 128 (jobs=1):    ✓ 926.56 MB/s
Queue Depth = 1 (jobs=4):      ✓ 1.85 GB/s
Queue Depth = 16 (jobs=4):     ✓ 935.37 MB/s
Queue Depth = 32 (jobs=4):     ✓ 1.08 GB/s
Queue Depth = 64 (jobs=4):     ✓ 931.36 MB/s
Queue Depth = 128 (jobs=4):    ✓ 900.10 MB/s
Queue Depth = 1 (jobs=8):      ✓ 1.51 GB/s
Queue Depth = 16 (jobs=8):     ✓ 1.00 GB/s
Queue Depth = 32 (jobs=8):     ✓ 942.34 MB/s
Queue Depth = 64 (jobs=8):     ✓ 858.03 MB/s
Queue Depth = 128 (jobs=8):    ✓ 822.88 MB/s

  Sequential Write Bandwidth Tests: [16-30/90]
Queue Depth = 1 (jobs=1):      ✓ 510.87 MB/s
Queue Depth = 16 (jobs=1):     ✓ 353.82 MB/s
Queue Depth = 32 (jobs=1):     ✓ 340.20 MB/s
Queue Depth = 64 (jobs=1):     ✓ 352.51 MB/s
Queue Depth = 128 (jobs=1):    ✓ 352.99 MB/s
Queue Depth = 1 (jobs=4):      ✓ 1.03 GB/s
Queue Depth = 16 (jobs=4):     ✓ 1.45 GB/s
Queue Depth = 32 (jobs=4):     ✓ 1.29 GB/s
Queue Depth = 64 (jobs=4):     ✓ 1.26 GB/s
Queue Depth = 128 (jobs=4):    ✓ 1.00 GB/s
Queue Depth = 1 (jobs=8):      ✓ 1.35 GB/s
Queue Depth = 16 (jobs=8):     ✓ 1.96 GB/s
Queue Depth = 32 (jobs=8):     ✓ 1.84 GB/s
Queue Depth = 64 (jobs=8):     ✓ 1.96 GB/s
Queue Depth = 128 (jobs=8):    ✓ 2.25 GB/s

  Random Read IOPS Tests: [31-45/90]
Queue Depth = 1 (jobs=1):      ✓ 7,432 IOPS
Queue Depth = 16 (jobs=1):     ✓ 88,313 IOPS
Queue Depth = 32 (jobs=1):     ✓ 80,694 IOPS
Queue Depth = 64 (jobs=1):     ✓ 78,315 IOPS
Queue Depth = 128 (jobs=1):    ✓ 75,902 IOPS
Queue Depth = 1 (jobs=4):      ✓ 36,698 IOPS
Queue Depth = 16 (jobs=4):     ✓ 98,826 IOPS
Queue Depth = 32 (jobs=4):     ✓ 94,963 IOPS
Queue Depth = 64 (jobs=4):     ✓ 99,861 IOPS
Queue Depth = 128 (jobs=4):    ✓ 95,931 IOPS
Queue Depth = 1 (jobs=8):      ✓ 60,905 IOPS
Queue Depth = 16 (jobs=8):     ✓ 96,722 IOPS
Queue Depth = 32 (jobs=8):     ✓ 97,192 IOPS
Queue Depth = 64 (jobs=8):     ✓ 95,937 IOPS
Queue Depth = 128 (jobs=8):    ✓ 97,126 IOPS

  Random Write IOPS Tests: [46-60/90]
Queue Depth = 1 (jobs=1):      ✓ 3,910 IOPS
Queue Depth = 16 (jobs=1):     ✓ 3,620 IOPS
Queue Depth = 32 (jobs=1):     ✓ 3,473 IOPS
Queue Depth = 64 (jobs=1):     ✓ 3,513 IOPS
Queue Depth = 128 (jobs=1):    ✓ 3,546 IOPS
Queue Depth = 1 (jobs=4):      ✓ 3,524 IOPS
Queue Depth = 16 (jobs=4):     ✓ 3,465 IOPS
Queue Depth = 32 (jobs=4):     ✓ 2,981 IOPS
Queue Depth = 64 (jobs=4):     ✓ 2,976 IOPS
Queue Depth = 128 (jobs=4):    ✓ 2,950 IOPS
Queue Depth = 1 (jobs=8):      ✓ 3,522 IOPS
Queue Depth = 16 (jobs=8):     ✓ 3,132 IOPS
Queue Depth = 32 (jobs=8):     ✓ 2,670 IOPS
Queue Depth = 64 (jobs=8):     ✓ 2,234 IOPS
Queue Depth = 128 (jobs=8):    ✓ 3,385 IOPS

  Random Read Latency Tests: [61-75/90]
Queue Depth = 1 (jobs=1):      ✓ 138.99 µs
Queue Depth = 16 (jobs=1):     ✓ 175.95 µs
Queue Depth = 32 (jobs=1):     ✓ 403.36 µs
Queue Depth = 64 (jobs=1):     ✓ 823.73 µs
Queue Depth = 128 (jobs=1):    ✓ 1.70 ms
Queue Depth = 1 (jobs=4):      ✓ 111.57 µs
Queue Depth = 16 (jobs=4):     ✓ 644.95 µs
Queue Depth = 32 (jobs=4):     ✓ 1.27 ms
Queue Depth = 64 (jobs=4):     ✓ 2.62 ms
Queue Depth = 128 (jobs=4):    ✓ 5.39 ms
Queue Depth = 1 (jobs=8):      ✓ 129.69 µs
Queue Depth = 16 (jobs=8):     ✓ 1.32 ms
Queue Depth = 32 (jobs=8):     ✓ 2.64 ms
Queue Depth = 64 (jobs=8):     ✓ 5.32 ms
Queue Depth = 128 (jobs=8):    ✓ 10.42 ms

  Mixed 70/30 Workload Tests: [76-90/90]
Queue Depth = 1 (jobs=1):      ✓ R: 5,009 / W: 2,142 IOPS
Queue Depth = 16 (jobs=1):     ✓ R: 5,353 / W: 2,291 IOPS
Queue Depth = 32 (jobs=1):     ✓ R: 5,851 / W: 2,510 IOPS
Queue Depth = 64 (jobs=1):     ✓ R: 6,946 / W: 2,987 IOPS
Queue Depth = 128 (jobs=1):    ✓ R: 6,906 / W: 2,970 IOPS
Queue Depth = 1 (jobs=4):      ✓ R: 4,193 / W: 1,809 IOPS
Queue Depth = 16 (jobs=4):     ✓ R: 7,077 / W: 3,044 IOPS
Queue Depth = 32 (jobs=4):     ✓ R: 8,349 / W: 3,582 IOPS
Queue Depth = 64 (jobs=4):     ✓ R: 8,272 / W: 3,549 IOPS
Queue Depth = 128 (jobs=4):    ✓ R: 7,657 / W: 3,287 IOPS
Queue Depth = 1 (jobs=8):      ✓ R: 5,870 / W: 2,523 IOPS
Queue Depth = 16 (jobs=8):     ✓ R: 7,922 / W: 3,403 IOPS
Queue Depth = 32 (jobs=8):     ✓ R: 6,900 / W: 2,969 IOPS
Queue Depth = 64 (jobs=8):     ✓ R: 7,133 / W: 3,067 IOPS
Queue Depth = 128 (jobs=8):    ✓ R: 6,811 / W: 2,929 IOPS

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  Benchmark Summary
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

  Total tests run: 90
  Completed: 90


  Top Performers (numjobs=1):

  Sequential Read:                  2.30 GB/s   (QD=16 )
  Sequential Write:               510.87 MB/s   (QD=1  )
  Random Read IOPS:               88,313 IOPS   (QD=16 )
  Random Write IOPS:               3,910 IOPS   (QD=1  )
  Lowest Latency:                  138.99 µs   (QD=1  )

  Top Performers (numjobs=4):

  Sequential Read:                  1.85 GB/s   (QD=1  )
  Sequential Write:                 1.45 GB/s   (QD=16 )
  Random Read IOPS:               99,861 IOPS   (QD=64 )
  Random Write IOPS:               3,524 IOPS   (QD=1  )
  Lowest Latency:                  111.57 µs   (QD=1  )

  Top Performers (numjobs=8):

  Sequential Read:                  1.51 GB/s   (QD=1  )
  Sequential Write:                 2.25 GB/s   (QD=128)
  Random Read IOPS:               97,192 IOPS   (QD=32 )
  Random Write IOPS:               3,522 IOPS   (QD=1  )
  Lowest Latency:                  129.69 µs   (QD=1  )

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Press Enter to return to diagnostics menu...

So write IOPS are just bad no matter what... Hmm.
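Before blaming the transport it's worth ruling out the ZFS write path on the TrueNAS side, since sync handling and volblocksize usually dominate small random writes over the network. A sketch (dataset name is illustrative, use the one backing the storage):

Code:
  # On the TrueNAS shell: sync policy and block size for the dataset and its zvols
  zfs get -r sync,logbias,volblocksize,compression flash/nvme-testing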
 
Here is the extended test on iSCSI:

FIO Storage Benchmark

Running benchmark on storage: truenas-storage

FIO installation: ✓ fio-3.39
Storage configuration: ✓ Valid (iscsi mode)
Finding available VM ID: ✓ Using VM ID 990
Allocating 10GB test volume: ✓ truenas-storage:vol-fio-bench-1762639514-lun6
Waiting for device (5s): ✓ Ready
Detecting device path: ✓ /dev/mapper/mpathq
Validating device is unused: ✓ Device is safe to test

Starting FIO benchmarks (90 tests, 75-90 minutes total)...

Transport mode: iscsi (testing QD=1, 16, 32, 64, 128)
Extended mode: Testing each QD with numjobs=1, 4, 8

Sequential Read Bandwidth Tests: [1-15/90]
Queue Depth = 1 (jobs=1): ✓ 622.02 MB/s
Queue Depth = 16 (jobs=1): ✓ 2.10 GB/s
Queue Depth = 32 (jobs=1): ✓ 2.10 GB/s
Queue Depth = 64 (jobs=1): ✓ 2.12 GB/s
Queue Depth = 128 (jobs=1): ✓ 2.12 GB/s
Queue Depth = 1 (jobs=4): ✓ 2.01 GB/s
Queue Depth = 16 (jobs=4): ✓ 2.10 GB/s
Queue Depth = 32 (jobs=4): ✓ 2.10 GB/s
Queue Depth = 64 (jobs=4): ✓ 2.13 GB/s
Queue Depth = 128 (jobs=4): ✓ 2.12 GB/s
Queue Depth = 1 (jobs=8): ✓ 2.08 GB/s
Queue Depth = 16 (jobs=8): ✓ 2.11 GB/s
Queue Depth = 32 (jobs=8): ✓ 2.10 GB/s
Queue Depth = 64 (jobs=8): ✓ 2.12 GB/s
Queue Depth = 128 (jobs=8): ✓ 2.13 GB/s

Sequential Write Bandwidth Tests: [16-30/90]
Queue Depth = 1 (jobs=1): ✓ 457.35 MB/s
Queue Depth = 16 (jobs=1): ✓ 295.18 MB/s
Queue Depth = 32 (jobs=1): ✓ 288.48 MB/s
Queue Depth = 64 (jobs=1): ✓ 317.24 MB/s
Queue Depth = 128 (jobs=1): ✓ 311.78 MB/s
Queue Depth = 1 (jobs=4): ✓ 1.13 GB/s
Queue Depth = 16 (jobs=4): ✓ 1.17 GB/s
Queue Depth = 32 (jobs=4): ✓ 1.16 GB/s
Queue Depth = 64 (jobs=4): ✓ 321.79 MB/s
Queue Depth = 128 (jobs=4): ✓ 1.22 GB/s
Queue Depth = 1 (jobs=8): ✓ 1.96 GB/s
Queue Depth = 16 (jobs=8): ✓ 2.02 GB/s
Queue Depth = 32 (jobs=8): ✓ 1.98 GB/s
Queue Depth = 64 (jobs=8): ✓ 1.94 GB/s
Queue Depth = 128 (jobs=8): ✓ 1.93 GB/s

Random Read IOPS Tests: [31-45/90]
Queue Depth = 1 (jobs=1): ✓ 4,998 IOPS
Queue Depth = 16 (jobs=1): ✓ 57,663 IOPS
Queue Depth = 32 (jobs=1): ✓ 83,757 IOPS
Queue Depth = 64 (jobs=1): ✓ 102,721 IOPS
Queue Depth = 128 (jobs=1): ✓ 99,815 IOPS
Queue Depth = 1 (jobs=4): ✓ 20,536 IOPS
Queue Depth = 16 (jobs=4): ✓ 105,625 IOPS
Queue Depth = 32 (jobs=4): ✓ 110,053 IOPS
Queue Depth = 64 (jobs=4): ✓ 104,814 IOPS
Queue Depth = 128 (jobs=4): ✓ 107,856 IOPS
Queue Depth = 1 (jobs=8): ✓ 35,082 IOPS
Queue Depth = 16 (jobs=8): ✓ 107,837 IOPS
Queue Depth = 32 (jobs=8): ✓ 103,159 IOPS
Queue Depth = 64 (jobs=8): ✓ 107,925 IOPS
Queue Depth = 128 (jobs=8): ✓ 107,315 IOPS

Random Write IOPS Tests: [46-60/90]
Queue Depth = 1 (jobs=1): ✓ 3,534 IOPS
Queue Depth = 16 (jobs=1): ✓ 3,124 IOPS
Queue Depth = 32 (jobs=1): ✓ 2,917 IOPS
Queue Depth = 64 (jobs=1): ✓ 2,934 IOPS
Queue Depth = 128 (jobs=1): ✓ 2,558 IOPS
Queue Depth = 1 (jobs=4): ✓ 2,975 IOPS
Queue Depth = 16 (jobs=4): ✓ 2,975 IOPS
Queue Depth = 32 (jobs=4): ✓ 2,960 IOPS
Queue Depth = 64 (jobs=4): ✓ 2,908 IOPS
Queue Depth = 128 (jobs=4): ✓ 2,880 IOPS
Queue Depth = 1 (jobs=8): ✓ 2,060 IOPS
Queue Depth = 16 (jobs=8): ✓ 2,860 IOPS
Queue Depth = 32 (jobs=8): ✓ 2,787 IOPS
Queue Depth = 64 (jobs=8): ✓ 2,750 IOPS
Queue Depth = 128 (jobs=8): ✓ 2,793 IOPS

Random Read Latency Tests: [61-75/90]
Queue Depth = 1 (jobs=1): ✓ 204.31 µs
Queue Depth = 16 (jobs=1): ✓ 280.21 µs
Queue Depth = 32 (jobs=1): ✓ 385.30 µs
Queue Depth = 64 (jobs=1): ✓ 643.48 µs
Queue Depth = 128 (jobs=1): ✓ 1.28 ms
Queue Depth = 1 (jobs=4): ✓ 197.80 µs
Queue Depth = 16 (jobs=4): ✓ 631.38 µs
Queue Depth = 32 (jobs=4): ✓ 1.23 ms
Queue Depth = 64 (jobs=4): ✓ 2.44 ms
Queue Depth = 128 (jobs=4): ✓ 5.03 ms
Queue Depth = 1 (jobs=8): ✓ 230.62 µs
Queue Depth = 16 (jobs=8): ✓ 1.39 ms
Queue Depth = 32 (jobs=8): ✓ 2.51 ms
Queue Depth = 64 (jobs=8): ✓ 4.96 ms
Queue Depth = 128 (jobs=8): ✓ 9.95 ms

Mixed 70/30 Workload Tests: [76-90/90]
Queue Depth = 1 (jobs=1): ✓ R: 3,605 / W: 1,543 IOPS
Queue Depth = 16 (jobs=1): ✓ R: 8,063 / W: 3,460 IOPS
Queue Depth = 32 (jobs=1): ✓ R: 6,483 / W: 2,783 IOPS
Queue Depth = 64 (jobs=1): ✓ R: 6,481 / W: 2,782 IOPS
Queue Depth = 128 (jobs=1): ✓ R: 6,381 / W: 2,737 IOPS
Queue Depth = 1 (jobs=4): ✓ R: 6,288 / W: 2,707 IOPS
Queue Depth = 16 (jobs=4): ✓ R: 6,202 / W: 2,670 IOPS
Queue Depth = 32 (jobs=4): ✓ R: 6,189 / W: 2,664 IOPS
Queue Depth = 64 (jobs=4): ✓ R: 6,197 / W: 2,667 IOPS
Queue Depth = 128 (jobs=4): ✓ R: 6,045 / W: 2,602 IOPS
Queue Depth = 1 (jobs=8): ✓ R: 5,964 / W: 2,563 IOPS
Queue Depth = 16 (jobs=8): ✓ R: 6,029 / W: 2,589 IOPS
Queue Depth = 32 (jobs=8): ✓ R: 6,065 / W: 2,605 IOPS
Queue Depth = 64 (jobs=8): ✓ R: 6,043 / W: 2,596 IOPS
Queue Depth = 128 (jobs=8): ✓ R: 5,979 / W: 2,568 IOPS

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Benchmark Summary
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Total tests run: 90
Completed: 90


Top Performers (numjobs=1):

Sequential Read: 2.12 GB/s (QD=64 )
Sequential Write: 457.35 MB/s (QD=1 )
Random Read IOPS: 102,721 IOPS (QD=64 )
Random Write IOPS: 3,534 IOPS (QD=1 )
Lowest Latency: 204.31 µs (QD=1 )

Top Performers (numjobs=4):

Sequential Read: 2.13 GB/s (QD=64 )
Sequential Write: 1.22 GB/s (QD=128)
Random Read IOPS: 110,053 IOPS (QD=32 )
Random Write IOPS: 2,975 IOPS (QD=1 )
Lowest Latency: 197.80 µs (QD=1 )

Top Performers (numjobs=8):

Sequential Read: 2.13 GB/s (QD=128)
Sequential Write: 2.02 GB/s (QD=16 )
Random Read IOPS: 107,925 IOPS (QD=64 )
Random Write IOPS: 2,860 IOPS (QD=16 )
Lowest Latency: 230.62 µs (QD=1 )

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

 
Here is the TrueNAS graphs of the NICS... showing round robin use of the 10g nics...
[screenshot: TrueNAS NIC throughput graphs showing round-robin traffic across both 10G NICs]
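The same spread can be cross-checked from the PVE side per path (mpathq is the multipath device from the iSCSI run above; the nvme command covers the NVMe/TCP storages):

Code:
  # iSCSI: one session per portal/NIC, plus the paths behind the multipath device
  iscsiadm -m session
  multipath -ll mpathq

  # NVMe/TCP: every controller path and its state
  nvme list-subsys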
 


Here is the extended NVME tests on my system:


FIO Storage Benchmark

Running benchmark on storage: truenas-nvme

FIO installation: ✓ fio-3.39
Storage configuration: ✓ Valid (nvme-tcp mode)
Finding available VM ID: ✓ Using VM ID 990
Allocating 10GB test volume: ✓ truenas-nvme:vol-fio-bench-1762642144-ns5b34979b-ec15-4e0a-9df6-282c831b2dbf
Waiting for device (5s): ✓ Ready
Detecting device path: ✓ /dev/nvme1n6
Validating device is unused: ✓ Device is safe to test

Starting FIO benchmarks (90 tests, 75-90 minutes total)...

Transport mode: nvme-tcp (testing QD=1, 16, 32, 64, 128)
Extended mode: Testing each QD with numjobs=1, 4, 8

Sequential Read Bandwidth Tests: [1-15/90]
Queue Depth = 1 (jobs=1): ✓ 521.87 MB/s
Queue Depth = 16 (jobs=1): ✓ 1.09 GB/s
Queue Depth = 32 (jobs=1): ✓ 1.10 GB/s
Queue Depth = 64 (jobs=1): ✓ 1.10 GB/s
Queue Depth = 128 (jobs=1): ✓ 1.10 GB/s
Queue Depth = 1 (jobs=4): ✓ 1.09 GB/s
Queue Depth = 16 (jobs=4): ✓ 1.10 GB/s
Queue Depth = 32 (jobs=4): ✓ 1.10 GB/s
Queue Depth = 64 (jobs=4): ✓ 1.10 GB/s
Queue Depth = 128 (jobs=4): ✓ 1.10 GB/s
Queue Depth = 1 (jobs=8): ✓ 1.10 GB/s
Queue Depth = 16 (jobs=8): ✓ 1.10 GB/s
Queue Depth = 32 (jobs=8): ✓ 1.10 GB/s
Queue Depth = 64 (jobs=8): ✓ 1.10 GB/s
Queue Depth = 128 (jobs=8): ✓ 1.10 GB/s

Sequential Write Bandwidth Tests: [16-30/90]
Queue Depth = 1 (jobs=1): ✓ 409.58 MB/s
Queue Depth = 16 (jobs=1): ✓ 283.23 MB/s
Queue Depth = 32 (jobs=1): ✓ 283.31 MB/s
Queue Depth = 64 (jobs=1): ✓ 288.06 MB/s
Queue Depth = 128 (jobs=1): ✓ 276.31 MB/s
Queue Depth = 1 (jobs=4): ✓ 502.87 MB/s
Queue Depth = 16 (jobs=4): ✓ 1.07 GB/s
Queue Depth = 32 (jobs=4): ✓ 0.98 GB/s
Queue Depth = 64 (jobs=4): ✓ 721.09 MB/s
Queue Depth = 128 (jobs=4): ✓ 691.33 MB/s
Queue Depth = 1 (jobs=8): ✓ 1.00 GB/s
Queue Depth = 16 (jobs=8): ✓ 1.09 GB/s
Queue Depth = 32 (jobs=8): ✓ 1.07 GB/s
Queue Depth = 64 (jobs=8): ✓ 808.90 MB/s
Queue Depth = 128 (jobs=8): ✓ 0.98 GB/s

Random Read IOPS Tests: [31-45/90]
Queue Depth = 1 (jobs=1): ✓ 5,867 IOPS
Queue Depth = 16 (jobs=1): ✓ 59,713 IOPS
Queue Depth = 32 (jobs=1): ✓ 82,456 IOPS
Queue Depth = 64 (jobs=1): ✓ 98,946 IOPS
Queue Depth = 128 (jobs=1): ✓ 121,065 IOPS
Queue Depth = 1 (jobs=4): ✓ 24,810 IOPS
Queue Depth = 16 (jobs=4): ✓ 243,454 IOPS
Queue Depth = 32 (jobs=4): ✓ 252,704 IOPS
Queue Depth = 64 (jobs=4): ✓ 253,784 IOPS
Queue Depth = 128 (jobs=4): ✓ 255,202 IOPS
Queue Depth = 1 (jobs=8): ✓ 50,759 IOPS
Queue Depth = 16 (jobs=8): ✓ 247,661 IOPS
Queue Depth = 32 (jobs=8): ✓ 248,588 IOPS
Queue Depth = 64 (jobs=8): ✓ 250,973 IOPS
Queue Depth = 128 (jobs=8): ✓ 252,309 IOPS

Random Write IOPS Tests: [46-60/90]
Queue Depth = 1 (jobs=1): ✓ 6,616 IOPS
Queue Depth = 16 (jobs=1): ✓ 24,346 IOPS
Queue Depth = 32 (jobs=1): ✓ 20,290 IOPS
Queue Depth = 64 (jobs=1): ✓ 19,154 IOPS
Queue Depth = 128 (jobs=1): ✓ 19,031 IOPS
Queue Depth = 1 (jobs=4): ✓ 19,315 IOPS
Queue Depth = 16 (jobs=4): ✓ 19,141 IOPS
Queue Depth = 32 (jobs=4): ✓ 18,847 IOPS
Queue Depth = 64 (jobs=4): ✓ 18,920 IOPS
Queue Depth = 128 (jobs=4): ✓ 18,812 IOPS
Queue Depth = 1 (jobs=8): ✓ 18,757 IOPS
Queue Depth = 16 (jobs=8): ✓ 18,878 IOPS
Queue Depth = 32 (jobs=8): ✓ 18,408 IOPS
Queue Depth = 64 (jobs=8): ✓ 18,077 IOPS
Queue Depth = 128 (jobs=8): ✓ 18,193 IOPS

Random Read Latency Tests: [61-75/90]
Queue Depth = 1 (jobs=1): ✓ 173.55 µs
Queue Depth = 16 (jobs=1): ✓ 267.77 µs
Queue Depth = 32 (jobs=1): ✓ 380.59 µs
Queue Depth = 64 (jobs=1): ✓ 648.40 µs
Queue Depth = 128 (jobs=1): ✓ 1.05 ms
Queue Depth = 1 (jobs=4): ✓ 163.74 µs
Queue Depth = 16 (jobs=4): ✓ 257.94 µs
Queue Depth = 32 (jobs=4): ✓ 463.08 µs
Queue Depth = 64 (jobs=4): ✓ 918.37 µs
Queue Depth = 128 (jobs=4): ✓ 1.82 ms
Queue Depth = 1 (jobs=8): ✓ 156.26 µs
Queue Depth = 16 (jobs=8): ✓ 460.97 µs
Queue Depth = 32 (jobs=8): ✓ 919.90 µs
Queue Depth = 64 (jobs=8): ✓ 1.83 ms
Queue Depth = 128 (jobs=8): ✓ 3.63 ms

Mixed 70/30 Workload Tests: [76-90/90]
Queue Depth = 1 (jobs=1): ✓ R: 4,197 / W: 1,796 IOPS
Queue Depth = 16 (jobs=1): ✓ R: 39,615 / W: 16,984 IOPS
Queue Depth = 32 (jobs=1): ✓ R: 50,881 / W: 21,827 IOPS
Queue Depth = 64 (jobs=1): ✓ R: 42,119 / W: 18,058 IOPS
Queue Depth = 128 (jobs=1): ✓ R: 41,718 / W: 17,883 IOPS
Queue Depth = 1 (jobs=4): ✓ R: 18,230 / W: 7,830 IOPS
Queue Depth = 16 (jobs=4): ✓ R: 50,783 / W: 21,847 IOPS
Queue Depth = 32 (jobs=4): ✓ R: 42,544 / W: 18,302 IOPS
Queue Depth = 64 (jobs=4): ✓ R: 41,178 / W: 17,715 IOPS
Queue Depth = 128 (jobs=4): ✓ R: 42,496 / W: 18,285 IOPS
Queue Depth = 1 (jobs=8): ✓ R: 32,645 / W: 13,993 IOPS
Queue Depth = 16 (jobs=8): ✓ R: 57,838 / W: 24,813 IOPS
Queue Depth = 32 (jobs=8): ✓ R: 51,365 / W: 22,030 IOPS
Queue Depth = 64 (jobs=8): ✓ R: 57,195 / W: 24,536 IOPS
Queue Depth = 128 (jobs=8): ✓ R: 52,737 / W: 22,619 IOPS

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Benchmark Summary
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Total tests run: 90
Completed: 90


Top Performers (numjobs=1):

Sequential Read: 1.10 GB/s (QD=32 )
Sequential Write: 409.58 MB/s (QD=1 )
Random Read IOPS: 121,065 IOPS (QD=128)
Random Write IOPS: 24,346 IOPS (QD=16 )
Lowest Latency: 173.55 µs (QD=1 )

Top Performers (numjobs=4):

Sequential Read: 1.10 GB/s (QD=16 )
Sequential Write: 1.07 GB/s (QD=16 )
Random Read IOPS: 255,202 IOPS (QD=128)
Random Write IOPS: 19,315 IOPS (QD=1 )
Lowest Latency: 163.74 µs (QD=1 )

Top Performers (numjobs=8):

Sequential Read: 1.10 GB/s (QD=1 )
Sequential Write: 1.09 GB/s (QD=16 )
Random Read IOPS: 252,309 IOPS (QD=128)
Random Write IOPS: 18,878 IOPS (QD=16 )
Lowest Latency: 156.26 µs (QD=1 )

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

 
@warlocksyno is there any reason why PVE 8.4.1 and TrueNAS 25.10.0 (Goldeye) would not support or work with your plugin?
Every time I try to create a VM (either with your test script or the web UI) I get this error.
I have completely removed the plugin from all of my PVE hosts and reinstalled it, both following the documentation and using the "install.sh" script.
This is my storage.cfg:
truenasplugin: dev-stor
api_host 172.16.8.1
api_key 3-KenjZNNxD0dv7XSuv4KqDeCyxcEfvZLyugeRE6BKcSlemTCWSe4WLKwJUyLMy5u3
dataset dev-stor/iscsi/dev-stor
target_iqn iqn.2005-10.org.freenas.ctl:dev-stor
api_insecure 1
shared 1
discovery_portal 172.16.8.1:3260
zvol_blocksize 16K
tn_sparse 1
use_multipath 0
content images

And my WebSocket config:
truenasplugin: dev-stor
api_host 172.16.8.1
api_key 3-KenjZNNxD0dv7XSuv4KqDeCyxcEfvZLyugeRE6BKcSlemTCWSe4WLKwJUyLMy5u3
api_transport ws
api_scheme wss
api_port 443
dataset dev-stor/iscsi/dev-stor
target_iqn iqn.2005-10.org.freenas.ctl:dev-stor
# api_insecure 1
shared 1
discovery_portal 172.16.8.1:3260
zvol_blocksize 128K
tn_sparse 1
# use_multipath 0
content images
vmstate_storage local

Running health check on storage: dev-stor

Plugin file: ✓ Installed v1.1.3
Storage configuration: ✓ Configured
Storage status: ✓ Active (16544.16GB / 33097.22GB used, 49.99%)
Content type: ✓ images
TrueNAS API: ✓ Reachable on 172.16.8.1:443
Dataset: ✓ dev-stor/iscsi/dev-stor
Target IQN: ✓ iqn.2005-10.org.freenas.ctl:dev-stor
Discovery portal: ✓ 172.16.8.1:3260
iSCSI sessions: ✓ 1 active session(s)
Multipath: - Not enabled
Orphaned resources: ✓ None found
PVE daemon: ✓ Running
 

I had a similar issue with PVE 9.x. The error indicates that you are not using a ZFS volume; are you using a dataset instead of a volume? I fought the setup quite a bit and don't have the exact steps written out, but I would look at the pathing from dataset to volume on your TrueNAS and ensure it is correct.
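A quick way to check that from the TrueNAS shell (path taken from the storage.cfg above); the plugin's dataset line should point at something of type filesystem, with the zvols created underneath it:

Code:
  # "filesystem" = plain dataset, "volume" = zvol
  zfs list -o name,type -r dev-stor/iscsi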
 
Dataset: ✓ dev-stor/iscsi/dev-stor
Based on the error and your config, I'd say it's probably something to do with the folder/dataset structure.

On TrueNAS you make a dataset on your pool, and that's what you point the plugin at. So pool/dataset, or you can have pool/dataset/dataset, but you probably have pool/dataset/zvol. You just want a plain dataset for the path.

For instance, here in my pool I have two datasets. In the datasets are the zVols that the plugin creates.

[screenshot: TrueNAS dataset tree showing two datasets, each containing the zvols the plugin created]

So in my config I have it set to:
dataset flash/iscsi-testing
or
dataset flash/nvme-testing
 
Name        Type           Status   Total        Used         Available    %
dev-stor    truenasplugin  active   28115551189  10758412928  17357138261  38.26%
dlk-pbs101  pbs            active   43651411200  20364494208  23286916992  46.65%
local       dir            active   236441600    128          236441472    0.00%
local-zfs   zfspool        active   236441648    96           236441552    0.00%
nfs_pool    nfs            active   6608261120   9535488      6598725632   0.14%
vm_pool     zfspool        active   4547641344   29467        4547611876   0.00%
root@dlk0entpve801:~#

truenasplugin: dev-stor
api_host 172.16.8.1
api_key 3-KenjZNNxD0dv7XSuv4KqDeCyxcEfvZLyugeRE6BKcSlemTCWSe4WLKwJUyLMy5u3
api_transport ws
api_scheme wss
api_port 443
dataset dev-stor/vm
target_iqn iqn.2005-10.org.freenas.ctl:vm
api_insecure 1
shared 1
discovery_portal 172.16.8.1:3260
zvol_blocksize 128K
tn_sparse 1
use_multipath 0
content images
vmstate_storage local
 


The setup I have in my TrueNAS testing is as follows:

I have found you must have a Pool -> Dataset -> ZFS Volume layout.

And the "dataset" line in the /etc/pve/storage.cfg file for my setup is:

dataset tank/proxmox
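If that dataset doesn't exist yet, creating it from the TrueNAS shell is a one-liner (name from the line above; it can just as well be created in the TrueNAS UI under Datasets):

Code:
  # Plain filesystem dataset for the plugin to put its zvols under
  zfs create tank/proxmox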


 
@jt_telrite I added a dataset validation check to the healthcheck tool, so it should now tell you if it's the correct dataset type.
If you run the latest install.sh from the Alpha branch it should have it. :)
Is there any way to programmatically check for a pool, dataset, and volume and, if they don't exist, offer to build them to streamline the install? The TrueNAS system has both a wizard and manual methods, but I have found that each creates the volumes slightly differently (I'm sure more through my lack of understanding of it all than anything) and they don't always work. And in my testing I have often had to create a ZFS volume of any size (100 MB, 1 GB, or 1 TB, it doesn't seem to matter), as it seems to only be a placeholder for the volumes that are auto-created when creating disks at the VM level.

I could be completely misunderstanding how this works, however, so if it's possible to explain it I would appreciate it.

Thank you!
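For what it's worth, the middleware can already be queried for this by hand, which is roughly what an automated check would do under the hood. A sketch using midclt on the TrueNAS host (method names are the standard middleware API ones; the dataset name is just an example):

Code:
  # List pools
  midclt call pool.query

  # Does the dataset exist, and is it a FILESYSTEM or a VOLUME?
  midclt call pool.dataset.query '[["name", "=", "tank/proxmox"]]'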
 
After your explanation, I was able to get much further... it's now making disks but can't mount them because there is no iSCSI session. I have had this issue and it's because of "orphaned" configurations in the iscsiadm database; it seems they don't automatically get updated when you change the TrueNAS configuration. I also had to associate the IQN with the new "Share" configuration...
P.S. Should I be working with the alpha branch or beta?
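For the stale iscsiadm entries, the initiator's node database can be listed and pruned by hand. Standard open-iscsi commands (IQN/portal taken from the earlier config):

Code:
  # What the initiator has recorded vs. what is actually logged in
  iscsiadm -m node
  iscsiadm -m session

  # Re-discover the portal, then delete a stale record if one is left over
  iscsiadm -m discovery -t sendtargets -p 172.16.8.1:3260
  iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:dev-stor -p 172.16.8.1:3260 -o delete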
 

I have had no issues with iSCSI with the main / beta / alpha branches. The alpha branch has the install scripts but I have done it manually many times in testing. One item of note is that on my tests the "iscsiadm -m session" does not show any sessions unless a VM is active and the disk is running. I don't know if this is due to my setup or not as I am using multipathing.

I have not run into any "orphaned" issues with basic iSCSI testing once I have it talking to TrueNAS, but I am using PVE 9.011 so there may be a difference there vs. 8.x.

I did have some issues in early testing, when I was using a slightly earlier version of TrueNAS, where I set up the API keys with root instead of truenas_admin (the default user). I had better success using the truenas_admin user when setting up the API key.
 
Health Check

Running health check on storage: dev-stor

Plugin file: ✓ Installed v1.1.3
Storage configuration: ✓ Configured
Storage status: ✓ Active (0.00GB / 16552.25GB used, 0.00%)
Content type: ✓ images
TrueNAS API: ✓ Reachable on 172.16.8.1:443
Dataset: ✓ dev-stor/vm
Target IQN: ✓ iqn.2005-10.org.freenas.ctl:vm
Discovery portal: ✓ 172.16.8.1:3260
iSCSI sessions: ✓ 1 active session(s)
Multipath: - Not enabled
Orphaned resources: ✓ None found
PVE daemon: ✓ Running

Health Summary:
Checks passed: 11/11 (1 not applicable)
Status: HEALTHY
 
Does anyone have experience with TrueNAS where, when you enlarge a disk in the Proxmox UI, the file system becomes read-only?
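When a resize flips a filesystem read-only it is usually because the kernel hit an I/O error while the device was being grown, so that is the first thing worth checking. Generic commands (the multipath line only applies if multipath is in use; mpathX is a placeholder):

Code:
  # Was the filesystem remounted read-only after an I/O error?
  dmesg | grep -iE 'i/o error|read-only'

  # Make sure the initiator and multipath layer picked up the new size
  iscsiadm -m session --rescan
  multipathd resize map mpathX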
 
Running benchmark on storage: dev-stor

FIO installation: ✓ fio-3.33
Storage configuration: ✓ Valid (iscsi mode)
Finding available VM ID: ✓ Using VM ID 990
Allocating 10GB test volume: ✓ dev-stor:vol-fio-bench-1763054400-lun1
Waiting for device (5s): ✓ Ready
Detecting device path: ✓ /dev/disk/by-path/ip-172.16.8.1:3260-iscsi-iqn.2005-10.org.freenas.ctl:vm-lun-1
Validating device is unused: ✓ Device is safe to test

Starting FIO benchmarks (30 tests, 25-30 minutes total)...

Transport mode: iscsi (testing QD=1, 16, 32, 64, 128)

Sequential Read Bandwidth Tests: [1-5/30]
Queue Depth = 1: ✓ 2.50 GB/s
Queue Depth = 16: ✓ 1.79 GB/s
Queue Depth = 32: ✓ 1.39 GB/s
Queue Depth = 64: ✓ 1.41 GB/s
Queue Depth = 128: ✓ 1.44 GB/s

Sequential Write Bandwidth Tests: [6-10/30]
Queue Depth = 1: ✓ 733.06 MB/s
Queue Depth = 16: ✓ 344.93 MB/s
Queue Depth = 32: ✓ 340.33 MB/s
Queue Depth = 64: ✓ 321.67 MB/s
Queue Depth = 128: ✓ 399.48 MB/s

Random Read IOPS Tests: [11-15/30]
Queue Depth = 1: ✓ 24,723 IOPS
Queue Depth = 16: ✓ 22,850 IOPS
Queue Depth = 32: ✓ 24,025 IOPS
Queue Depth = 64: ✓ 23,515 IOPS
Queue Depth = 128: ✓ 23,398 IOPS

Random Write IOPS Tests: [16-20/30]
Queue Depth = 1: ✓ 5,347 IOPS
Queue Depth = 16: ✓ 3,384 IOPS
Queue Depth = 32: ✓ 3,278 IOPS
Queue Depth = 64: ✓ 3,431 IOPS
Queue Depth = 128: ✓ 3,251 IOPS

Random Read Latency Tests: [21-25/30]
Queue Depth = 1: ✓ 38.02 µs
Queue Depth = 16: ✓ 752.98 µs
Queue Depth = 32: ✓ 1.32 ms
Queue Depth = 64: ✓ 2.63 ms
Queue Depth = 128: ✓ 5.86 ms

Mixed 70/30 Workload Tests: [26-30/30]
Queue Depth = 1: ✓ R: 10,545 / W: 4,529 IOPS
Queue Depth = 16: ✓ R: 7,229 / W: 3,109 IOPS
Queue Depth = 32: ✓ R: 7,251 / W: 3,118 IOPS
Queue Depth = 64: ✓ R: 7,715 / W: 3,312 IOPS
Queue Depth = 128: ✓ R: 7,227 / W: 3,108 IOPS

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Benchmark Summary
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Total tests run: 30
Completed: 30

Top Performers:

Sequential Read: 2.50 GB/s (QD=1 )
Sequential Write: 733.06 MB/s (QD=1 )
Random Read IOPS: 24,723 IOPS (QD=1 )
Random Write IOPS: 5,347 IOPS (QD=1 )
Lowest Latency: 38.02 µs (QD=1 )

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Press Enter to return to diagnostics menu...
 
@warlocksyno @curruscanis have either of you experienced the iSCSI session being dropped when deleting a VM from the PVE UI?

Uhh, yes! This should actually be fixed by the "weight" volume that should be created automatically. It keeps the iSCSI target alive on the TrueNAS side: TrueNAS will turn off its portal advertising when there's no volume being shared, so the plugin is supposed to create a "weight" volume that is always present to prevent that from happening. Let me know if you see that volume; if not, I will look into why it's not being created.
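To check for it, listing the zvols under the configured dataset is enough; the exact name the plugin uses is internal, so just look for the odd one out (dataset path from the config above):

Code:
  # On TrueNAS: every zvol currently under the plugin's dataset
  zfs list -t volume -r dev-stor/vm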

Is there any way to programmatically check for a pool, dataset, and volume and, if they don't exist, offer to build them to streamline the install? The TrueNAS system has both a wizard and manual methods, but I have found that each creates the volumes slightly differently (I'm sure more through my lack of understanding of it all than anything) and they don't always work. And in my testing I have often had to create a ZFS volume of any size (100 MB, 1 GB, or 1 TB, it doesn't seem to matter), as it seems to only be a placeholder for the volumes that are auto-created when creating disks at the VM level.

I could be completely misunderstanding how this works, however, so if it's possible to explain it I would appreciate it.

Thank you!

Yes! That's actually in the works at the moment. I'm updating the storage configuration tool to go step-by-step to pick the pool, create a dataset that's optimal for your setup (or put in manual settings), and then enable iSCSI/NVMe blah blah blah. That way it's basically as easy as configuring pools and then creating an API key in TrueNAS, then from that point on there isn't much of a reason to actually log into TrueNAS any more unless you're needing to do some really manual work.

After your explanation, I was able to get much further... it's now making disks but can't mount them because there is no iSCSI session. I have had this issue and it's because of "orphaned" configurations in the iscsiadm database; it seems they don't automatically get updated when you change the TrueNAS configuration. I also had to associate the IQN with the new "Share" configuration...
P.S. Should I be working with the alpha branch or beta?

Hmm, are you saying there's orphaned iSCSI targets that are trying to be logged into?

This should be the only thing you need to do in TrueNAS after setting up the dataset you want:
[screenshot: TrueNAS iSCSI target configuration with the Portal Group ID and Initiator Group ID set]

Once the target is created with the Portal Group ID and Initiator ID configured, the plugin should take care of the rest. If not, let me know.
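A quick sanity check from the PVE side that the plugin has really taken over (standard pvesm commands; storage name from the config above):

Code:
  # Storage is active and the plugin answers
  pvesm status
  # Volumes the plugin currently exposes on that storage
  pvesm list dev-stor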

And the alpha branch has the latest and greatest; the beta one will get more stable updates. For now I would use the alpha branch while troubleshooting.
Running benchmark on storage: dev-stor

Top Performers:

Sequential Read: 2.50 GB/s (QD=1 )
Sequential Write: 733.06 MB/s (QD=1 )
Random Read IOPS: 24,723 IOPS (QD=1 )
Random Write IOPS: 5,347 IOPS (QD=1 )
Lowest Latency: 38.02 µs (QD=1 )

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Press Enter to return to diagnostics menu...

That's interesting, you are getting somewhat okay performance on the writes. Do you have sync=off (sync=disabled) on your dataset?
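For reference, that's a per-dataset ZFS property inherited by the zvols, so it can be checked and changed in one place; valid values are standard, always, and disabled (disabled is what people usually mean by "off"). Dataset path is the one from the earlier config:

Code:
  # On TrueNAS: current sync policy for the dataset and its zvols
  zfs get -r sync dev-stor/vm

  # Fast but unsafe on power loss - acknowledged writes can be lost
  zfs set sync=disabled dev-stor/vm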