TrueNAS Storage Plugin

Started work on a one-line installer.

[installer screenshot attached]

Currently available in the alpha branch:
https://github.com/WarlockSyno/TrueNAS-Proxmox-VE-Storage-Plugin/tree/alpha
That looks great. I have been installing and reinstalling this plugin for various trials, and the install steps can get tiresome, so it will be very nice to have it installable like this. I also like the idea of the health check tools and backups. I will give it a try this week in a multipath environment.

What do you expect the impact of Proxmox updates may have to the stability of the plugin from time to time?
 
I would suggest you create an apt repo as per the Proxmox documentation.
Definitely on the list of to-dos; I haven't had experience with publishing packages, though. But it's something to learn!
What do you expect the impact of Proxmox updates may have to the stability of the plugin from time to time?
I don't imagine there's much a Proxmox update could break unless they totally redo their APIs. But Proxmox is heavily used personally and professionally around here, so I'm always keeping an active lookout for big changes.
 
I have not had much luck with the one-line installer. I spun up a new Proxmox server to give it a go, but haven't gotten anywhere. I get the following with the wget link:
root@pve:/tmp# wget -qO- https://github.com/WarlockSyno/TrueNAS-Proxmox-VE-Storage-Plugin/blob/alpha/install.sh | bash
bash: line 7: syntax error near unexpected token `newline'
bash: line 7: `<!DOCTYPE html>'


and the curl link:
root@pve:/tmp# curl -sSL https://raw.githubusercontent.com/WarlockSyno/truenasplugin/main/install.sh | bash
bash: line 1: 404:: command not found
root@pve:/tmp#
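(For reference: the wget attempt above pulls the GitHub blob URL, which returns the HTML page rather than the script, and the curl attempt points at a different repo name and branch, hence the 404. Assuming install.sh sits at the root of the alpha branch of the repo linked earlier, a raw URL of the form below should fetch the actual file. This is a guess at the correct path, not a confirmed link.)

Code:
# Hypothetical raw URL, built from the repo name and the alpha branch linked above
wget -qO- https://raw.githubusercontent.com/WarlockSyno/TrueNAS-Proxmox-VE-Storage-Plugin/alpha/install.sh | bash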


I created the install.sh file in the /tmp directory from the source on the alpha branch, and was prompted with:
root@pve:/tmp# ./install.sh
Checking system requirements...
Missing required dependencies: jq

Please install missing dependencies:
apt-get update && apt-get install -y jq

Once I installed the dependency I got:

./install.sh
Checking system requirements...
System requirements satisfied

[ASCII art banner: "truenas plugin - For Proxmox VE"]

root@pve:/tmp#

and the script just exits, no prompts...

Just figured I would give you my experience... thanks.
 
I used AI to teach me how to build an apt repo using GitHub workflows. Feel free to have a look at my repo for help. Bear in mind that I do my build in a different repo, so you will have to look at both. Not sure I would do it that way again, but I was copying freenas-proxmox at the time.
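(For anyone curious, the simplest variant is a flat repo: build the .deb, generate a Packages index with dpkg-scanpackages, and serve the directory over HTTPS, e.g. via GitHub Pages. A rough sketch with placeholder file names; a real repo should also be GPG-signed:)

Code:
# Rough sketch of a flat apt repo (placeholder package name and paths)
apt-get install -y dpkg-dev
mkdir -p repo && cp truenasplugin_1.1.3_all.deb repo/
cd repo
dpkg-scanpackages --multiversion . > Packages
gzip -k -f Packages
# Clients would then add something like (unsigned, for testing only):
#   deb [trusted=yes] https://<user>.github.io/<repo>/ ./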
 
Still doing the current round of testing on my 1GbE network, but I did get some of the plugin refactored to support NVMe. It's a definite improvement in performance, even on the slower networking.
[benchmark screenshot attached]
 
I did some testing this morning on a new build of Proxmox connecting to TrueNAS SCALE. The disk pool is mirrored vdevs (2x4, eight 1 TB WD Black drives), connected via 10G multipathing. On an Ubuntu VM I am getting the following disk performance to the TrueNAS disk:
[benchmark screenshot attached]

Also, here is a test of the local M.2 NVMe on the Proxmox test server:
[benchmark screenshot attached]



And here is the fio test from the Proxmox test server:

parallel-read: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=16
...
fio-3.39
Starting 4 processes
parallel-read: Laying out IO file (1 file / 2048MiB)
Jobs: 4 (f=4): [R(4)][100.0%][r=22.0GiB/s][r=22.5k IOPS][eta 00m:00s]
parallel-read: (groupid=0, jobs=4): err= 0: pid=34466: Thu Oct 30 11:00:05 2025
read: IOPS=21.4k, BW=20.9GiB/s (22.5GB/s)(628GiB/30002msec)
slat (usec): min=59, max=1051, avg=185.40, stdev=11.42
clat (nsec): min=1244, max=5554.6k, avg=2797681.68, stdev=147086.98
lat (usec): min=189, max=5947, avg=2983.08, stdev=156.49
clat percentiles (usec):
| 1.00th=[ 2606], 5.00th=[ 2638], 10.00th=[ 2638], 20.00th=[ 2671],
| 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2802],
| 70.00th=[ 2835], 80.00th=[ 2900], 90.00th=[ 2999], 95.00th=[ 3064],
| 99.00th=[ 3294], 99.50th=[ 3392], 99.90th=[ 3556], 99.95th=[ 3654],
| 99.99th=[ 3818]
bw ( MiB/s): min=20312, max=22598, per=100.00%, avg=21455.31, stdev=172.38, samples=237
iops : min=20312, max=22598, avg=21455.30, stdev=172.38, samples=237
lat (usec) : 2=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
lat (msec) : 2=0.01%, 4=99.98%, 10=0.01%
cpu : usr=0.51%, sys=98.18%, ctx=1048802, majf=0, minf=16424
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=643496,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
READ: bw=20.9GiB/s (22.5GB/s), 20.9GiB/s-20.9GiB/s (22.5GB/s-22.5GB/s), io=628GiB (675GB), run=30002-30002msec
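(The exact fio invocation isn't shown, but the parameters visible in the output — 4 jobs, 1 MiB blocks, libaio, iodepth 16, a 2 GiB file per job, ~30 s runtime, group reporting — correspond roughly to a command like the one below. The directory is a placeholder, and this is a reconstruction, not the command actually used.)

Code:
# Approximate reconstruction of the sequential-read job shown above
# (directory is a placeholder; without --direct=1 results can reflect the page cache)
fio --name=parallel-read --rw=read --bs=1M --ioengine=libaio \
    --iodepth=16 --numjobs=4 --size=2G --runtime=30 --time_based \
    --directory=/mnt/test --group_reporting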
 
The alpha branch now has full NVMe-TCP integration and documentation. I'll be running a slew of tests to make sure it's stable. I'd appreciate it if anyone who can test it out gives some feedback! It'd be nice to know if the documentation is clear enough and whether there are any performance or functional gotchas you run into while using it!
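(If you want to sanity-check the NVMe/TCP side by hand while testing, the standard nvme-cli tools can confirm discovery and connected namespaces independently of the plugin. The portal address below is a placeholder; 4420 is just the usual NVMe/TCP port.)

Code:
apt-get install -y nvme-cli
# Discover subsystems exposed by the TrueNAS portal (placeholder address)
nvme discover -t tcp -a <truenas-portal-ip> -s 4420
# List connected controllers and namespaces once the plugin has attached a disk
nvme list-subsys
nvme list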
 
I have a new 3-node cluster of PVE 8.4.0 and a new TrueNAS 25.10.0.
I have followed all of the setup instructions and have a "storage" item available in PVE.
From the "tools" folder I ran health-check.sh and I get this:
=== TrueNAS Plugin Health Check ===
Storage: iscsi-vm

Plugin file: ✓ Installed v1.1.1
Storage configuration: ✓ Configured
Storage status: ✓ Active (17347807232% free)
Content type: ✓ images
TrueNAS API: ✓ Reachable on 172.16.8.1:443
Dataset: ✓ data/iscsi_data/iscsi-vm
Target IQN: ✓ iqn.2005-10.org.freenas.ctl:iscsi-vm
Discovery portal: ✓ 172.16.8.1:3260
./health-check.sh: line 190: [: 0
0: integer expression expected
iSCSI sessions: ⚠ No active sessions
Multipath: - Not enabled
Orphaned resources: ✓ No orphans found
PVE daemon: ✓ Running

=== Health Summary ===
Checks passed: 10/12
Status: WARNING (1 warning(s))

Is this the expected output: "iSCSI sessions: ⚠ No active sessions"?
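(You can check from the shell whether any iSCSI sessions or multipath maps actually exist. Sessions are presumably only established once a volume is attached to a VM, so an empty list on a fresh setup isn't necessarily a failure — that's an assumption about this plugin's behaviour, not a confirmed one. The portal address below is the one from the health-check output.)

Code:
# List active iSCSI sessions (no output / exit code 21 means none)
iscsiadm -m session
# Show targets advertised by the discovery portal
iscsiadm -m discovery -t sendtargets -p 172.16.8.1:3260
# Show multipath maps, if multipath is enabled
multipath -ll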
 
When I run "dev-truenas-plugin-full-function-test.sh" I get a test VM without a disk...
root@dlk0entpve801:~/TrueNAS-Proxmox-VE-Storage-Plugin/tools# cat test-results-20251104-123524.log
╔════════════════════════════════════════════════════════════════════╗
║ TrueNAS Plugin Comprehensive Test Suite v1.1 ║
╚════════════════════════════════════════════════════════════════════╝

[INFO] Configuration:
[INFO] Storage ID: tnscale
[INFO] Node: dlk0entpve801
[INFO] VMID Range: 9001-9031
[INFO] Test Sizes: 1 10 32 100 GB
[INFO] Log File: test-results-20251104-123524.log
[INFO] Cluster Mode: YES (target: dlk0entpve802)
[INFO] Backup Store: NOT SET (backup tests will be skipped)

════════════════════════════════════════════════════════════════════
PHASE 1: Pre-flight Cleanup
════════════════════════════════════════════════════════════════════

[INFO] Pre-flight cleanup: checking for orphaned resources in VMID range 9001-9031 (cluster-wide)...
[INFO] Querying cluster resources...
[INFO] Querying storage for all disks...
[OK] No VMs found in range
[OK] No orphaned resources found

════════════════════════════════════════════════════════════════════
PHASE 2: Disk Allocation Tests
════════════════════════════════════════════════════════════════════

[1] Testing: Allocate 1GB disk (VMID 9001)
root@dlk0entpve801:~/TrueNAS-Proxmox-VE-Storage-Plugin/tools#
When I try adding a new disk to this VM I get an error: "Error: unexpected status"
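(A couple of things that may help narrow this down: check that the storage is active, try a manual allocation, and watch the daemon log while doing it. The storage ID and VMID below are the ones from the test output above; the empty volume name lets the plugin pick one, though naming behaviour can vary by plugin.)

Code:
# Confirm the storage is active, then try a manual allocation
pvesm status
pvesm alloc tnscale 9001 '' 1G
# Watch the PVE daemon log for the underlying error
journalctl -u pvedaemon -f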
 

Hey @jt_telrite - I think I found the issue; try updating to the latest version available in the alpha branch.

Made some significant changes to how the plugin handles actions that require lots of API calls. Here's the changelog:

Markdown (GitHub flavored):
## Version 1.1.3 (November 5, 2025)

###  **Major Performance Improvements**

#### **List Performance - N+1 Query Pattern Elimination**
- **Dramatic speed improvements for storage listing operations** - Up to 7.5x faster for large deployments
  - **10 volumes**: 2.3s → 1.7s (1.4x faster, 28% reduction)
  - **50 volumes**: 6.7s → 1.8s (3.7x faster, 73% reduction)
  - **100 volumes**: 18.2s → 2.4s (7.5x faster, 87% reduction)
  - **Per-volume cost**: 182ms → 24ms (87% reduction)
  - **Extrapolated 1000 volumes**: ~182s (3min) → ~24s (8x improvement)
- **Root cause**: `list_images` was making individual `_tn_dataset_get()` API calls for each volume (O(n) API requests)
- **Solution**: Implemented batch dataset fetching with single `pool.dataset.query` API call
  - Fetches all child datasets at once with TrueNAS query filter
  - Builds O(1) hash lookup table for dataset metadata
  - Falls back to individual API calls if batch fetch fails
- **Impact**:
  - Small deployments (10 volumes): Modest improvement due to batch fetch overhead
  - Large deployments (100+ volumes): Dramatic improvement as N+1 elimination fully realized
  - API efficiency: Changed from O(n) API calls to O(1) API call
  - Web UI responsiveness: Storage views load 7.5x faster for large environments
  - Reduced TrueNAS API load: 87% fewer API calls during list operations

#### **iSCSI Snapshot Deletion Optimization**
- **Brought iSCSI to parity with NVMe recursive deletion** - Consistent ~3 second deletion regardless of snapshot count
  - Previously: Sequential snapshot deletion loop (50+ API calls for volumes with many snapshots)
  - Now: Single recursive deletion (`recursive => true` flag) deletes all snapshots atomically
  - Matches NVMe transport behavior (already optimized)
  - Eliminates 50+ API calls for volumes with 50+ snapshots

### ✨ **Code Quality Improvements**

#### **Normalizer Utility Extraction**
- **Eliminated duplicate code across codebase** - Extracted `_normalize_value()` utility function
  - Removed 8 duplicate normalizer closures implementing identical logic
  - Single source of truth for TrueNAS API value normalization
  - Handles mixed response formats: scalars, hash with parsed/raw fields, undefined values
  - Bug fixes now apply consistently across all call sites
  - Reduced codebase by ~50 lines of duplicate code

#### **Performance Constants Documentation**
- **Documented timing parameters with rationale** - Defined 7 named constants for timeouts and delays
  - `UDEV_SETTLE_TIMEOUT_US` (250ms) - udev settle grace period
  - `DEVICE_READY_TIMEOUT_US` (100ms) - device availability check
  - `DEVICE_RESCAN_DELAY_US` (150ms) - device rescan stabilization
  - `DEVICE_SETTLE_DELAY_S` (1s) - post-connection/logout stabilization
  - `JOB_POLL_DELAY_S` (1s) - job status polling interval
  - `SNAPSHOT_DELETE_TIMEOUT_S` (15s) - snapshot deletion job timeout
  - `DATASET_DELETE_TIMEOUT_S` (20s) - dataset deletion job timeout
- **Impact**: Self-documenting code, easier performance tuning, prevents arbitrary value changes

###  **Technical Details**

**Modified functions**:
- `_list_images_iscsi()` (lines 3529-3592) - Batch dataset fetching with hash lookup
- `_list_images_nvme()` (lines 3650-3707) - Batch dataset fetching with hash lookup
- `_free_image_iscsi()` - Changed to recursive deletion (matches NVMe behavior)
- `_normalize_value()` (lines 35-44) - New utility function for API response normalization

**Performance testing**:
- Benchmark script created for automated testing with 10/50/100 volumes
- Baseline measurements established before optimization
- Post-optimization measurements confirmed 7.5x improvement for 100 volumes
- All tests validated on TrueNAS SCALE 25.10.0 with NVMe/TCP transport

###  **Real-World Impact**

| Deployment Size | Before | After | Time Saved | Speedup |
|-----------------|--------|-------|------------|---------|
| Small (10 VMs) | 2.3s | 1.7s | 0.6s | 1.4x |
| Medium (50 VMs) | 6.7s | 1.8s | 4.9s | 3.7x |
| Large (100 VMs) | 18.2s | 2.4s | 15.8s | 7.5x |
| Enterprise (1000 VMs) | ~182s (3min) | ~24s | ~158s (2.6min) | ~8x |

**User experience improvements**:
- Proxmox Web UI storage view refreshes 7.5x faster for large deployments
- Reduced risk of timeouts in large environments
- Lower API load on TrueNAS servers (87% fewer API calls)
- Better responsiveness during storage operations

---
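(If you want to see the list-performance change on your own environment, timing a full content listing before and after the upgrade is a reasonable proxy for what the web UI does. The storage ID below is an example; use the one from your storage.cfg.)

Code:
# Time a full content listing of the plugin-backed storage
time pvesm list truenas-storage
# Or via the API path the web UI uses
time pvesh get /nodes/$(hostname)/storage/truenas-storage/content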
 
I've added FIO benchmarking to the installer diagnostics menu.

Can someone else using NVMe/TCP on Proxmox (doesn't have to be with TrueNAS) show what some of their FIO benchmarks look like? I've done a back-to-back test using iSCSI and then NVMe/TCP on the exact same storage pool on TrueNAS, and the read performance is quite okay, but the write performance is god awful, even in comparison to iSCSI.

iSCSI:
Code:
FIO installation:              ✓ fio-3.39
Storage configuration:         ✓ Valid (nvme-tcp mode)
Finding available VM ID:       ✓ Using VM ID 990
Allocating 10GB test volume:   ✓ tn-nvme:vol-fio-bench-1762485897-ns07f60098-5449-4d2e-812e-01ca586d78ca
Waiting for device (5s):       ✓ Ready
Detecting device path:         ✓ /dev/nvme3n20
Validating device is unused:   ✓ Device is safe to test

  Starting FIO benchmarks (30 tests, 25-30 minutes total)...

  Transport mode: nvme-tcp (testing QD=1, 16, 32, 64, 128)

  Sequential Read Bandwidth Tests: [1-5/30]
Queue Depth = 1:               ✓ 859.56 MB/s
Queue Depth = 16:              ✓ 3.26 GB/s
Queue Depth = 32:              ✓ 4.02 GB/s
Queue Depth = 64:              ✓ 3.91 GB/s
Queue Depth = 128:             ✓ 3.18 GB/s

  Sequential Write Bandwidth Tests: [6-10/30]
Queue Depth = 1:               ✓ 488.73 MB/s
Queue Depth = 16:              ✓ 363.00 MB/s
Queue Depth = 32:              ✓ 351.03 MB/s
Queue Depth = 64:              ✓ 354.49 MB/s
Queue Depth = 128:             ✓ 354.86 MB/s

  Random Read IOPS Tests: [11-15/30]
Queue Depth = 1:               ✓ 7,138 IOPS
Queue Depth = 16:              ✓ 66,523 IOPS
Queue Depth = 32:              ✓ 70,804 IOPS
Queue Depth = 64:              ✓ 68,579 IOPS
Queue Depth = 128:             ✓ 70,676 IOPS

  Random Write IOPS Tests: [16-20/30]
Queue Depth = 1:               ✓ 3,712 IOPS
Queue Depth = 16:              ✓ 3,592 IOPS
Queue Depth = 32:              ✓ 3,449 IOPS
Queue Depth = 64:              ✓ 3,422 IOPS
Queue Depth = 128:             ✓ 3,420 IOPS

  Random Read Latency Tests: [21-25/30]
Queue Depth = 1:               ✓ 127.05 µs
Queue Depth = 16:              ✓ 239.92 µs
Queue Depth = 32:              ✓ 450.72 µs
Queue Depth = 64:              ✓ 915.79 µs
Queue Depth = 128:             ✓ 1.88 ms

  Mixed 70/30 Workload Tests: [26-30/30]
Queue Depth = 1:               ✓ R: 4,615 / W: 1,974 IOPS
Queue Depth = 16:              ✓ R: 1,282 / W: 548 IOPS
Queue Depth = 32:              ✓ R: 2,193 / W: 932 IOPS
Queue Depth = 64:              ✓ R: 2,907 / W: 1,243 IOPS
Queue Depth = 128:             ✓ R: 835 / W: 358 IOPS

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  Benchmark Summary
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

  Total tests run: 30
  Completed: 30

  Top Performers:

  Sequential Read:                859.56 MB/s   (QD=1  )
  Sequential Write:               488.73 MB/s   (QD=1  )
  Random Read IOPS:               70,804 IOPS   (QD=32 )
  Random Write IOPS:               3,712 IOPS   (QD=1  )
  Lowest Latency:                     1.88 ms   (QD=128)


NVMe/TCP:
Code:
  FIO installation:              ✓ fio-3.39
  Storage configuration:         ✓ Valid (nvme-tcp mode)
  Finding available VM ID:       ✓ Using VM ID 990
  Allocating 10GB test volume:   ✓ tn-nvme:vol-fio-bench-1762485897-ns07f60098-5449-4d2e-812e-01ca586d78ca
  Waiting for device (5s):       ✓ Ready
  Detecting device path:         ✓ /dev/nvme3n20
  Validating device is unused:   ✓ Device is safe to test
 
    Starting FIO benchmarks (30 tests, 25-30 minutes total)...
 
    Transport mode: nvme-tcp (testing QD=1, 16, 32, 64, 128)
 
    Sequential Read Bandwidth Tests: [1-5/30]
  Queue Depth = 1:               ✓ 859.56 MB/s
  Queue Depth = 16:              ✓ 3.26 GB/s
  Queue Depth = 32:              ✓ 4.02 GB/s
  Queue Depth = 64:              ✓ 3.91 GB/s
  Queue Depth = 128:             ✓ 3.18 GB/s
 
    Sequential Write Bandwidth Tests: [6-10/30]
  Queue Depth = 1:               ✓ 488.73 MB/s
  Queue Depth = 16:              ✓ 363.00 MB/s
  Queue Depth = 32:              ✓ 351.03 MB/s
  Queue Depth = 64:              ✓ 354.49 MB/s
  Queue Depth = 128:             ✓ 354.86 MB/s
 
    Random Read IOPS Tests: [11-15/30]
  Queue Depth = 1:               ✓ 7,138 IOPS
  Queue Depth = 16:              ✓ 66,523 IOPS
  Queue Depth = 32:              ✓ 70,804 IOPS
  Queue Depth = 64:              ✓ 68,579 IOPS
  Queue Depth = 128:             ✓ 70,676 IOPS
 
    Random Write IOPS Tests: [16-20/30]
  Queue Depth = 1:               ✓ 3,712 IOPS
  Queue Depth = 16:              ✓ 3,592 IOPS
  Queue Depth = 32:              ✓ 3,449 IOPS
  Queue Depth = 64:              ✓ 3,422 IOPS
  Queue Depth = 128:             ✓ 3,420 IOPS
 
    Random Read Latency Tests: [21-25/30]
  Queue Depth = 1:               ✓ 127.05 µs
  Queue Depth = 16:              ✓ 239.92 µs
  Queue Depth = 32:              ✓ 450.72 µs
  Queue Depth = 64:              ✓ 915.79 µs
  Queue Depth = 128:             ✓ 1.88 ms
 
    Mixed 70/30 Workload Tests: [26-30/30]
  Queue Depth = 1:               ✓ R: 4,615 / W: 1,974 IOPS
  Queue Depth = 16:              ✓ R: 1,282 / W: 548 IOPS
  Queue Depth = 32:              ✓ R: 2,193 / W: 932 IOPS
  Queue Depth = 64:              ✓ R: 2,907 / W: 1,243 IOPS
  Queue Depth = 128:             ✓ R: 835 / W: 358 IOPS
 
  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
    Benchmark Summary
  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 
    Total tests run: 30
    Completed: 30
 
    Top Performers:
 
    Sequential Read:                859.56 MB/s   (QD=1  )
    Sequential Write:               488.73 MB/s   (QD=1  )
    Random Read IOPS:               70,804 IOPS   (QD=32 )
    Random Write IOPS:               3,712 IOPS   (QD=1  )
    Lowest Latency:                     1.88 ms   (QD=128)

The random write IOPS on iSCSI are 16x better. Mind you, both are set up to use multipath, the same pool, the same NICs, and the same Proxmox host for testing.
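(If anyone wants to produce a directly comparable number by hand, a rough approximation of the random-write job — 4k blocks, libaio, direct I/O, queue depth 16, against the attached test device shown above — is below. The plugin's benchmark script may use different parameters, so treat this as a stand-in, and only run it against a device with no data on it, since it writes to the raw device.)

Code:
# Approximate random-write IOPS test (destructive: writes directly to the device)
fio --name=randwrite-qd16 --filename=/dev/nvme3n20 --rw=randwrite \
    --bs=4k --ioengine=libaio --iodepth=16 --numjobs=1 --direct=1 \
    --runtime=30 --time_based --group_reporting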
 
I've added FIO benchmarking to the installer diagnostics menu.



Where is the installer diagnostics menu? I updated my test install with the alpha install script, but I don't see it...
 
Here are the results on my test setup, iSCSI first, then NVMe, same host, same storage (TrueNAS):



FIO Storage Benchmark

Running benchmark on storage: truenas-storage

FIO installation: ✓ fio-3.39
Storage configuration: ✓ Valid (iscsi mode)
Finding available VM ID: ✓ Using VM ID 990
Allocating 10GB test volume: ✓ truenas-storage:vol-fio-bench-1762536334-lun11
Waiting for device (5s): ✓ Ready
Detecting device path: ✓ /dev/mapper/mpatho
Validating device is unused: ✓ Device is safe to test

Starting FIO benchmarks (30 tests, 25-30 minutes total)...

Transport mode: iscsi (testing QD=1, 16, 32, 64, 128)

Sequential Read Bandwidth Tests: [1-5/30]
Queue Depth = 1: ✓ 571.32 MB/s
Queue Depth = 16: ✓ 2.11 GB/s
Queue Depth = 32: ✓ 2.10 GB/s
Queue Depth = 64: ✓ 2.11 GB/s
Queue Depth = 128: ✓ 2.14 GB/s

Sequential Write Bandwidth Tests: [6-10/30]
Queue Depth = 1: ✓ 499.40 MB/s
Queue Depth = 16: ✓ 347.63 MB/s
Queue Depth = 32: ✓ 329.68 MB/s
Queue Depth = 64: ✓ 337.36 MB/s
Queue Depth = 128: ✓ 319.98 MB/s

Random Read IOPS Tests: [11-15/30]
Queue Depth = 1: ✓ 4,854 IOPS
Queue Depth = 16: ✓ 57,736 IOPS
Queue Depth = 32: ✓ 84,720 IOPS
Queue Depth = 64: ✓ 103,969 IOPS
Queue Depth = 128: ✓ 105,377 IOPS

Random Write IOPS Tests: [16-20/30]
Queue Depth = 1: ✓ 3,736 IOPS
Queue Depth = 16: ✓ 3,197 IOPS
Queue Depth = 32: ✓ 3,131 IOPS
Queue Depth = 64: ✓ 3,109 IOPS
Queue Depth = 128: ✓ 3,131 IOPS

Random Read Latency Tests: [21-25/30]
Queue Depth = 1: ✓ 208.30 µs
Queue Depth = 16: ✓ 275.67 µs
Queue Depth = 32: ✓ 377.25 µs
Queue Depth = 64: ✓ 632.02 µs
Queue Depth = 128: ✓ 1.24 ms

Mixed 70/30 Workload Tests: [26-30/30]
Queue Depth = 1: ✓ R: 3,552 / W: 1,519 IOPS
Queue Depth = 16: ✓ R: 8,905 / W: 3,822 IOPS
Queue Depth = 32: ✓ R: 7,232 / W: 3,110 IOPS
Queue Depth = 64: ✓ R: 7,154 / W: 3,077 IOPS
Queue Depth = 128: ✓ R: 7,151 / W: 3,076 IOPS

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Benchmark Summary
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Total tests run: 30
Completed: 30

Top Performers:

Sequential Read: 2.14 GB/s (QD=128)
Sequential Write: 499.40 MB/s (QD=1 )
Random Read IOPS: 105,377 IOPS (QD=128)
Random Write IOPS: 3,736 IOPS (QD=1 )
Lowest Latency: 208.30 µs (QD=1 )

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━


However, when I run the test on the NVMe-configured storage I get an error:

FIO Storage Benchmark

Running benchmark on storage: truenas-nvme

FIO installation: ✓ fio-3.39
Storage configuration: ✓ Valid (nvme-tcp mode)
Finding available VM ID: ✓ Using VM ID 990
Allocating 10GB test volume: ✗ Allocation failed
Failed to allocate volume: file /etc/pve/storage.cfg line 30 (section 'truenas-nvme') - unable to parse value of 'transport_mode': unknown property type
file /etc/pve/storage.cfg line 31 (section 'truenas-nvme') - unable to parse value of 'subsystem_nqn': unknown property type
file /etc/pve/storage.cfg line 39 (skip section 'truenas-nvme'): missing value for required option 'target_iqn'
Use of uninitialized value $type in hash element at /usr/share/perl5/PVE/Storage/Plugin.pm line 579, <DATA> line 960.
Use of uninitialized value $type in string eq at /usr/share/perl5/PVE/Storage/Plugin.pm line 584, <DATA> line 960.
Use of uninitialized value $type in string eq at /usr/share/perl5/PVE/Storage/Plugin.pm line 584, <DATA> line 960.
Use of uninitialized value $type in string eq at /usr/share/perl5/PVE/Storage/Plugin.pm line 584, <DATA> line 960.
Use of uninitialized value $type in string eq at /usr/share/perl5/PVE/Storage/Plugin.pm line 584, <DATA> line 960.
Use of uninitialized value $type in string eq at /usr/share/perl5/PVE/Storage/Plugin.pm line 584, <DATA> line 960.
Use of uninitialized value $type in string eq at /usr/share/perl5/PVE/Storage/Plugin.pm line 584, <DATA> line 960.
Use of uninitialized value $type in string eq at /usr/share/perl5/PVE/Storage/Plugin.pm line 584, <DATA> line 960.
Use of uninitialized value $type in string eq at /usr/share/perl5/PVE/Storage/Plugin.pm line 584, <DATA> line 960.
Use of uninitialized value $type in string eq at /usr/share/perl5/PVE/Storage/Plugin.pm line 584, <DATA> line 960.
storage 'truenas-nvme' does not exist


Here is the current /etc/pve/storage.cfg
---------------------------------

dir: local
path /var/lib/vz
content iso,vztmpl,backup

lvmthin: local-lvm
thinpool data
vgname pve
content rootdir,images


truenasplugin: truenas-storage
api_host 192.168.239.54
api_key 1-3cJjvdbZKVRkegGzVcBTZLtbpb1Sj1qAaJe6oAoBAXh60jYx5srdeGqxLueYs1X5
dataset tank/proxmox
target_iqn iqn.2005-10.org.freenas.ctl:proxmox
api_insecure 1
shared 1
discovery_portal 192.168.239.54
portals 172.16.88.10:3260,172.16.99.10:3260
use_multipath 1
tn_sparse 1
content images
zvol_blocksize 128K



truenasplugin: truenas-nvme
api_host 172.16.88.10
api_key 1-3cJjvdbZKVRkegGzVcBTZLtbpb1Sj1qAaJe6oAoBAXh60jYx5srdeGqxLueYs1X5
transport_mode nvme-tcp
subsystem_nqn nqn.2011-06.com.truenas:uuid:32efe892-0016-47d5-995d-d82824d40832:proxmox-nvme
dataset tank/proxmox
discovery_portal 172.16.88.10:4420
portals 172.16.99.10:4420,172.16.88.10:4420
use_multipath 1
api_transport ws
api_insecure 1
content images
shared 1
 
However, when I run the test on the NVMe-configured storage I get an error:

FIO Storage Benchmark

Running benchmark on storage: truenas-nvme

Allocating 10GB test volume: ✗ Allocation failed
Failed to allocate volume: file /etc/pve/storage.cfg line 30 (section 'truenas-nvme') - unable to parse value of 'transport_mode': unknown property type
file /etc/pve/storage.cfg line 31 (section 'truenas-nvme') - unable to parse value of 'subsystem_nqn': unknown property type
file /etc/pve/storage.cfg line 39 (skip section 'truenas-nvme'): missing value for required option 'target_iqn'
storage 'truenas-nvme' does not exist

Do you have v1.1.3 of the plugin installed? Those errors look like it's using an older version that did not have NVMe support; the 'transport_mode' property was added in a newer version along with NVMe.
 
Do you have v1.1.3 of the plugin installed? Those errors look like it's using an older version that did not have NVMe support; the 'transport_mode' property was added in a newer version along with NVMe.
Apparently I was using version 1.0.6... I must have run the install script or something that reverted it. I will manually upgrade and retest.
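(A quick way to confirm which plugin version is actually loaded — the path and file name below assume the standard PVE custom-plugin location, which may differ for this project:)

Code:
# Assumed install path and file name (adjust if the project uses different ones)
grep -m1 -i 'version' /usr/share/perl5/PVE/Storage/Custom/TrueNASPlugin.pm
# The plugin's health-check script also prints the installed version ("Plugin file: ... vX.Y.Z")
./health-check.sh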
 