TrueNAS Storage Plugin

@warlocksyno @curruscanis Have either of you noticed any irregularity viewing the disks in the UI (PVE > Disks in the menu)? I get communication failures, but not all hosts are affected, e.g. Connection refused (595), Connection timed out (596).
 

Can you check to see if your pvestatd service in Proxmox is crashing? It's probably that. I have a branch being tested by @curruscanis at the moment to see if it fixes a similar issue.
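For reference, a quick way to check that is with the standard systemd tools on the PVE host (nothing plugin-specific here):

Code:
# current state and recent restarts/crashes of the Proxmox status daemon
systemctl status pvestatd
# last 100 log lines - look for segfaults or unexpected exits
journalctl -u pvestatd -n 100 --no-pager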
 
I had not checked that status, and I moved all of my VMs to an NFS share on the same box... (I had a hardware issue to resolve; used = unstable :D ) No issues since. Now that I have repaired all the hardware issues, I will go back to testing. ATM I am using the 1.1.7 version and will move up ASAP.
 

[installer ASCII banner: "truenas plugin - For Proxmox VE"]
TrueNAS Plugin v1.2.4 - Installed
---------------------------------

p.s. will this ever support "snapshots" ?
 

The plugin does support snapshots? What are you referring to?

Feature Comparison

Feature                      | TrueNAS Plugin | Standard iSCSI | NFS
-----------------------------|----------------|----------------|-----
Snapshots                    | ✅             | ⚠️             | ⚠️
VM State Snapshots (vmstate) | ✅             | ✅             | ✅
Clones                       | ✅             | ⚠️             | ⚠️
Thin Provisioning            | ✅             | ⚠️             | ⚠️
Block-Level Performance      | ✅             | ✅             | ❌
Shared Storage               | ✅             | ✅             | ✅
Automatic Volume Management  | ✅             | ❌             | ❌
Automatic Resize             | ✅             | ❌             | ❌
Pre-flight Checks            | ✅             | ❌             | ❌
Multi-path I/O               | ✅             | ✅             | ❌
ZFS Compression              | ✅             | ❌             | ❌
Container Storage            | ❌             | ⚠️             | ✅
Backup Storage               | ❌             | ❌             | ✅
ISO Storage                  | ❌             | ❌             | ✅
Raw Image Format             | ✅             | ✅             | ✅

Legend: ✅ Native Support | ⚠️ Via Additional Layer | ❌ Not Supported
 
My bad, I was using "RAW" formatted disks ....
 
@warlocksyno In some cases, I can't migrate to the iSCSI storage unless I use the raw format. The target format choice is greyed out. Screenshot From 2025-12-23 10-35-45.png
This might just be a UI issue. Screenshot From 2025-12-23 10-40-12.png
 
Hmm, it should ALWAYS be RAW. Since NVMe/TCP and iSCSI are exclusively block storage, the format should be RAW.
IMHO, and I could be wrong, we should be able to change the format when leaving the plugin-supplied storage and going to, e.g., a local ZFS storage, to support qcow2 snapshots.
 
I also changed my zvol_blocksize to 128k in my most recent testing, as my systems will be for VM workloads... As mentioned in previous posts, I was reading through the documentation on the plugin and it recommends that.


Code:
truenasplugin: truenas-iscsi
        api_host 10.10.5.10
        api_key 3-vVeJEyC29sGwNEjpCfVnRBjHOKei8F3TekksP6bbr1NlStP4OHQ48UVmXl2laDiI
        dataset tank/proxmox-iscsi
        target_iqn iqn.2005-10.org.freenas.ctl:proxmox-iscsi-1
        api_insecure 1
        shared 1
        discovery_portal 10.10.5.10:3260
        zvol_blocksize 128k
        tn_sparse 1
        use_multipath 1
        portals 10.10.6.10:3260
        content images
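If it helps anyone, one way to confirm the block size the plugin actually applied is to check the zvols on the TrueNAS side (standard ZFS commands; vm-100-disk-0 below is just a placeholder name):

Code:
# list the zvols created under the configured dataset
zfs list -t volume -r tank/proxmox-iscsi
# show the volblocksize of one of them (replace with a real zvol name from the list)
zfs get volblocksize tank/proxmox-iscsi/vm-100-disk-0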

I'm following this thread with interest. Amazing work on this project.

I'm a bit curious about the documentation's recommendation for 128k volblocksize. It's my understanding that KVM virtual machines running on ZFS can get better I/O with 64k.

Has anyone done any A/B testing on that?

EDIT:
IMHO, and I could be wrong, we should be able to change the format when leaving the plugin-supplied storage and going to, e.g., a local ZFS storage, to support qcow2 snapshots.

1767666012669.png

My local ZFS storage locks me to RAW files. This has been the expected behavior since at least PVE 8, I think.
EDIT: The Proxmox documentation indicates that any storage target that presents itself as a block device only uses raw disk images.
 
@warlocksyno Just an update: I've been on branch "fix/websocket-fork-segfault" with 50+ VMs using the storage. This is my company's entire DEV environment; these are application VMs with heavy DB queries (the DBs are in another cluster). My devs haven't even noticed...
 
@warlocksyno is there an expectation that we should be able to use the Proxmox VM replication?

There are two possible reasons for this failure:
1) the LUN name isn't recognized
2) the system doesn't think it has a destination volume

The error "no replicatable volumes found" in Proxmox typically occurs when attempting to set up VM replication and the system cannot identify suitable storage volumes for replication. One common cause is a mismatch between the virtual machine (VM) ID and the associated disk ID in the storage configuration.
For example, if a VM has the ID 103 but its disk is labeled as vm-102-disk-0, this inconsistency prevents replication from being properly configured.

To resolve this issue, ensure that the disk ID matches the VM ID. This can be done by renaming the ZFS volume using the command zfs rename when the VM is shut down.
After renaming the volume (e.g., from vm-102-disk-0 to vm-103-disk-0), update the VM configuration either via the command qm set 103 --scsi0 local-zfs:vm-103-disk-0 or by manually editing the configuration file at /etc/pve/qemu-server/103.conf.
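As a concrete sketch of that fix (the rpool/data path is an assumption based on the default local-zfs layout, so adjust it to your pool):

Code:
# with VM 103 shut down, rename the mismatched zvol to match the VM ID
zfs rename rpool/data/vm-102-disk-0 rpool/data/vm-103-disk-0
# point the VM config at the renamed volume
qm set 103 --scsi0 local-zfs:vm-103-disk-0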

I was hoping that in the future I could deploy a second TrueNAS and target it for replication.
 

Attachments: Screenshot From 2026-01-06 13-48-42.png
@warlocksyno is there an expectation that we should be able to use the Proxmox VM replication?

I was hoping that in the future I could deploy a second TrueNAS and target it for replication.

Interesting... Can you give me a rundown of what you are trying to achieve in this setup and with what machines/nodes? I'll see if I can get a similar setup going and get a fix started.
 
I'm a bit curious about the documentation's recommendation for 128k volblocksize. It's my understanding that KVM virtual machines running on ZFS can get better I/O with 64k.

It depends on the workload and how your TrueNAS pool is set up, surprisingly. I'll be talking with TrueNAS/iXSystems to see what their recommendation on this is, since AFAIK it's heavily dependent on your disk layout + workload.
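For anyone wanting to A/B test 64k vs 128k themselves, a simple fio run inside a test VM against a dedicated scratch disk on the storage under test is a reasonable starting point (the parameters below are just an example, not a tuned benchmark, and the target disk will be overwritten):

Code:
# 70/30 random read/write mix against a scratch disk (destructive - do not point at a disk with data!)
fio --name=volblock-ab --filename=/dev/sdb --direct=1 --rw=randrw --rwmixread=70 \
    --bs=64k --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting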
 
Interesting... Can you give me a run down of what you are trying to achieve in this setup and with what machines/nodes? I'll see if I can get a similar setup going and get a fix started.
This is my current ENV. I have one dedicated TrueNAS host exporting both iSCSI and NFS. This is a common VM configuration.
I could be wrong, but if the storage is "Shared" and I have another storage considered "Shared", I should be able to use Proxmox "Replication" to create a standby "Disk" for HA/DR. At the very least, a copy I can manually reconfigure the VM to use on the second "Shared" storage.
 

Attachments: Screenshot From 2026-01-09 12-02-50.png, Screenshot From 2026-01-09 12-05-15.png
This is my current ENV. I have one dedicated TrueNAS host exporting both iSCSI and NFS. This is a common VM configuration.
I could be wrong, but if the storage is "Shared" and I have another storage considered "Shared", I should be able to use Proxmox "Replication" to create a standby "Disk" for HA. At the very least, a copy I can manually reconfigure the VM to use on the second "Shared" storage.
I moved my repo to alpha and updated.... it still shows "orphans" but won't clean them...
 

Attachments: Screenshot From 2026-01-09 12-32-42.png
standby "Disk" for HA \ DR. At the very least a copy I can manually reconfigure the VM to use on the second "Shared" storage.

Within the cluster itself? If so, I don't think you need the replication feature. Replication is for when you have non-shared storage between the nodes, such as local storage like a ZFS pool on each host. You can then use the replication feature to have the nodes replicate the data to the other nodes so they all have a 1:1 copy of the data. The VM config is always synced between all cluster members though, so that is already covered by default.

If all 3 nodes are set up to use the plugin correctly, with HA turned on, then if the HA manager decides the VM needs to be on a different node for any reason, that node should automatically connect the LUNs as needed.
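For example, putting a VM under HA is just (standard Proxmox commands; VM ID 100 is a placeholder):

Code:
# register the VM with the HA manager so it can be restarted/relocated on another node automatically
ha-manager add vm:100 --state started
# verify the HA state across the cluster
ha-manager status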

If you are wanting to duplicate the data onto another storage, I'm not aware of Proxmox being able to do that natively unless you set up the other storage as a backup destination, which you could then do a live-restore from.

But maybe I'm misunderstanding the intent?

As for multipathing, you will have to have the portal on TrueNAS shared out on more than one of the network interfaces, and since TrueNAS does not allow two or more interfaces to share the same subnet, you would also have to have the matching subnets set up on the Proxmox side.

For instance:

Code:
┌─────────────┐
│  TrueNAS    │
│             │
│ eth0: 10.15.14.172/23 ────────┐
│ eth1: 10.30.30.2/24 ───────┐  │
└─────────────┘              │  │
                             │  │
                  ┌──────────┘  │
                  │  ┌──────────┘
                  │  │
               ┌──▼──▼────────┐
               │  Switch(es)  │
               └──┬──┬────────┘
                  │  │
                  │  └──────────┐
                  └──────────┐  │
                             │  │
┌─────────────┐              │  │
│  Proxmox    │              │  │
│             │              │  │
│ eth2: 10.15.14.89/23 ◄─────┘  │
│ eth3: 10.30.30.3/24 ◄─────────┘
└─────────────┘

From:
https://github.com/WarlockSyno/True...a/wiki/Advanced-Features.md#multipath-io-mpio
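Tying that back to the storage config posted earlier in the thread, the matching entries for the diagram above would look roughly like this (a sketch reusing the key names from the earlier truenasplugin stanza; the IPs are the example ones from the diagram):

Code:
        discovery_portal 10.15.14.172:3260
        portals 10.30.30.2:3260
        use_multipath 1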
 