Proxmox VE and ZFS over iSCSI on TrueNAS Scale: My steps to make it work.

Funny issue...
If I try to create or migrate a disk to the iSCSI storage, I receive the following error:

create full clone of drive virtio0 (store01:vm-252-disk-0)
Warning: volblocksize (4096) is less than the default minimum block size (8192). To reduce wasted space a volblocksize of 8192 is recommended.
Use of uninitialized value $target_id in concatenation (.) or string at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 787.
Use of uninitialized value $target_id in numeric eq (==) at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 753.
Use of uninitialized value $target_id in concatenation (.) or string at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 787.
Use of uninitialized value $target_id in concatenation (.) or string at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 787.
TASK ERROR: storage migration failed: Unable to find the target id for iqn.storage-backup.ctl:vmfs at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 259.

I've no idea where to look...
BTW: the disk somehow does get created on the storage...
Reply to my own post:
  1. I now put "iqn.storage-backup.ctl" as the target name under the global configuration on TrueNAS and "vmfs" as the target name under Targets.
  2. Now I can create and delete disks when creating and deleting VMs.
  3. But - I'm still unable to move disks from the local storage of a cluster node to the iSCSI box (see the session check after this list).
    The error is now:
    create full clone of drive virtio0 (store01:vm-252-disk-0)
    Warning: volblocksize (4096) is less than the default minimum block size (8192). To reduce wasted space a volblocksize of 8192 is recommended.
    iscsiadm: No session found.
    drive mirror is starting for drive-virtio0
    drive-virtio0: Cancelling block job
    drive-virtio0: Done.
    TASK ERROR: storage migration failed: mirroring error: VM 252 qmp command 'drive-mirror' failed - iSCSI: Failed to connect to LUN : Failed to log in to target. Status: Authorization failure(514)
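A quick way to narrow down an "Authorization failure (514)" like this is to check the iSCSI layer directly from the PVE node. A minimal sketch with open-iscsi; the portal address is a placeholder, the target IQN is taken from the log above:

```
# Is there an active iSCSI session to the TrueNAS portal?
iscsiadm -m session

# Can the node at least discover the target on the portal address?
iscsiadm -m discovery -t sendtargets -p 192.168.10.10

# Try a manual login; an authorization error here points at the initiator
# group / CHAP settings on TrueNAS rather than at the plugin itself.
iscsiadm -m node -T iqn.storage-backup.ctl:vmfs -p 192.168.10.10 --login
```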
 
I recommend reporting this to the plugin author.

Unfortunately, this is likely a consequence of the storage vendor introducing new, benign and helpful (from their perspective) warnings into their CLI output that trip up the applications (here, the plugin) that rely on that output.
A CLI should generally be considered an unstable interface and avoided in automation if at all possible. The better approach is to use an API where one is available, but that requires a much higher investment of time and resources.
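As an example of the difference, TrueNAS exposes a REST API that returns structured JSON rather than free-form CLI text. A minimal sketch, assuming an API key and the v2.0 endpoint layout; the hostname is a placeholder:

```
# List the configured iSCSI targets as JSON instead of scraping CLI output
curl -s -H "Authorization: Bearer $TRUENAS_API_KEY" \
  "https://nas.home.local/api/v2.0/iscsi/target"
```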

The problem with the subsequent storage operation is likely due to the original provisioning not completing and being left in a "dangling" state. Ideally, things should be rolled back on failure so that a clean retry is possible. Your choice now is to either manually complete the provisioning (iSCSI target, extent, etc.) or remove the half-created volumes, fix the plugin, and retry from the start.
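If you go the cleanup route, it can look roughly like this on the TrueNAS side; the dataset path is only an example, take the real zvol name from `zfs list`:

```
# Find the half-created zvol left behind by the failed migration
zfs list -t volume -o name,volsize

# Remove it (example path; double-check the name before destroying anything)
zfs destroy tank/vmstore/vm-252-disk-0

# Then delete the matching iSCSI extent and target/extent mapping on TrueNAS
# so the next attempt starts from a clean slate.
```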

Good luck.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I've found it - as so often, the problem was in front of the screen...
I just forgot to add the initiator name of the 2nd cluster node to the FreeNAS list of allowed initiators (initiator groups). Now it's working like a charm...
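For anyone else hitting this: every PVE node has its own initiator IQN, and each of them must be in the TrueNAS initiator group. You can read it straight off each node:

```
# Print this node's iSCSI initiator name (repeat on every cluster node)
cat /etc/iscsi/initiatorname.iscsi
# Typical output on a PVE/Debian node:
# InitiatorName=iqn.1993-08.org.debian:01:abcdef123456
```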
 
I've been struggling with this for quite a while. Where can I get some more debug logs to try to figure out what's gone wrong? This is the only error message that I can seem to track down.

I'm sure I'm doing something stupidly wrong.


> First, on TrueNAS Scale, I have a ZFS dataset with a bunch of space. Initially I wanted to create a zvol under it and limit the space for VMs, but interestingly this doesn't work; you get the error "parent is not a filesystem." I dunno, but mapping it directly to the dataset works, so keep that in mind; either make it its own dataset or expect your VM drives to be in the root of the dataset next to other storage. Record the exact name of the dataset for later, visible under "Path" in the details for the dataset.

Does this mean not to use the pool as `nvme/vmstorage` but just `vmstorage`?

Thanks for any assistance :)

```
Warning: volblocksize (4096) is less than the default minimum block size (8192).
To reduce wasted space a volblocksize of 8192 is recommended.
cannot create 'nvme/vmstorage/vm-100-disk-0': parent is not a filesystem
TASK ERROR: unable to create VM 100 - command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/10.0.0.116_id_rsa root@10.0.0.116 zfs create -b 4k -V 33554432k nvme/vmstorage/vm-100-disk-0' failed: exit code 1
```
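One way to see why that `zfs create` fails is to check, on TrueNAS, whether the parent you pointed the plugin at is a filesystem dataset or a zvol; zvols cannot contain child datasets. A small sketch, with the output shown for the failing case:

```
# Show the type of the parent the plugin creates zvols under
zfs list -o name,type nvme/vmstorage
# NAME            TYPE
# nvme/vmstorage  volume   <- a zvol, so nvme/vmstorage/vm-100-disk-0 cannot be created
```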

 
For the next person that runs into this situation... in the "Pool" section, don't use `nvme/vmstorage`, just use `nvme`.
 
The pool field can be the path to any *dataset*, not a ZVOL. The plugin creates the zvols in the specified pool and manages the iSCSI stuff.
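In practice that means creating (or reusing) a filesystem dataset on TrueNAS and pointing the plugin's Pool field at it. A minimal sketch; the dataset name is only an example:

```
# On TrueNAS: create a dedicated filesystem dataset for Proxmox disks
zfs create nvme/proxmox

# Optionally cap how much space VM disks may use
zfs set quota=500G nvme/proxmox

# In the PVE storage definition, set Pool to "nvme/proxmox"; the plugin then
# creates zvols such as nvme/proxmox/vm-100-disk-0 underneath it.
```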
 
Hi everyone
I am trying to get this running, but for the life of me the wizard does not give me a FreeNAS option in the provider dropdown. I followed the installation instructions on GitHub. I installed via apt, not via a git clone, so I did not manually execute any patches or the like.

Does anyone know what I did wrong?
 
> I have rebooted both nodes twice.
Then I would look at the patches listed on GitHub and check the corresponding files in your installation: are the changes there?
It's possible the latest plugin release is not compatible with the latest PVE release. Generally, if the patch cannot be applied cleanly, it will fail.
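A rough way to check that, assuming the plugin was installed from the apt package; the pvemanagerlib.js path is an assumption based on where the GUI's provider dropdown lives:

```
# Is the plugin package installed, and which version?
dpkg -l | grep -i freenas

# Is the plugin's LunCmd module in place?
ls -l /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm

# Was the GUI patched? If this prints nothing, the provider dropdown was
# never extended and "freenas" will not appear in the wizard.
grep -i freenas /usr/share/pve-manager/js/pvemanagerlib.js

# After re-applying the patches, restart the web UI and clear the browser cache
systemctl restart pveproxy
```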
I'd recommend reaching out to the plugin developer, either directly or via GitHub issues.

This is why we don't modify any of the PVE code in our plugin. Such changes can be lost without warning on the next upgrade, and you will then either need to adapt them yourself or wait for an update from the developer.

Good luck

PS - this is all assuming you properly installed things and did not miss a step or two.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I will say I have successfully installed the plugin with a PVE 8.2.2 deployment and TrueNAS Core 13 U6.1, and have a ZFS over iSCSI store working. I'm able to do thin provisioning as well, which is a nice feature. However, I'm not able to take snapshots. I wish that were functional; I will do a little digging to see if it is resolvable.
 
For those that come across this, my snapshotting issue with ZFS over iSCSI was caused by having a TPM State disk on the VM, which at the time of writing is RAW only. Removal of the TPM disk allowed snapshotting on the ZFS over iSCSI disk. The TPM State RAW limitation is currently tracked on Bugzilla here: https://bugzilla.proxmox.com/show_bug.cgi?id=4693
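To check whether a VM is affected without clicking through the GUI (the VM ID is just an example):

```
# Does the VM have a TPM state disk? (raw-only at the time of writing)
qm config 100 | grep tpmstate
# e.g.  tpmstate0: store01:vm-100-disk-1,size=4M,version=v2.0

# Removing it discards the TPM state but re-enables snapshots on this storage
qm set 100 --delete tpmstate0
```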
 
I searched around about doing ZFS over iSCSI with TrueNAS Scale, but I couldn't find anyone doing TrueNAS management and iSCSI on different networks. My setup does management on the 10.0.50.0/24 subnet and iSCSI on the 192.168.10.0/24 and 192.168.20.0/24 subnets.

Is it supported by the TheGrandWazoo plugin? If yes, how should I configure it with the Proxmox GUI?

Also, my TrueNAS is now at the latest version, Dragonfish-24.04.2.1, and I noticed that the "allow root password login" checkbox is missing from the SSH service configuration interface. I enabled SSH password login by editing the root account, checked "allow password authentication" in the SSH service configuration, and then restarted the SSH service on TrueNAS.

I tried logging in over SSH as root with the password, but it keeps prompting me for the password. Has anyone encountered that?
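Note that the ZFS over iSCSI backend does not log in with a password at all; PVE expects a passwordless key at /etc/pve/priv/zfs/<address>_id_rsa (the same path visible in the task error earlier in the thread, named after the address PVE uses for SSH). A sketch of the usual key setup, with placeholder addresses:

```
# On one PVE node; /etc/pve is clustered, so all nodes see the key
mkdir -p /etc/pve/priv/zfs
ssh-keygen -f /etc/pve/priv/zfs/192.168.10.10_id_rsa -N ""

# Install the public key for root on TrueNAS (or paste it into the root
# user's "Authorized Keys" field in the TrueNAS UI)
ssh-copy-id -i /etc/pve/priv/zfs/192.168.10.10_id_rsa.pub root@192.168.10.10

# Verify a non-interactive login works from every node
ssh -i /etc/pve/priv/zfs/192.168.10.10_id_rsa root@192.168.10.10 zfs list
```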
 
@wyss, not sure if you got this working yet.

Yes, my setup is similar, with separate iSCSI and mgmt networks, although I have not found a convenient way to do multipath and deemed it not worth it in my deployment. I started on TrueNAS Core but recently upgraded to Dragonfish. I did a bunch of testing on Dragonfish before upgrading. No issues with the plugin apart from needing to transition from '/' in the volume names to '-'.

That said, make sure to use the iSCSI IP address for the portal, but use the mgmt IP address for the API IPv4 Host. I presume you've tested already, so this is for anyone else that may come across this: test connectivity with pings on each network from the PVE hosts to the storage first to make sure there are no issues before trying to connect with the plugin. Also, make sure you have the TrueNAS target fully configured before the PVE side.

If you're still having issues, will you share how you're configuring it?
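For reference, roughly what the resulting entry in /etc/pve/storage.cfg can look like with split networks; all names and addresses are placeholders, and the freenas_* keys come from the TheGrandWazoo plugin, so they may differ between plugin versions:

```
zfs: truenas-iscsi
        iscsiprovider freenas
        pool nvme/proxmox
        portal 192.168.10.10
        target iqn.2005-10.org.freenas.ctl:proxmox
        content images
        blocksize 16k
        sparse 1
        freenas_apiv4_host 10.0.50.10
        freenas_user root
        freenas_password secret
        freenas_use_ssl 1
```

Here `portal` sits on the iSCSI data network while `freenas_apiv4_host` points at the management network, matching the portal/API split described above.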
 
I have encountered this too. It looks like when the VM disk is big, latency during the copy causes it, but if you try a clone or move operation with a 1 GB disk, the error does not appear.
 
Regarding this issue: you have not configured your storage properly on TrueNAS. You need to set up your dataset as a filesystem on TrueNAS and configure your iSCSI target so that it uses the proper pool name you set on TrueNAS.
 
Sorry for getting back late; I had been quite busy over the last few months...

I redid the testing and nearly got it to work; these are the problems I had over the last few miles.
NOTE: I am using the latest version of the plugin, as of this time.
  1. A 4K ZFS block size no longer works; it prompts you with the error below. The error went away when I reconfigured the storage with a 16K block size.
    Code:
    Warning: volblocksize (4096) is less than the default minimum block size (16384). To reduce wasted space a volblocksize of 16384 is recommended.

  2. The plugin, or more likely the PVE host, doesn't like my TrueNAS doing HTTPS. I am using self-signed CA certs on the TrueNAS, so I guess I need to upload the CA cert to make PVE trust it. I'm still looking for a way to do it, so please share if you know one (see the sketch after this list).
    Code:
    TASK ERROR: unable to create VM 111 - Unable to connect to the FreeNAS API service at 'nas.home.local' using the 'https' protocol at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 380.
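If the API connection really is being rejected over TLS, one generic Debian-side option (not specific to this plugin, and moot if the real cause turns out to be something else) is to add the self-signed CA to the PVE node's system trust store:

```
# Copy the CA certificate onto the PVE node; the .crt extension is required
cp my-home-ca.crt /usr/local/share/ca-certificates/my-home-ca.crt
update-ca-certificates

# Quick TLS sanity check against the TrueNAS host after updating the trust store
curl -v https://nas.home.local/ -o /dev/null
```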
 
@spiffman192 OK, I figured things out. The warning in #1 doesn't really matter; it's just a warning. Issue #2 turned out to be due to 2FA; I didn't realize it until after long hours of testing and troubleshooting, lol.

As for my initial question about multipathing, it is likely not supported; I found some old discussions here and here.
 
Yeah, the first thread on multipathing is what led me to abandon it originally too. Need to poke the QEMU devs to get it worked out. Good job on hammering through it and getting it working!
 
