Proxmox VE and ZFS over iSCSI on TrueNAS Scale: My steps to make it work.

Funny issue...
If I try to create or migrate a disk to the iSCSI storage, I receive the error:

```
create full clone of drive virtio0 (store01:vm-252-disk-0)
Warning: volblocksize (4096) is less than the default minimum block size (8192). To reduce wasted space a volblocksize of 8192 is recommended.
Use of uninitialized value $target_id in concatenation (.) or string at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 787.
Use of uninitialized value $target_id in numeric eq (==) at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 753.
Use of uninitialized value $target_id in concatenation (.) or string at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 787.
Use of uninitialized value $target_id in concatenation (.) or string at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 787.
TASK ERROR: storage migration failed: Unable to find the target id for iqn.storage-backup.ctl:vmfs at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 259.
```

I've no idea where to look...
BTW: the disk somehow does get created on the storage...
Reply to my own post:
  1. I now put "iqn.storage-backup.ctl" as the target name under Global Configuration in TrueNAS and "vmfs" as the target name under Targets.
  2. Now I can create and delete disks when creating and deleting VMs.
  3. But I'm still unable to move disks from the local storage of a cluster node to the iSCSI box.
    The error is now:

```
create full clone of drive virtio0 (store01:vm-252-disk-0)
Warning: volblocksize (4096) is less than the default minimum block size (8192). To reduce wasted space a volblocksize of 8192 is recommended.
iscsiadm: No session found.
drive mirror is starting for drive-virtio0
drive-virtio0: Cancelling block job
drive-virtio0: Done.
TASK ERROR: storage migration failed: mirroring error: VM 252 qmp command 'drive-mirror' failed - iSCSI: Failed to connect to LUN : Failed to log in to target. Status: Authorization failure(514)
```
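For reference, a ZFS-over-iSCSI storage definition in `/etc/pve/storage.cfg` looks roughly like this (a sketch only: the pool name is illustrative, the portal and target are taken from this thread, and `blocksize 8k` avoids the volblocksize warning shown above):

```
zfs: store01
        iscsiprovider freenas
        portal 10.0.0.116
        target iqn.storage-backup.ctl:vmfs
        pool tank/vmstorage
        blocksize 8k
        sparse 1
```

Adjust the names to your own setup before use.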
 
I recommend reporting this to the plugin author.

Unfortunately, this is likely a consequence of the storage vendor introducing new, benign, and (from their perspective) helpful warnings to their CLI output that trip up applications, such as the plugin, that parse that output.
A CLI should generally be considered an unstable interface and avoided in automation if at all possible. The better approach is to use an API where one is available, but that requires a much larger investment of time and resources.

The problem with the subsequent storage operation is likely that the original provisioning did not fully complete and was left in a "dangling" state. Ideally, things should be rolled back on failure so that a clean retry is possible. Your choice now is either to manually complete the provisioning task (iSCSI target/extent, etc.) or to remove the half-created volumes, fix the plugin, and retry from the start.
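A minimal cleanup sketch for the second option, assuming the dangling zvol is named `vm-252-disk-0` under a pool called `tank` (both names are illustrative; run on the TrueNAS side and double-check before destroying anything):

```
# List zvols left behind by the failed migration
zfs list -t volume

# Remove the half-created zvol (destructive - verify the name first!)
zfs destroy tank/vm-252-disk-0
```

Any matching iSCSI extent/target mapping on the TrueNAS side should be removed as well, so the next attempt starts clean.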

Good luck.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I've found it - as so often, the problem was sitting in front of the screen...
I simply forgot to add the initiator name of the 2nd cluster node to the FreeNAS list of allowed initiators (initiator groups). Now it's working like a charm...
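For anyone else hitting the same Authorization failure (514): each Proxmox node has its own initiator IQN, and every node in the cluster must be allowed in the TrueNAS initiator group. The IQN can be read on each node from the standard open-iscsi location (example output is illustrative):

```
# Run on every cluster node; each prints its own IQN
cat /etc/iscsi/initiatorname.iscsi
# InitiatorName=iqn.1993-08.org.debian:01:abcdef123456
```

Add every node's IQN to the initiator group, or migrations will only work from the node that happens to be allowed.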
 
I've been struggling with this for quite a while. Where can I get some more debug logs to try and figure out what's gone wrong? This is the only error message that I can seem to track down.

I'm sure I'm doing something stupid wrong.


> First, on TrueNAS Scale, I have a ZFS dataset with a bunch of space. Initially I wanted to create a zvol under it and limit the space for VMs, but interestingly this doesn't work; you get the error "parent is not a filesystem." I dunno, but mapping it directly to the dataset works, so keep that in mind: either make it its own dataset or expect your VM drives to be in the root of the dataset next to other storage. Record the exact name of the dataset for later, visible under "Path" in the details for the dataset.

Does this mean not to use the pool as `nvme/vmstorage`, just `vmstorage`?

Thanks for any assistance :)

```
Warning: volblocksize (4096) is less than the default minimum block size (8192).
To reduce wasted space a volblocksize of 8192 is recommended.
cannot create 'nvme/vmstorage/vm-100-disk-0': parent is not a filesystem
TASK ERROR: unable to create VM 100 - command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/10.0.0.116_id_rsa root@10.0.0.116 zfs create -b 4k -V 33554432k nvme/vmstorage/vm-100-disk-0' failed: exit code 1
```
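The "parent is not a filesystem" error comes from ZFS itself: a zvol cannot have children, so if `nvme/vmstorage` is a zvol rather than a dataset, the plugin's `zfs create -V ... nvme/vmstorage/vm-100-disk-0` must fail. A quick check (run on the TrueNAS side; the name is taken from the error above):

```
# Show whether the parent is a filesystem (dataset) or a volume (zvol)
zfs get -H -o value type nvme/vmstorage
# "filesystem" -> the plugin can create zvols under it
# "volume"     -> zvols cannot be nested; point the storage at a dataset instead
```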

For the next person who runs into this situation... in the "Pool" section don't use `nvme/vmstorage`... just use `nvme`.
 
The Pool field can be the path to any *dataset*, not a zvol. The plugin creates the zvols in the specified dataset and manages the iSCSI side for you.
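If you'd rather not have VM disks in the pool root, a dedicated child *dataset* (not a zvol) also works as the Pool value. A sketch, assuming a pool named `nvme` (run on TrueNAS):

```
# Create a plain dataset (a filesystem, not a zvol) to hold the VM zvols
zfs create nvme/vmstorage
# Then use "nvme/vmstorage" as the Pool value in the Proxmox storage definition
```

The key is that the Pool value must resolve to something with `type=filesystem`, because the plugin nests zvols underneath it.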
 
Hi everyone
I am trying to get this running, but for the life of me the wizard does not give me a FreeNAS option in the provider dropdown. I followed the installation instructions on GitHub. I installed via apt, not via a git clone, so I did not manually apply any patches or the like.

Does anyone know what I did wrong?
 
I have rebooted both nodes twice.
Then I would look at the patches listed on GitHub and check the corresponding files in your installation: are the changes there?
It's possible the latest plugin release is not compatible with the latest PVE release. Generally, if a patch cannot be applied cleanly, it will fail.
I'd recommend reaching out to the plugin developer, either directly or via GitHub issues.
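A quick sanity check along those lines (the FreeNAS.pm path appears in the error messages earlier in this thread; the ZFSPlugin.pm check is an assumption about where the patches land):

```
# Is the plugin's LunCmd module installed at all?
ls -l /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm

# Do the patched PVE storage files mention the "freenas" provider?
grep -ri freenas /usr/share/perl5/PVE/Storage/ZFSPlugin.pm
```

If the grep comes back empty, the patches were not applied, which would explain the missing dropdown entry.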

This is why we don't modify any of the PVE code in our plugin. The changes could be silently lost on the next upgrade, and you would either need to re-apply them yourself or wait for the developer's update.

Good luck

PS - this is all assuming you installed things properly and did not miss an error or two.

 
