Proxmox VE and ZFS over iSCSI on TrueNAS Scale: My steps to make it work.

surfrock66

Active Member
Feb 10, 2020
40
8
28
41
I found a lot of threads of people searching, testing, and struggling with this, but I was able to make it work. I wanted to document my exact steps in the hope it helps someone else. I have a VM provisioned and working, with snapshots and migrations.

First, on TrueNAS Scale, I have a ZFS dataset with a bunch of space. Initially I wanted to create a zvol under it and limit the space for VMs, but interestingly this doesn't work; you get the error "parent is not a filesystem." Mapping Proxmox directly to the dataset works, so keep that in mind: either make it its own dataset or expect your VM drives to sit in the root of the dataset next to other storage. Record the exact name of the dataset for later; it's visible under "Path" in the details for the dataset.

Then go to Shares, then Block (iSCSI). Most of my config was sourced from here: https://www.truenas.com/blog/iscsi-shares-on-truenas-freenas/ If you go through the Wizard, give the share a "Name", set the extent type to "Device", pick your dataset from the Device dropdown, and set the sharing platform to "Modern OS: Extent block size 4k, TPC enabled, no Xen compat mode, SSD speed". I created the target outside of this wizard, which I describe just below.

First, click Add/Configure, where you'll get a series of tabs. On the "Target Global Configuration" tab, I created an IQN for the Base Name; there is documentation out there for what it should be, but mine is "iqn.YYYY-MM.tld.domain.subdomain.nashost", given that my home domain is subdomain.domain.com, YYYY-MM is the month I made this, and nashost is my NAS hostname. I set the available space threshold to 15%, and the port is the default 3260.

Under Portals, I added a new portal. The name is "nashostname Portal", and I added the NAS's storage-network IP. I have nothing for discovery authentication method or group.

For the Initiators Groups, we need the IQN of each Proxmox server. I couldn't find this in the GUI, but running "cat /etc/iscsi/initiatorname.iscsi" on the Proxmox host will give it to you. I added the initiator and gave it the name of the Proxmox host.
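For reference, on a stock Proxmox (Debian) install the output looks something like this; the hostname in the prompt and the random suffix after the last colon are just examples, yours will differ:

Code:
root@pve01:~# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1993-08.org.debian:01:abcdef123456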

I have nothing in Authorized Access, but if you wanted to use CHAP you would set it up there.

I created a new target, just named "target01". I don't have a dedicated subnet for my Proxmox hosts, so under "Authorized Networks" I added the Proxmox host IPs with a /32 CIDR. For later, when plugging this into Proxmox, the IQN will be the IQN Base, then ":", then the target name, so "iqn.YYYY-MM.tld.domain.subdomain.nashost:target01". You also need to add an iSCSI group: select the Portal Group ID you created from the dropdown, and the Initiator Group ID you created before from the dropdown.

We do NOT create any Extents; they will be made by the API plugin.

We do NOT create any Associated Targets; they will be associated by the API plugin when we create a drive.

Proxmox will need to communicate with TrueNAS over SSH, so ensure SSH is enabled and root login is allowed. In my case I run SSH on a non-standard port, which is an added complication that may not affect most people; I address it below.

I use the freenas-proxmox plugin by TheGrandWazoo to get this working. The installation instructions are here: https://github.com/TheGrandWazoo/freenas-proxmox Once you install it and reboot or restart the services, you should see a new provider when adding ZFS over iSCSI storage to the Datacenter in Proxmox:

[Screenshot: the Add ZFS over iSCSI dialog with the FreeNAS provider available in the iSCSI Provider dropdown]
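For completeness, once you've added the plugin's repository and key per that README, the install itself boils down to roughly this (package and service names are as I remember them from the README; double-check there, since the repo details change over time):

Code:
apt update
apt install freenas-proxmox
systemctl restart pvedaemon pveproxy pvestatd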

I liked the steps in this guide about doing the SSH key exchange: https://xinux.net/index.php/Proxmox_iscsi_over_zfs_with_freenas

Code:
portal_ip=10.x.x.x   # plug in your actual TrueNAS IP here, no asterisks
mkdir -p /etc/pve/priv/zfs
# note the braces: without them the shell looks for a variable named portal_ip_id_rsa
ssh-keygen -f /etc/pve/priv/zfs/${portal_ip}_id_rsa
ssh-copy-id -i /etc/pve/priv/zfs/${portal_ip}_id_rsa.pub root@${portal_ip}

In my case, since my TrueNAS uses SSH on an alternate port, I added "-p ###" as a parameter to those commands as well.
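In other words, with an alternate port (2222 here is just an example) the key copy looks like:

Code:
ssh-copy-id -p 2222 -i /etc/pve/priv/zfs/${portal_ip}_id_rsa.pub root@${portal_ip}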

Additionally, to support a non-standard port, you need to edit "~/.ssh/config" on the Proxmox host to include the following two lines with your NAS IP and SSH port (if you use the standard port, skip this):

Code:
Host 10.*.*.*
    Port ###
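Either way, before moving on it's worth confirming that key-based root login works non-interactively from the Proxmox host, since that's what the plugin relies on; something like:

Code:
ssh -i /etc/pve/priv/zfs/${portal_ip}_id_rsa root@${portal_ip} zfs list

If that prompts for a password, fix the key setup before touching the Proxmox GUI.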

OK, back to the UI for adding a new ZFS over iSCSI storage. Name it whatever you like. The Portal is the IP of the NAS. The Pool needs to be the Path of the dataset recorded above. The Target needs to be the "IQN Base:target" you created in TrueNAS, as described above. The API Username is root, the iSCSI Provider is freenas, the API IPv4 Host is the IP of the NAS, and the API Password is the root password of the TrueNAS box. I did NOT use thin provisioning, and I do have write cache enabled. Once that is done, the LUN shows up in the sidebar on the left and in the storage list for the datacenter.
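For reference, the end result in /etc/pve/storage.cfg looks roughly like this. The storage ID, pool path, IP, and IQN below are placeholders standing in for the values described above, the blocksize is whatever you picked in the dialog, and the freenas_* option names are as I recall them from the plugin's docs, so verify them against the version you installed:

Code:
zfs: truenas-vmstore
        pool tank/vmstore
        portal 10.x.x.x
        target iqn.YYYY-MM.tld.domain.subdomain.nashost:target01
        iscsiprovider freenas
        blocksize 4k
        content images
        freenas_apiv4_host 10.x.x.x
        freenas_user root
        freenas_password <truenas root password>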

And that's it; next time I create a VM, the NAS is available as a storage backing device. If I go back to TrueNAS and look at the dataset under Storage, I can see the VM volume there, and if I go back to the iSCSI share, I now see the extent and the Associated Target. It seems to be working well; if anyone would do anything differently I'm open to feedback, but if people are struggling, this got me working.
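If you'd rather check from the TrueNAS shell than the GUI, the per-disk zvols the plugin creates show up under the dataset; the names and sizes below are just an example:

Code:
root@nashost:~# zfs list -t volume -r tank/vmstore
NAME                         USED  AVAIL  REFER  MOUNTPOINT
tank/vmstore/vm-100-disk-0  33.0G  1.20T    56K  -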
 
I tested it with the same plugin and got some errors while creating and deleting VM disks. How does it look now?
Can you use this for LXC as well? The docs say no.
 
Dunno, I'm not using LXC. All I know is my VM is happily running, backed by the ZFS datastore on TrueNAS over iSCSI:

[Screenshot: a running VM with its disk on the ZFS-over-iSCSI storage]
 
@surfrock66 Thanks for the detailed write-up. I followed the exact steps but I am getting the following error:

Code:
Apr 23 00:45:08 proxmox pvedaemon[171829]: unable to create VM 112 - Unable to connect to the FreeNAS API service at '192.168.1.108' using the 'http' protocol at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 380.
Apr 23 00:45:08 proxmox pvedaemon[49596]: <root@pam> end task UPID:proxmox:00029F35:19B48715:6444B7D3:qmcreate:112:root@pam: unable to create VM 112 - Unable to connect to the FreeNAS API service at '192.168.1.108' using the 'http' protocol at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 380.

Any tips or insight if you ran into this same issue?
 
I get to this point but the PVE box seems to hang with no error. Not sure what's wrong there. I have enabled root login in TrueNAS Scale and set a password for the root user (using the admin user set up with TrueNAS). I can manually SSH into TrueNAS as root with no issue.
ssh-copy-id -i /etc/pve/priv/zfs/$portal_ip_id_rsa.pub root@$portal_ip

EDIT: nvm ... i'm an idiot ... had a small typo
 
I just seem to have set this up successfully, though I allocated (non-thin/sparse) 1 TiB in TrueNAS, and it's showing as 1.12 TB used of 2.96 TB total in Proxmox, and I have no idea why.

Edit: Upon closer reading, I realized you did not use the wizard, which forces you to create or target a zvol, and I had pointed the target at a zvol. Pointing it at the dataset shows 93 KB used out of 1.85 TB, my actual dataset size. That makes sense.

Regarding the error you saw around 'parent is not a filesystem': if you look at the failed ssh command, it's trying to treat the zvol location within the pool as a dataset and create a disk zvol child under the parent, which in this case is a zvol, so it can't work. Makes sense it would error out. See here near the top: "For each guest disk it creates a ZVOL and exports it as an iSCSI LUN."
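To make that concrete, this is roughly the shape of what the plugin runs over SSH when a disk is created (the dataset and disk names here are made up); the parent of the new zvol has to be a filesystem dataset, not another zvol:

Code:
# parent is a filesystem dataset -> this works
zfs create -b 8k -V 32G tank/proxmox/vm-100-disk-0

# if tank/proxmox were itself a zvol, the same command fails with:
#   cannot create 'tank/proxmox/vm-100-disk-0': parent is not a filesystem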
 

[Attachments: truenas-iscsi.PNG, proxmox-iscsi.PNG (screenshots of the TrueNAS and Proxmox iSCSI config)]
Thank you for the guide. I have set it up successfully, but when I try to clone a VM, it keeps spitting out this error and the speed is super slow. The only difference is I'm using thin provisioning with an 8k block size.
Code:
qemu-img: iSCSI GET_LBA_STATUS failed at lba 0: SENSE KEY:ILLEGAL_REQUEST(5) ASCQ:INVALID_FIELD_IN_CDB(0x2400)
 
Am I supposed to create a dataset or a zvol? I tried a zvol and got the "parent is not a filesystem" error (yeah, the whole point of ZFS over iSCSI is that ZFS is put on top of the iSCSI block device), and I tried a dataset but that was no good either. There's missing information here.
 
yeah, the whole point of ZFS over iSCSI is that ZFS is put on top of the iSCSI block device
I don't know the answer to your main question, but you got the above reversed: the iSCSI export is created on top of a ZFS volume (zvol).

What you do with the resulting raw iSCSI LUN is up to you; it could be ZFS or NTFS. That's done inside the VM and is not related to the plugin.



 
I don't know the answer to your main question, but you got the above reversed: the iSCSI export is created on top of a ZFS volume (zvol).

What you do with the resulting raw iSCSI LUN is up to you; it could be ZFS or NTFS. That's done inside the VM and is not related to the plugin.



I wouldn't call that ZFS over iSCSI, because ZFS isn't being overlaid on iSCSI. In any case, I did manage to get it working. I'll have to make a write-up on how to get this mess (no offence intended to surfrock66) working, but the thing I got wrong was creating an extent; you don't need to do that. Set up a child dataset wherever you like (in my case nvme/proxmox) and a target in the iSCSI panel with a Portal Group ID and an Initiator Group ID.

On the Proxmox side, the Portal is the IP address of TrueNAS, the Pool is nvme/proxmox, the ZFS Block Size is 8k, and the Target is the IQN base name plus the target name (something like iqn.2023-09.site.untouchedwagons.storage:proxmox); then enter the API username, toggle Thin provision, and enter your API password twice.

I'm not sure how the ZFS block size interacts with the block size of my pool on the TrueNAS side. I'll see if I can make a detailed guide, maybe Saturday or some time next week.

How do I turn off emoticons?

[Edit] Okay, I did some testing using fio, and in sequential tests I get about 60% of the speed when writing to a VM disk stored on a 4-disk NVMe SSD pool provided over iSCSI, compared to a 2-disk SATA SSD pool. I don't think ZFS over iSCSI is going to be practical for me.
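For anyone who wants to reproduce the comparison, the sequential test was along these lines, run inside the VM against a disk backed by the iSCSI storage; the exact job parameters here are just an example, not my verbatim job:

Code:
fio --name=seqwrite --filename=/root/fio-test.dat --rw=write --bs=1M --size=4G \
    --ioengine=libaio --iodepth=16 --direct=1 --numjobs=1 --group_reporting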
 
Thank you for the guide. I have set it up successfully, but when I try to clone a VM, it keeps spitting out this error and the speed is super slow. The only difference is I'm using thin provisioning with an 8k block size.
Code:
qemu-img: iSCSI GET_LBA_STATUS failed at lba 0: SENSE KEY:ILLEGAL_REQUEST(5) ASCQ:INVALID_FIELD_IN_CDB(0x2400)
I found this: https://sourceforge.net/p/scst/mailman/message/35242474/ I'm not sure if it's relevant, as I don't really understand what Vladislav B is saying, but from what I can guess, qemu is asking the iSCSI target about the status of the block at LBA 0 and the target responds INVALID_FIELD_IN_CDB because it cannot give an answer; sort of like asking a blind man what the colour of your shirt is.
 
I wouldn't call that ZFS over iSCSI, because ZFS isn't being overlaid on iSCSI
The plugin has been part of PVE for many years and the name has stuck. It's too late to rename it at this point, especially given its very narrow applicability.


 
I found this: https://sourceforge.net/p/scst/mailman/message/35242474/ I'm not sure if it's relevant as I don't really understand what Vladislav B is saying but from what I can guess qemu is asking the iscsi target about the status of the block at LBA 0 and the target responds INVALID_FIELD_IN_CDB because it cannot give an answer, sort of like asking a blind man what the colour of your shirt is.
Thank you for your response. Do you experience slow speeds while cloning (I'm testing with 1GbE)? Provisioning is fast, but it's slow when I copy or clone the drive.
 
Thank you for your response. Do you experience slow speeds while cloning (I'm testing with 1GbE)? Provisioning is fast, but it's slow when I copy or clone the drive.

Yeah, any operation over iSCSI was quite slow for me, even over a 10G link. NFS is much faster.
 
Funny issue...
If I try to create or migrate a disk to the iSCSI storage, I receive this error:

Code:
create full clone of drive virtio0 (store01:vm-252-disk-0)
Warning: volblocksize (4096) is less than the default minimum block size (8192). To reduce wasted space a volblocksize of 8192 is recommended.
Use of uninitialized value $target_id in concatenation (.) or string at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 787.
Use of uninitialized value $target_id in numeric eq (==) at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 753.
Use of uninitialized value $target_id in concatenation (.) or string at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 787.
Use of uninitialized value $target_id in concatenation (.) or string at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 787.
TASK ERROR: storage migration failed: Unable to find the target id for iqn.storage-backup.ctl:vmfs at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 259.

I've no idea where to look...
BTW: the disk does somehow get created on the storage...
 
