[TUTORIAL] Guide: Set up ZFS-over-iSCSI with PVE 5.x and FreeNAS 11+

anatoxin

New Member
May 24, 2019
Hi.

This question has been asked many times and the answers have been good but fragmented, and they rarely cover the FreeNAS setup part. I will not go into how to install FreeNAS or how to set up ZFS pools, but I will cover what's required to actually make this work.

So let's begin.

First we need to install some patches on the Proxmox nodes, since FreeNAS doesn't use the istgt provider anymore.
The patches create a new iSCSI provider called FreeNAS-API in the web GUI.

Thanks to GrandWazoo, who made these patches.

First of all we need to set up SSH keys to the FreeNAS box. The SSH connection needs to be on the same subnet as the iSCSI Portal, so if you are like me and have a separate VLAN and subnet for iSCSI, the SSH connection needs to be established to the iSCSI Portal IP and not to the LAN/management IP on the FreeNAS box.
The SSH connection is only used to list the ZFS pools.

1. Let's create the SSH keys on the Proxmox boxes. (The IP must match your iSCSI Portal IP.)
You only need to create the keys on one node if they are clustered, as the keys will replicate to the other nodes.

mkdir /etc/pve/priv/zfs
ssh-keygen -f /etc/pve/priv/zfs/192.168.1.1_id_rsa
ssh-copy-id -i /etc/pve/priv/zfs/192.168.1.1_id_rsa.pub root@192.168.1.1

2. Enable "Log in as root with password" under Services -> SSH on the FreeNAS box.

3. Make an SSH connection from every node to the iSCSI Portal IP (so the host key is accepted):

ssh -i /etc/pve/priv/zfs/192.168.1.1_id_rsa root@192.168.1.1
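
Since the SSH connection is only used to list the ZFS pools, you can also check that key-based login works non-interactively, for example (a generic zfs command, nothing plugin-specific):

ssh -i /etc/pve/priv/zfs/192.168.1.1_id_rsa root@192.168.1.1 zfs list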

4. Install the REST client on every node

apt-get install librest-client-perl git

5. Download the patches on every Proxmox node

git clone "link to the patches"
Note: as a new forum user I'm not allowed to paste links; take a look at GrandWazoo's page on GitHub
for the full path to the patches.

6. Install the patches on every Proxmox node

cd freenas-proxmox
patch -b /usr/share/pve-manager/js/pvemanagerlib.js < pve-manager/js/pvemanagerlib.js.patch
patch -b /usr/share/perl5/PVE/Storage/ZFSPlugin.pm < perl5/PVE/Storage/ZFSPlugin.pm.patch
patch -b /usr/share/pve-docs/api-viewer/apidoc.js < pve-docs/api-viewer/apidoc.js.patch



cp perl5/PVE/Storage/LunCmd/FreeNAS.pm /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm
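
Optionally, you can sanity-check that the patched Perl modules still compile before restarting anything (a plain perl -c syntax check; /usr/share/perl5 is already in Perl's module path on PVE):

perl -c /usr/share/perl5/PVE/Storage/ZFSPlugin.pm
perl -c /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm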

7. Restart the PVE services.

systemctl restart pvedaemon
systemctl restart pveproxy
systemctl restart pvestatd

Log out from the PVE web GUI, clear the browser cache, and log in again.
Now FreeNAS-API should be available as an iSCSI provider.

8. Create an iSCSI target on the FreeNAS box.
You don't need to create any extents, as the FreeNAS-API plugin will do this automatically when a disk is created for a VM.
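
If you later want to confirm that the plugin really creates the VM disks as zvols directly on the pool, you can list the volumes from the FreeNAS shell after creating a disk (generic ZFS command; 'tank' is a placeholder for your pool name):

zfs list -t volume -r tank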

9. Set up ZFS over iSCSI in the Proxmox GUI and choose FreeNAS-API as the provider.

ID: Whatever you want
Portal: iSCSI Portal IP on the FreeNAS box
Pool: Your ZFS pool name on the FreeNAS box (this needs to be the root pool and not an extent, as the VM disks will be created as their own zvols directly on the pool)
ZFS Block Size: 4k
Target: IQN of the FreeNAS box plus the target ID,
e.g. "iqn.2005-10.org.freenas.ctl:proxmox"
API use SSL: Unchecked
API Username: root
API IPv4 Host: iSCSI Portal IP on the FreeNAS box
API Password: root password on the FreeNAS box
Thin provision and Write cache are optional
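
For reference, the storage definition ends up in /etc/pve/storage.cfg and looks roughly like the sketch below. The freenas_* option names are what I recall from the plugin's README, so treat them as assumptions and check your patched ZFSPlugin.pm/FreeNAS.pm if they differ; 'tank' and the credentials are placeholders, the portal and target match the example values above.

zfs: freenas-iscsi
        blocksize 4k
        iscsiprovider freenas
        pool tank
        portal 192.168.1.1
        target iqn.2005-10.org.freenas.ctl:proxmox
        content images
        sparse 1
        freenas_user root
        freenas_password yourpassword
        freenas_use_ssl 0

Once added, a quick "pvesm status" on each node should list the new storage as active.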

Note: A pve-manager upgrade will replace the patched files, so I suggest you create a bash script
and run it on every node after an upgrade.

I have a script in the /root/freenas-proxmox folder that looks like this:

#!/bin/bash
# Re-apply the freenas-proxmox patches after a pve-manager upgrade
# and restart the PVE services so the changes take effect.
cd /root/freenas-proxmox || exit 1

patch -b /usr/share/pve-manager/js/pvemanagerlib.js < pve-manager/js/pvemanagerlib.js.patch
patch -b /usr/share/perl5/PVE/Storage/ZFSPlugin.pm < perl5/PVE/Storage/ZFSPlugin.pm.patch
patch -b /usr/share/pve-docs/api-viewer/apidoc.js < pve-docs/api-viewer/apidoc.js.patch

cp perl5/PVE/Storage/LunCmd/FreeNAS.pm /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm

systemctl restart pvedaemon
systemctl restart pveproxy
systemctl restart pvestatd
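
If the nodes are clustered, one way to re-apply the patches everywhere after an upgrade is a small loop over the nodes (the hostnames are placeholders and "repatch.sh" is just a name I'm using for the script above; it assumes the repo is cloned to /root/freenas-proxmox on each node as described earlier):

for node in pve1 pve2 pve3; do
    ssh root@$node 'cd /root/freenas-proxmox && bash ./repatch.sh'
done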


Hope this sums it up :)







 
The new target subsystem is called CTL, so why not just title it the same way in the PVE GUI? And another interesting question: is it planned to get this patch into the master branch?
 
Updates for the weary Proxmox/FreeNAS Internet traveler:

I am now running into an "iscsiadm: no session found." error which also says "iSCSI: Failed to connect to LUN : Failed to log in to target. Status: Authorization failure(514) ". If I find out how to fix that, I'll post.
 
Again, for the weary, FreeNAS-seeking-but-very-frustrated-because-no-one-actually-wrote-everything-up Proxmox user:

I believe I have solved the problem of the "iSCSI: Failed to connect to LUN : Failed to log in to target. Status: Authorization failure(514)" error listed above. As a comprehensive addendum to the fine tutorial provided by the OP, here is what solved my problems:

  • Configure the FreeNAS target using documentation like the one found here: https://www.ixsystems.com/documentation/freenas/11.2-U6/FreeNAS-11.2-U6-User-Guide_screen.pdf. In this guide, you need to do sections 11.5.1, 11.5.2, 11.5.3, and 11.5.5 at a minimum to set up your target.

  • Do not set a password on the SSH key you generate unless you have a way to supply the password every time you need to use the key

  • On this line "Target: IQN on the FreeNAS box and target ID", that means the syntax needs to be of the following format:

    iqn.<iSCSI-ok name you can supply>.ctl:<Name of the target you created in FreeNAS> (see more information here: https://pubs.vmware.com/vsphere-50/index.jsp?topic=/com.vmware.vsphere.storage.doc_50/GUID-686D92B6-A2B2-4944-8718-F1B74F6A2C53.html)

    Example (using default FreeNAS IQN): iqn.2005-10.org.freenas.ctl:TargetName

  • For our cluster, we set the initiators in our group (section 11.5.3 in the FreeNAS guide) under Sharing / iSCSI / Initiators to 'ALL', but restricted the authorized network to just the /24 it should be accessed on; it would be good to find a better way to do this

  • This tutorial, the GitHub page, and the Proxmox wiki do not discuss this critical step, which I finally figured out: you have to log into the target! I did not find a way to do this through the Proxmox GUI; I had to use the commands found below (many thanks to our friends at Fibre Village, who share the commands along with a direct explanation of what they do: http://fibrevillage.com/storage/205-iscsiadm-command-examples-on-linux)

    iscsiadm --mode discovery --op update --type sendtargets --portal <IP address of FreeNAS>   # must discover it first
    iscsiadm -m node -T <your target, which starts with iqn as discussed above> -p <FreeNAS IP address> -l   # <---- this is a lower case 'L'


    This command is equivalent to
    /etc/init.d/iscsi start
    Which calls
    /sbin/iscsiadm -m node --loginall=automatic

    Verify with:

    iscsiadm -m node      # shows discovered iSCSI nodes
    iscsiadm -m session   # shows active logged-in sessions, which will then allow you to power on your VMs
On this last step, I seek the community's wisdom: how does one configure Proxmox to log in automatically? Would it be just the same as having a standard Debian 10 instance log into an iSCSI target automatically, or is there a best practice?

Many thanks to Grand Wazoo at GitHub and Anatoxin for making this happen!
 
Hi,
Firstly, I would like to thank you for documenting this (very) nice Proxmox feature/plugin. Given your documentation, I am now progressing toward the final target :)
However, I am dealing with issues that make me think my configuration is not clean enough...

When I try to create/clone a VM, I get the following error message:
TASK ERROR: unable to create VM 106 - error with cfs lock 'storage-FreeNAS-SAN': Unable to find the target id for iqn.2005-10.org.freenas.ctl:proxmox-storage at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 149.

The VM is not created, but the related raw device is (under the Storage/Pools section). When I try to delete it, I get this message:
Could not find lu_name for zvol vm-106-disk-0 at /usr/share/perl5/PVE/Storage/ZFSPlugin.pm line 118. (500)

Any idea?
To clarify, I have not filled in the 'Extents' and 'Associated Targets' panels in FreeNAS: I assume these are automatically populated by the plugin.

Regards,
Frederic
 
All,

First off, thank you for the helpful write-up. I was able to get FreeNAS set up, and the iSCSI connection initiated and verified. I did have some trouble and frustrations along the way that I was able to get resolved. They were mostly due to me misinterpreting @anatoxin and @wits-zach's helpful tutorials.

Some thoughts I have on the caveats of ZFS over iSCSI that I feel are definitely worth mentioning:
  • You cannot put LXC containers on your ZFS over iSCSI drive, per
  • You will, however, be able to put VMs on your ZFS over iSCSI drive.
  • ZFS over iSCSI will give you the ability to take snapshots of your VMs on a remote drive.
  • I would recommend Ceph over ZFS over iSCSI, because on Ceph you can put both LXC containers and VMs, and take snapshots of each.
Since I was able to get it working, I've created my own guides to hopefully help the community avoid the same pitfalls I ran in to along the way. To avoid cluttering this thread and for my ease of formatting, I've attached them here as PDFs.
 

Attachments

  • Setting up iSCSI in FreeNAS.pdf
  • ZFS over iSCSI.pdf
What is the advantage of ZFS over iSCSI versus just exposing an iSCSI vol from FreeNAS that is worth these extra hoops of patching proxmox? (Asked as a relative newbie to this space).

Is it just snapshots?

It seems patching proxmox like this could easily lead to upgrade breakages down the line.

Also the link in the wiki seems to indicate this is fixed in FreeBSD 10.x. FreeNAS is on FreeBSD 11 (12 for TrueNAS Core). So are these patches even necessary anymore?
 
From my understanding, each VM will have its own zvol, so basically it's a volume unto itself.
Yes, for snapshots and rollback of an individual VM.

If my understanding is correct, the benefit of this is granular management of the VM storage: replication can be set at the TrueNAS/FreeNAS layer, independent of Proxmox, to replicate to another ZFS storage pool as an independent backup, giving a faster restore after a disaster.
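
As a rough illustration of that NAS-side replication (a generic ZFS send/receive sketch, not something the plugin configures for you; pool, snapshot, and host names are made up):

zfs snapshot -r tank@nightly-2021-06-01
zfs send -R tank@nightly-2021-06-01 | ssh backup-nas zfs receive -F backuppool/tank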

If anyone has better insight into this, feel free to correct me, as the above is my interpretation of the features of ZFS over iSCSI.

ta
 
Hello!
I have failed on pve-manager/6.3-4/0a38c56f (running kernel: 5.4.98-1-pve).
Installing from the deb, nothing happened; the FreeNAS iSCSI provider is absent in the UI.
Then I tried manually patching a clean Proxmox and it answered:

root@sol-pve1:~/freenas-proxmox# patch -b /usr/share/pve-manager/js/pvemanagerlib.js < pve-manager/js/pvemanagerlib.js.patch
patching file /usr/share/pve-manager/js/pvemanagerlib.js
Hunk #1 FAILED at 6183.
Hunk #2 FAILED at 32992.
Hunk #3 FAILED at 33004.
Hunk #4 succeeded at 46736 (offset 13711 lines).
Hunk #5 succeeded at 46753 with fuzz 1 (offset 13711 lines).
Hunk #6 FAILED at 33052.
Hunk #7 FAILED at 33067.
Hunk #8 succeeded at 46810 with fuzz 2 (offset 13711 lines).
Hunk #9 FAILED at 33109.
6 out of 9 hunks FAILED -- saving rejects to file /usr/share/pve-manager/js/pvemanagerlib.js.rej
root@sol-pve1:~/freenas-proxmox# patch -b /usr/share/perl5/PVE/Storage/ZFSPlugin.pm < perl5/PVE/Storage/ZFSPlugin.pm.patch
patching file /usr/share/perl5/PVE/Storage/ZFSPlugin.pm
Hunk #5 succeeded at 172 with fuzz 1 (offset 4 lines).
Hunk #6 succeeded at 199 (offset 4 lines).
Hunk #7 succeeded at 244 (offset 4 lines).
Hunk #8 succeeded at 280 (offset 4 lines).
root@sol-pve1:~/freenas-proxmox# patch -b /usr/share/pve-docs/api-viewer/apidoc.js < pve-docs/api-viewer/apidoc.js.patch
patching file /usr/share/pve-docs/api-viewer/apidoc.js
Hunk #1 succeeded at 39527 (offset 4526 lines).
Hunk #2 succeeded at 39731 (offset 4549 lines).
Hunk #3 succeeded at 40002 (offset 4609 lines).

Maybe I did something wrong? How can I fix it?
 
Just successfully tested on 6.4-8. Please note:

- Updated from FreeNAS 11.3 to TrueNAS Core 12.0-U4
- GrandWazoo seems to have made recent updates, so I deleted the freenas-proxmox repo I had cloned and re-cloned it
- Appears to be working A-OK
 
Just out of curiosity, if this was included in the main branch of PVE, would it become maintained by Proxmox from that point on?
Maintained in the sense that we test it and do not break it, yes, although it would be good if the original author kept an eye on it in case any changes become necessary (e.g. if an API changes, etc.).
Usually the storage plugins do not change *that* much over time, except when we add new features or refactor them.
 
Hi,
thanks for this write-up, it helped me set it up on TrueNAS SCALE and Proxmox 7.1.
The GUI changed a bit, but combining everything from all the posts helped me get it working.
Is there any tutorial on making this more secure? The password is shown in cleartext in Proxmox now, and using root seems like a bad idea?
I feel like I saw a forum post or blog about it a long time ago, but I can't find it.
 
How are you finding the experience so far?

interested in any feedback and insight you can provide :)

""Cheers
G
 
