ZFS over iSCSI with QNAP QuTS hero

Oct 13, 2020
Hello everyone,

I wanted to implement ZFS over iSCSI with my Proxmox cluster (4 nodes) and my QNAP SAN, connected via 25G Ethernet. I have found a lot of forum entries on relevant topics, but I am missing some general guidance.

What I found is that there needs to be a "plugin" (according to https://pve.proxmox.com/wiki/Storage:_ZFS_over_ISCSI) that handles the ZFS side via SSH on the storage device. The question I have is: how do I get the iSCSI target connected to the Proxmox hosts? In my setup, my attempts seem to be simply ignored without any further notice. I am not even sure whether the QNAP device is properly set up and willing to cooperate.

Any help would be highly appreciated.

Thank you and best regards,
Nico
 
The ZFS-over-iSCSI plugin takes care of both carving out ZFS volumes and the iSCSI export/connectivity. However, it has specific requirements regarding storage management and the type of iSCSI target daemon in use.
The native plugin may not be compatible with your particular NAS. I'd recommend searching Google/GitHub for experiences similar to yours.
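For reference, a ZFS-over-iSCSI storage definition in /etc/pve/storage.cfg looks roughly like the sketch below; the pool name, portal address, and IQN are placeholders, not values from this thread. Note that the wiki page linked above lists only comstar, istgt, iet, and LIO as supported `iscsiprovider` values, so a NAS running a different target daemon may not work with the native plugin at all, which could explain attempts being silently ignored.

```
# /etc/pve/storage.cfg -- hypothetical entry; pool, portal, and target
# are placeholders. The plugin manages ZFS over SSH on the portal host.
zfs: qnap-zfs
        pool tank
        blocksize 4k
        iscsiprovider LIO
        portal 192.168.1.10
        target iqn.2004-04.com.qnap:storage:target0
        content images
        sparse 1
```

The plugin also requires passwordless SSH from the Proxmox nodes to the storage host so it can run the zfs/zpool commands remotely.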

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
QuTS 5.2 uses the SCST target. It should be relatively easy to write a plugin to facilitate ZFS over iSCSI. I'll look for documentation on user-provided plugins; maybe I'll write it.
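For anyone considering the same: Proxmox VE does support out-of-tree storage plugins, which are Perl modules under the PVE::Storage::Custom namespace. The install path is real; the plugin filename below is hypothetical.

```
# Custom storage plugins are Perl modules placed under:
#   /usr/share/perl5/PVE/Storage/Custom/
# e.g. a user-written QNAP/SCST plugin (hypothetical name):
/usr/share/perl5/PVE/Storage/Custom/QnapSCSTPlugin.pm

# After installing, restart the PVE services so the plugin is loaded:
systemctl restart pvedaemon pveproxy
```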
Any updates? Curious if you wrote it. I am in the same quandary: I have a three-node Proxmox cluster, originally set up (and still configured the same way) with iSCSI coming off a QNAP 10Gb interface and a shared LUN over iSCSI for HA, only to realize that I can't snapshot the VMs. :( My bad for not researching. So I'm looking at maybe moving over to NFS, but I would like to stay the course with iSCSI; if that's not feasible, I'll migrate. Interested in thoughts and opinions.

Thanks!
 
I left it when it started to expand in scope. The basic command and control is easy enough to implement, but the orchestration for snapshot restoration and maintenance of the PVE inventory presented an open-ended investment of hours that I simply don't have the resources for. Moreover, this was a one-off situation: I am neither a reseller of QNAP nor do I have it in my line card, so I don't see a way to recoup that investment. I briefly thought about adding QNAP as a supported product, but discarded the idea quickly enough.
 
Completely understand. I've been back and forth on several things on this build; too much time versus reward. If you don't mind me asking one more question: did you give up on the whole thing, or did you just go NFS for the VMs? And if so, how was the performance, and how did it work out for you? I'm probably just going that route; this is too much time for what should be a simple thing.

Thanks in advance!
 
Oh, this particular QNAP was purchased by the customer without any input from my team.

I ended up setting up an iSCSI target and an NFS share on the same triple-rank RAIDZ2 (again, at the customer's behest). Both methods work, and I left it up to their IT staff to decide how best to use it. I don't ever use this type of hardware in my deployments, so take it FWIW. I did perform some quick-and-dirty performance comparisons, and GENERALLY iSCSI behaves better, but the admin overhead is a lot higher and you lose out on snapshots and all related functionality.
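A quick-and-dirty comparison along those lines can be sketched with fio; the mount point and file path below are placeholders, not values from this thread.

```
# Hypothetical 4k random-write test against the NFS-backed storage;
# repeat with --filename pointed at a file on the iSCSI-backed volume
# and compare IOPS/latency between the two runs.
fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite \
    --bs=4k --iodepth=32 --size=1G --runtime=30 --time_based \
    --filename=/mnt/pve/qnap-nfs/fio.test
```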

At this stage, if a customer requests traditional SAN storage, it's going to be VMware or XCP-ng, unless they choose Blockbridge, which adds the missing piece. I am aware that the PVE team is working on revamping the entire PVESM API, and that may change things, but not yet.