FreeNAS backend for ZFS over iSCSI

lweidig

Wondering if this is a supported environment yet? I know that at one point in the development chain it was being looked at and worked on. We have toyed around with other back-end ZFS solutions and really have not been pleased with them. Admittedly we have almost NO Solaris experience, so all of the solutions based on it prove to be VERY difficult to maintain or troubleshoot when we have issues. We have tried Nexenta, but getting a commercial license seems too expensive.

We would also like something with a GUI if possible. They have really done some nice updates to FreeNAS lately, running an updated FreeBSD with much-enhanced driver support.

Really want this to be my "final" solution. Any thoughts on how to get this supported if it is not?
 
Yes, we tried that, but again the underlying OS scares me, as we have LITTLE internal knowledge of how it works. We are looking for either a Linux- or FreeBSD-based solution.
 
IMHO ZFS on Linux is too immature to use enterprise-wide. The ZFS implementation on FreeBSD is mature enough, but unfortunately the iSCSI implementation (ctld) is not. istgt is not suited in any way for enterprise usage. FreeNAS is unusable for Proxmox, since iSCSI in FreeNAS can only be configured through the FreeNAS GUI, which means you are unable to configure it from Proxmox.
 
I have to disagree with the general comments being made here; some of them are just false. FreeNAS, for one, has had an API since 9.2 that allows all of this to be created / viewed: http://api.freenas.org/index.html. We have used Openfiler, a Linux-based iSCSI solution, with great success and pushed NICs to nearly 100% capacity with very little tweaking. The problem, of course, is that the product is dead and not being developed, or even patched for security issues.
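
To give a flavour of it, here is a minimal sketch of querying that API from Python. The hostname and credentials are obviously placeholders, and the endpoint path and field names are my reading of the 9.x docs, so check api.freenas.org for the version you run:

```python
# Minimal sketch: list configured iSCSI targets via the FreeNAS REST API (v1.0).
# Hostname/credentials are placeholders; endpoint path and field names should be
# verified against api.freenas.org for your FreeNAS version.
import requests

FREENAS = "https://freenas.example.local"   # placeholder hostname
AUTH = ("root", "your-root-password")       # the API uses the web UI credentials

resp = requests.get(FREENAS + "/api/v1.0/services/iscsi/target/",
                    auth=AUTH,
                    verify=False)           # default install uses a self-signed cert
resp.raise_for_status()

for target in resp.json():
    print(target.get("iscsi_target_name"))
```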

We are each of course allowed our own opinions and I respect yours, I just tend to disagree.
 
I'm interested in this as well.

FreeNAS 9.10 has the ctld iSCSI provider now, not istgt. Can Proxmox use the ctld provider? If not, are there any plans to implement it? From what I can tell, that is the final hurdle.
 
Well, I recently tried using FreeNAS as a VM in Proxmox with a passed-through LSI SAS controller. This configuration had worked for me under ESXi since ESXi 5.0 a few years back, but I guess it just didn't like KVM, and it caused a bunch of checksum errors. I wound up exporting the pool and importing it natively on the host, and it appears to be working very well. I wouldn't be so quick to dismiss ZFS on Linux quite yet.

But I digress, as you are not talking about local storage, but rather remote storage, presumably to host your VM images.

Do you really need iSCSI? I know it is all the rage in enterprise IT circles, but I actually find it to be a quite terrible solution. I've always attributed its popularity more to some sort of mass hysteria and hype over false "best practice" than to any rational reasoning. I'd mount an NFS share directly for VM image storage over using iSCSI 100 times out of 100. iSCSI is more prone to data loss because it uses async writes. Furthermore, it is a nuisance to work with, as it requires creating disk images that can only be accessed by one system at a time, as opposed to NFS, which can be accessed by as many clients as needed at any given time. You also have to guess at the disk image's size in advance, and either use sparse images that grow as they get filled, with all their inefficiencies (constantly having to trim them, etc.), or use thick images populated in advance and waste a ludicrous amount of space.

I really never understood why iSCSI took off. IMHO, it is just about the worst remote drive system you could possibly use for any purpose.

I wouldn't touch iSCSI with the proverbial 39 and a half foot pole, if I had an option.
 
Thanks for the input. What about throughput? Is it as good as iSCSI? Also there are a few things that Proxmox can use with ZFS over iSCSI such as snapshot management, etc.
 
FreeNAS, for one, has had an API since 9.2 that allows all of this to be created / viewed
I have been studying the documentation, and it indicates that the API fully supports the requirements for Proxmox. However, it will require a complete rewrite of the plugin, and at the moment I do not have time for this (partly because I do not use FreeNAS myself, I am not as motivated as I could be :) ). If you decide to write such a plugin I would be willing to assist, test, and do code review.
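
To make that concrete, the core storage operations such a plugin needs (allocate a disk, publish it over iSCSI) map roughly onto API calls like the following. This is only a sketch of the idea, not plugin code; the pool, zvol, and target names are made up, and the payload fields should be verified against api.freenas.org:

```python
# Rough sketch of the API calls a FreeNAS-backed ZFS-over-iSCSI plugin would need:
# 1) create a zvol for the new disk, 2) publish it as an iSCSI extent,
# 3) map the extent to an existing target. All names and payload fields are
# illustrative placeholders; check them against the FreeNAS API documentation.
import requests

FREENAS = "https://freenas.example.local"   # placeholder
AUTH = ("root", "your-root-password")
POOL = "tank"                               # placeholder pool name

def api_post(path, payload):
    r = requests.post(FREENAS + path, auth=AUTH, json=payload, verify=False)
    r.raise_for_status()
    return r.json()

# 1) create a 32 GiB zvol to back the VM disk
api_post(f"/api/v1.0/storage/volume/{POOL}/zvols/",
         {"name": "vm-101-disk-0", "volsize": "32G"})

# 2) expose the zvol as a disk-type iSCSI extent
extent = api_post("/api/v1.0/services/iscsi/extent/",
                  {"iscsi_target_extent_name": "vm-101-disk-0",
                   "iscsi_target_extent_type": "Disk",
                   "iscsi_target_extent_disk": f"zvol/{POOL}/vm-101-disk-0"})

# 3) attach the extent to an existing target (target id 1 used as an example)
api_post("/api/v1.0/services/iscsi/targettoextent/",
         {"iscsi_target": 1,
          "iscsi_extent": extent["id"]})
```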
 
Thanks for the input. What about throughput? Is it as good as iSCSI? Also there are a few things that Proxmox can use with ZFS over iSCSI such as snapshot management, etc.
iSCSI provides far higher IOPS than NFS and has one less layer between the disk and the client than NFS.
 
Thanks for the input. What about throughput? Is it as good as iSCSI? Also there are a few things that Proxmox can use with ZFS over iSCSI such as snapshot management, etc.

In my own testing I have found that, with all things being equal, NFS provides throughput as good as, or slightly better than, iSCSI, but pay attention to the "all things being equal" part.

iSCSI uses async writes, which improves write performance by "lying" to the OS that has mounted it, telling it that data has been committed to disk when in fact it has only been received in memory. It then writes it to disk as soon as it is able. This provides better write speeds, but it means that there is a potential for a few seconds' worth of writes to get lost in case of sudden power loss or a kernel panic/crash. (A few seconds of data may not sound like much, until it corrupts a drive image or database...)

NFS can be configured to use either async or sync writes with ZFS. With async writes it behaves much like iSCSI, and there is a potential loss of a couple of seconds of write data. In async mode it performs similarly to, or maybe slightly better than, iSCSI in my experience. With sync writes on, it actually commits the written data to stable disk before informing the client OS that the data is written. This eliminates the risk of that second or two of data being lost in case of an unclean shutdown, but it comes at the expense of a performance penalty, as now you essentially don't have a write buffer.
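
To put the async/sync distinction in plain code terms (purely illustrative, nothing storage-appliance specific): an async-style write returns once the data is buffered, while a sync-style write does not return until the data has been flushed to stable storage.

```python
# Illustration of the async-vs-sync distinction described above.
import os

def async_style_write(path, data):
    with open(path, "wb") as f:
        f.write(data)        # returns once the data is in the page cache;
                             # a crash right now can still lose it

def sync_style_write(path, data):
    with open(path, "wb") as f:
        f.write(data)
        f.flush()            # push Python's buffer down to the kernel
        os.fsync(f.fileno()) # block until the kernel has it on stable storage;
                             # this is the guarantee sync writes give you
```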

With ZFS you can compensate for this performance loss experienced from sync writes by using a - so called - SLOG device. ZFS always maintains a log of writes it intends to perform, with enough data to recreate those writes in case of an unclean shutdown. Every single write is noted in the ZIL (ZFS Intent Log) and maintained there until that write is committed from memory to disk, at which point it is discarded. The ZIL is only read from on boot after an unclean shutdown in the case where writes need to be recreated.

Normally this ZIL is written to the hard drives in the pool, which can be slow. A SLOG is a "Separate LOG" device, essentially a dedicated SSD (well, a mirrored pair is usually recommended for production systems) which contains this ZIL. Since it only needs to hold a second or two of data, it can be very small, but it is recommended that it has high write endurance and is low latency to speed things up, and also that it is of an enterprise variety that has capacitors that keep it powered on long enough to commit writes from its internal buffers to the flash in case of sudden power loss.
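
For reference, the knobs involved on the ZFS side are the pool's log vdev and the dataset's sync property. Below is a rough sketch using the standard ZFS commands; the pool, dataset, and device names are placeholders for your own setup.

```python
# Sketch of the ZFS commands involved, wrapped in subprocess for illustration.
# Pool ("tank"), dataset ("tank/vmstore"), and device names are placeholders.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# add a mirrored SLOG (separate log device) to the pool
run(["zpool", "add", "tank", "log", "mirror", "/dev/ada4", "/dev/ada5"])

# force every write on the VM dataset to be a sync write (honoured via the ZIL/SLOG)
run(["zfs", "set", "sync=always", "tank/vmstore"])

# check the current setting
run(["zfs", "get", "sync", "tank/vmstore"])
```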

With a SLOG in place for the ZIL, sync writes can be much much faster than without one, but are still typically slightly slower than async writes. (nothing is faster than RAM, right? :p )


So, to reiterate: in my testing NFS performed equal to or slightly better than iSCSI when NFS was in async mode. In sync mode it can range from equivalent (with very good and expensive SLOG devices), to slightly slower (with typical SLOG devices), to much slower if the ZIL is in the pool with the hard drives. Keep in mind this is only for writes, though. Reads should be the same either way.

Also, keep in mind, this is my testing in my workloads. I'm not sure what your workloads look like, and would recommend you do your own performance tests.
 
iSCSI uses async writes
This is completely wrong. As with NFS, iSCSI can also be configured to run in both sync and async mode; this setting is even available in the Proxmox GUI. You might have missed it?
in my testing NFS performed equal to or slightly better than iSCSI when NFS was in async mode
I think you have only done testing with a typical file-server load. If you test with a typical database or OLTP load, iSCSI outperforms NFS by an order of magnitude.

You should also add to the equation that NFS has much higher client CPU utilization than iSCSI. This is mainly due to the fact that NFS works at the VFS level, while iSCSI works at the block level.
 
I have been studying the documentation, and it indicates that the API fully supports the requirements for Proxmox. However, it will require a complete rewrite of the plugin, and at the moment I do not have time for this (partly because I do not use FreeNAS myself, I am not as motivated as I could be :) ). If you decide to write such a plugin I would be willing to assist, test, and do code review.

I'll vote for that! :)
 
Good day to all.
I found a repo on GitHub by Andrew Beam, 'github.com beam/freenas-proxmox', that seems to be a nice fit for the FreeNAS solution. It does NOT use the standard SSH/SCP architecture that the other interfaces use; instead it uses the FreeNAS APIs to do all the work. It just needed some updates to work with FreeNAS 11, plus a few features to better fit the needs of the users (or me), so I forked it and made those changes.
They can be found at the following link...

github.com TheGrandWazoo/freenas-proxmox

I have it running on a cluster and migrated the drives of 14 VMs in the cluster from an Openfiler NFS service to FreeNAS 11 with ZFS over iSCSI without issues. I also did live migrations of VMs between the servers while using ZFS over iSCSI for FreeNAS and had no issues.

Now, we are human, so I am sure there might be some things that I overlooked. Maybe the Proxmox team could add this to their main repo and make it an included feature in their mainstream code.

One thing... when you apt-get update and apt-get dist-upgrade, you will have to re-patch the files necessary for things to work.

Check it out; I hope it all works well for you. And thanks to Andrew Beam for writing the original code.

Sorry about the links, new users are NOT allowed to put external links in a post.
 
