Suggested RAID/HD Controller card for ZFS

mmenaz

Hi, I've read that ZFS needs "direct control" of the underlying storage, and that onboard SATA controllers often perform poorly (cheap chips...). Also, some boards (e.g. the Intel S2400SC server board) are SATA2 only (3Gb/s).
So I need a controller expansion card, and a cheap one, since there is no need for RAID, a BBU, or onboard cache (ZFS uses plenty of RAM plus an optional L2ARC cache on SSD for that).
I've read that the controller needs to support "JBOD", so I bought an Adaptec 6805E.
But I've discovered that: a) disks need to be initialized before they are seen (so Adaptec writes some metadata on them... does that break portability to other brands' controllers / onboard SATA channels?); b) in the ROM configuration utility (Ctrl+A at boot), with disks set as JBOD I see no option for the controller's onboard cache (128MB), so I can't disable it or check whether it is in write-through mode.
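If the ROM utility hides it, Adaptec's arcconf command-line tool can usually show (and sometimes change) the cache settings from a running Linux system. A rough sketch, assuming arcconf is installed, the card is controller 1, and a logical device 1 exists; verify the exact syntax against your arcconf version:

```
# Dump the full controller/logical-device configuration, cache settings included
arcconf GETCONFIG 1 AL

# Example only: force write-through (WT) instead of write-back on logical device 1
arcconf SETCACHE 1 LOGICALDRIVE 1 WT
```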
Connecting a JBOD, ext4-formatted disk to another computer, I can access it, though, so the "initialization" doesn't seem to be a real problem.
What I'm asking is:
a) What controller do you suggest for ZFS that supports 8 disks and is >= 6Gb/s? (I need 4 disks for RAID10, one SSD for cache/log, and one big disk for VM backups = at least 6 disks connected.)
b) Is the Adaptec 6805E set as JBOD OK? smartctl -i gives me disk data for each of the connected /dev/sdX devices (ZFS needs access to that info, right? See the quick check below.)
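For the smartctl check in (b), a loop like this confirms each disk reports its own identity rather than the controller's (the device glob is an example; adjust it to your attached disks):

```
# Confirm each passed-through disk reports its real model and serial number
for d in /dev/sd[a-f]; do
    echo "== $d =="
    smartctl -i "$d" | grep -E 'Device Model|Serial Number'
done
```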

Any other tips for my "low cost" server? E.g. Adaptec Series 7/8 cards have an "HBA mode" that AFAIU would be perfect, but they cost around 900 euros, which is too much (part of the point of using ZFS is to save the money a high-end RAID controller with BBU would cost).
Thanks a lot

LAST MINUTE: argh, if I set the 6805E to JBOD, the disks are NOT bootable (the controller doesn't even appear in the motherboard's list of bootable devices, while it does if I create e.g. a RAID1 volume). This is driving me crazy.
 
Thanks for the suggestion. So this supports JBOD, and is booting from the JBOD disks supported? (I.e. with only 4 disks attached, can I install Proxmox as ZFS RAID10 and boot from it?) Anything else I should be aware of about the controller settings (e.g. is it not called JBOD but something else? What about the controller cache settings? etc.). Thanks a lot.
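For reference, the layout being described (striped mirrors plus an SSD split between log and cache) looks roughly like this at the zpool level. The Proxmox installer builds the RAID10 pool itself when you select ZFS; the pool name and by-id paths here are placeholders:

```
# RAID10 = two striped mirror vdevs; prefer /dev/disk/by-id paths so
# device names survive reboots and controller swaps
zpool create tank \
    mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# SSD partitions as SLOG (log) and L2ARC (cache)
zpool add tank log   /dev/disk/by-id/ata-SSD-part1
zpool add tank cache /dev/disk/by-id/ata-SSD-part2
```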
 
I like the LSI controllers, both RAID and non-RAID. I have plenty of both in various sizes (4 and 8 disks). My Proxmox servers run an LSI 9300-4i (4-port internal, PCIe 3.0) 12Gb/s SAS / 6Gb/s SATA controller attached to Seagate SAS drives. ZFS runs quite nicely on it. I put 64GB of RAM in them, but in retrospect I should have gone with at least 128GB. My goal with ZFS servers is to keep the disk working set in RAM and not bother with a cache drive.
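On the RAM point: with ZFS on Linux the ARC size can be capped via a module option so it doesn't compete with VM memory. A minimal sketch (the 48 GiB value is only an example; size it to your workload):

```
# /etc/modprobe.d/zfs.conf
# Cap the ARC at 48 GiB (value is in bytes); the rest stays free for VMs
options zfs zfs_arc_max=51539607552
```

After editing, run update-initramfs -u and reboot so the cap applies from boot.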
 
Do you use this controller in "IT" mode? Can you boot from it in that setup? Do you have to flash it with special firmware, or does it work "out of the box"? Lastly, the SAS connectors look different from the ones in my Intel server, or those on the Adaptec or IBM cards above; is that a problem, or are there cables that convert between the two formats?
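(For what it's worth: on LSI/Avago cards the installed firmware personality can be read with the vendor's sasXflash utility. A sketch, assuming a SAS3-generation card like the 9300-4i and the sas3flash tool; SAS2-era cards such as the 9211 use sas2flash instead:)

```
# List attached LSI adapters with their firmware and BIOS versions;
# the reported firmware/product ID distinguishes IT from IR images
sas3flash -listall
```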
 
Hi,
I use the Adaptec ASA-6805H at home and it works perfectly out of the box.
 
Just a small footnote: I believe some RAID controllers, when you tell them to expose disks in "JBOD" mode, actually create multiple single-disk RAID0 volumes. I.e., strictly speaking it is not true JBOD, as the RAID controller is still involved.

I know that in the past, in some configurations, this has meant unfortunate behaviour for ZFS on those controllers. I have read many forum posts despairing of certain LSI controllers with BSD NAS distros (FreeNAS, etc.) when ZFS runs on top of these pseudo-JBOD RAID0 disks: "they work great until things go badly and you lose everything without warning."

I believe the problem shows up as confusion over who manages what in terms of disk I/O cache behaviour(?): ZFS assumes certain things, as does the RAID controller, and if the stars align badly you have a bad day. IIRC the end point, if you are unlucky, is a kernel panic / a hung NAS box and a corrupted ZFS volume with nothing recoverable in it; if you are lucky, it just needs a hard reboot and works again "until the next time" it panics and hangs. But that kind of instability in a storage target can be... annoying... especially on a production system.

This is not to say that all LSI controllers do this. To be honest, I am not sure which ones are 'guilty'. But it is a known issue with some units, for sure, and a bit of digging should turn up which ones are affected.
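One rough way to check from the OS side (my own heuristic, not a definitive test): on true passthrough the kernel sees the physical drive's identity, while on pseudo-JBOD RAID0 the model/serial fields often show the controller's logical volume instead:

```
# MODEL/SERIAL should show the actual drives, not the controller's volumes
lsblk -d -o NAME,MODEL,SERIAL,SIZE
```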

Certainly I am also familiar with the recommendation earlier in this thread that the IBM controller is said to be a great unit to use with ZFS. I have no experience with it myself, however.

Broadly speaking, any 'simple' non-RAID controller card that allows suitable fan-out for disks (i.e. a basic SAS/SATA HBA with fan-out support) and really does plain JBOD should be a decent candidate.

Anyhoo. Just wanted to mention this 'little topic' since it is possibly relevant, in case it is something you had not yet heard.


Tim
 
