Hardware RAID or Software RAID w/ SSD cache?

casalicomputers

Renowned Member
Mar 14, 2015
89
3
73
Hello,
we're evaluating Proxmox VE 3.4 and its performance.
We'd like to know what are the pro and cons about having a proxmox host with an hardware RAID controller (with its own cache) against using ZFS software RAID with ZIL and L2ARC cache. Are these two solutions comparable in terms of performance and reliability?

Thanks,
Michele
 
The biggest difference is that RAID controllers do not check for data corruption (whatever the HDD returns, or silent corruption from e.g. a bad cable). ZFS is a copy-on-write (COW) file system, so it is protected against power loss (the file system does not get corrupted) and there is no need for a file system check like other file systems require.

Regarding performance, ZFS needs (and loves) RAM; its speed depends on it, since ZFS does a lot of work internally. You can speed up reads by adding more RAM for the ZFS cache (ARC). With a low-latency HDD/SSD/SSHD as a log device you can speed up sync writes.
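As a rough sketch of what that looks like in practice: adding a fast SSD as a dedicated log device (SLOG, which holds the ZIL) and another as an L2ARC read cache uses standard `zpool` commands. The pool name `tank` and the device paths below are only examples:

```shell
# Add a fast SSD as a dedicated ZIL/SLOG device to
# accelerate synchronous writes (device path is an example).
zpool add tank log /dev/disk/by-id/ata-EXAMPLE_SLOG_SSD

# Add a second SSD as an L2ARC read cache.
zpool add tank cache /dev/disk/by-id/ata-EXAMPLE_L2ARC_SSD

# Verify the new pool layout.
zpool status tank
```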

If you can't dedicate RAM to ZFS (something like 10 GB is a good start), you will suffer from slow ZFS performance and high iowait.
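On Linux you can also cap how much RAM the ARC takes, so it leaves room for your VMs. A minimal config sketch (the 8 GiB value is just an example, adjust to your machine):

```shell
# /etc/modprobe.d/zfs.conf -- cap the ARC at 8 GiB (value in bytes)
options zfs zfs_arc_max=8589934592

# Apply at runtime without a reboot (example value again):
# echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```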

With traditional RAID you can choose any file system you want (the ext series, XFS, ReiserFS, ...), each with its own capabilities.
 
Another important thing: you must use ECC RAM if you go for ZFS.

To your question:

If your VMs are stored on your Proxmox nodes and you have >= 32 GB of RAM, ZFS is an option. With such hardware, and provided you have at least 3 Proxmox nodes, you could also consider Ceph.

If your VMs are stored on remote storage, I would recommend using a hardware RAID controller with a proper BBU and sticking to the default Proxmox install. If the remote storage runs on fewer than 3 nodes, I would recommend using ZFS on the storage nodes. If it runs on 3 or more nodes, I would also consider Ceph.
 
By HDD controller, do you mean an HBA?

As a side note: if you have an HBA with connected expanders, you should only use SAS disks, since SATA disks connected through an expander are known to fail now and then, because expanders do not carry SATA control commands.
 
Hi,
are you sure?

On my Ceph nodes I have only SATA disks on a SAS HBA (LSI SAS9201-16i), and the disks are able to use normal SATA commands.

The NCQ queue depth is 32:
Code:
root@ceph-01:/home/ceph# cat /sys/block/sdb/device/queue_depth
32
root@ceph-01:/home/ceph# cat /sys/block/sdb/device/model
HGST HUS724040AL
About "proven to fail": only with the current firmware (crappy LSI). All firmware versions above P17 are defective - tested up to P20.
With firmware P17 the HBA runs very stably.
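For anyone who wants to check which firmware revision their LSI SAS2 HBA is running, LSI's sas2flash utility can list it (controller numbering depends on your system):

```shell
# List all LSI SAS2 controllers with their firmware/BIOS versions.
sas2flash -listall

# Show full details (including firmware version) for controller 0.
sas2flash -list -c 0
```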

Udo
 
Thanks for the replies.

I contacted Dell support to ask which controllers ship with their PowerEdge R730 servers; they are:
- H330 (LSI SAS3008)
- H730 (LSI SAS3108 w/ 1GB cache)
- H730P (LSI SAS3108 w/ 2GB cache)

Since Dell ships only RAID controllers (no plain HBAs) with these servers, I'm going to consider the smallest and cheapest one (H330) and configure the disks in passthrough/JBOD mode (they're the same thing, aren't they? :confused:). Dell support warned me that with the H330 I could suffer storage bottlenecks, and suggested using at least the H730.

Honestly, I think there would be no loss of performance using any of those controllers with ZFS, since I won't be using the hardware features (cache, BBU, ...) at all...
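One quick way to check whether the disks really are passed through (rather than hidden behind a virtual RAID device) is to see if smartctl can talk to the physical disk directly; the device path below is an example:

```shell
# In true passthrough/JBOD mode the physical disk identifies itself directly:
smartctl -i /dev/sda

# Behind a MegaRAID virtual disk you would instead need the controller
# passthrough syntax, e.g.:
# smartctl -i -d megaraid,0 /dev/sda
```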
What do you think?
 