ZFS vs Hardware RAID

Alessandro 123

Well-Known Member
May 22, 2016
Has anyone done a performance comparison?

I haven't had time to do these tests on my own (as written in a different thread).

A comparison between RAID1 (HW and ZFS) and RAID6 (HW and ZFS) would be nice.
RAID10 should not affect the write performance at all.
 
You will not use ZFS for its superior speed, you will use it for its integrity compared to a hardware RAID controller (e.g. silent data corruption protection, faster resync speed, on-the-fly repair, etc.).
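The on-the-fly repair is driven by the checksums ZFS keeps for every block; you can also trigger a full verification pass yourself, roughly like this (a sketch, the pool name "tank" is made up):

  # a scrub reads every block in the pool and verifies its checksum,
  # repairing from redundancy where it can
  zpool scrub tank
  # shows scrub progress and any checksum errors found/repaired
  zpool status tank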
 
You will not use ZFS for its superior speed, you will use it for its integrity compared to a hardware RAID controller (e.g. silent data corruption protection, faster resync speed, on-the-fly repair, etc.).
Yes, you need a little more hardware for ZFS, but not a lot. What's important, I think, is the ARC/cache/log. We have no performance problems. In our tests ZFS was much faster because of the filesystem compression... so we never use HW RAID. But yes, it depends on your hardware.

My personal opinion is that ZFS is the easiest and most elegant way. So put more disks and cache in and use ZFS ;)
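(For reference, the compression mentioned above is just a dataset property; roughly something like this, with made-up pool/dataset names:)

  # enable lz4 compression and check how much space it actually saves
  zfs set compression=lz4 rpool/data
  zfs get compression,compressratio rpool/data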
 
I agree. IMHO, I'm a big fan of the kernel developers (not directly related to ZFS), so I really prefer mdadm to hardware RAID.
All hardware RAID controllers have proprietary firmware developed by each vendor, nobody can know which bugs or issues could arise, and all hardware controllers are much less flexible than software RAID.
If you lose the hardware controller, you have to pray that you can replace it with another, identical one. You should use the same firmware on both controllers. If the broken controller had an older firmware, there is a risk that the newer firmware isn't able to load the older configuration stored on the disks.

The only drawback of ZFS is its inability to add disks to an existing RAIDZ volume. mdadm is able to do that: I can grow an existing RAID-5/RAID-6 array by adding single disks. ZFS can't. If you want to extend a RAIDZ-2 you have to add 4 more disks.

Stupid question, as I'm new to ZFS: the RAID-Z/mirror configuration is stored on the disks like with mdadm, right? If everything goes bad, will I be able to move the disks between servers and start them up like with mdadm?
 
The only drawback of ZFS is its inability to add disks to an existing RAIDZ volume. mdadm is able to do that: I can grow an existing RAID-5/RAID-6 array by adding single disks. ZFS can't. If you want to extend a RAIDZ-2 you have to add 4 more disks.

Yes, that is so. Yet it is the same with most hardware RAID controllers, even SANs, which can only grow this way. It also makes total sense, because often you do a RAID10 or RAID50, so you have to add at least 2 disks for RAID1 or 3 for RAID5.

Stupid question, as I'm new to ZFS: the RAID-Z/mirror configuration is stored on the disks like with mdadm, right? If everything goes bad, will I be able to move the disks between servers and start them up like with mdadm?

Of course you can. It is also "migratable" between ZFS implementations, e.g. I moved a pool from FreeBSD to Linux without a problem. I can also open some pools on MacOS (it has a lower ZFS version, therefore not all features are available and the pool is therefore not mountable there, but if you plan ahead it works).
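The whole layout lives in labels on the disks themselves, so the move is roughly this (a sketch, "tank" is a made-up pool name):

  # on the old server: cleanly detach the pool
  zpool export tank
  # on the new server: list the pools found on the attached disks, then import
  zpool import
  zpool import tank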
 
Yes, that is so. Yet it is the same with most hardware RAID controllers, even SANs, which can only grow this way. It also makes total sense, because often you do a RAID10 or RAID50, so you have to add at least 2 disks for RAID1 or 3 for RAID5.

I'm sure that with "most hardware RAID controllers" you are referring to less than 20% of the market share.
All LSI, DELL, HP SmartArray, EqualLogic, IBM/Lenovo and LeftHand controllers are able to grow a RAID-5/6 by adding a single disk.
These controllers are almost 80% of the market share.
 
I do not agree with only "20% of market share". Most hardware RAID controllers I encounter do not even have RAID5 (either as a possibility or enabled via licensing), because they do not need it. Almost all servers I work on are simple 2-disk boxes with FC or iSCSI and at least one connected SAN, so there is no need for that. I seldom encounter the big servers we used to know 10 years ago with a lot of local disks. Maybe the "market share" depends on the environment you're working in, but at least for me and the companies I work with and for (25+ employees) I don't "see" those.
 
The only drawback of ZFS is its inability to add disks to an existing RAIDZ volume. mdadm is able to do that: I can grow an existing RAID-5/RAID-6 array by adding single disks. ZFS can't. If you want to extend a RAIDZ-2 you have to add 4 more disks.
You are mixing things up here. It is always possible to add a vdev to a pool, but if you want to keep the same RAID layout there are limits on the new vdev. This is no different from mdadm or any other RAID software. What you are referring to is extending an existing vdev (a hardware RAID controller can be compared to a single-vdev pool), which ZFS indeed does not support at the moment; it is on the todo list for the OpenZFS developers, but it is very complicated. This article explains it quite well -> http://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html
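In other words, something like the first command below is always fine, while squeezing a single extra disk into an existing raidz2 vdev is not (a sketch; pool and disk names are made up):

  # growing the pool by adding a whole new vdev works
  zpool add tank raidz2 sde sdf sdg sdh
  # growing an md RAID5/6 one disk at a time, for comparison
  mdadm --add /dev/md0 /dev/sde
  mdadm --grow /dev/md0 --raid-devices=5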
 
You are mixing things up here. It is always possible to add a vdev to a pool, but if you want to keep the same RAID layout there are limits on the new vdev. This is no different from mdadm or any other RAID software. What you are referring to is extending an existing vdev (a hardware RAID controller can be compared to a single-vdev pool), which ZFS indeed does not support at the moment; it is on the todo list for the OpenZFS developers, but it is very complicated. This article explains it quite well -> http://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html

Yes, I was referring to expanding a vdev by adding single disks.
 
But generally, we decided to trade some minor performance glitches compared to a controller for all the other pros of ZFS:

- flexibility
- costs (the LSI CacheCade/Nytro thing is friggin' expensive)
- again, flexibility
- well proven to scale out heavily (just google for ZFS best practices with 99 disks ;) )
- no data lost since we've been using it, even with hardware failures (we once lost a huge amount of data to a failed RAID controller)
- SNAPSHOTS! (our backup loves it; see the sketch after this list)
... probably many more...
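On the snapshot point, our backup is basically just something like this (a rough sketch; the dataset and backup pool names are made up):

  # hypothetical dataset "tank/vm": take a snapshot and stream it
  # to a backup pool (could just as well be piped through ssh)
  zfs snapshot tank/vm@nightly
  zfs send tank/vm@nightly | zfs recv backup/vm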
 
- no data lost since we've been using it, even with hardware failures (we once lost a huge amount of data to a failed RAID controller)

Can you detail this? Which controller, which kind of failure, and so on... as I'm using hardware RAID almost everywhere :)
 
If I understood properly, ZFS is faster than hardware RAID?
Which RAID level are you using?

Uh... good catch ;) I can't remember exactly, just that I did the tests with equal levels. My best guess is either RAID5 vs RAID-Z1 or RAID6 vs RAID-Z2.

As always, benchmarks are way too generic... performance really depends on your workload (file server / database / ...). But since we've been using ZFS, I/O wait has never been a bottleneck.
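If you want to compare setups yourself, the only numbers I'd trust come from something resembling your own workload, e.g. a quick fio run against both (a sketch; the path and sizes are made up):

  # random 4k writes against a test directory on the pool/array
  fio --name=randwrite --directory=/tank/test --rw=randwrite \
      --bs=4k --size=2G --numjobs=4 --runtime=60 --time_based \
      --ioengine=libaio --group_reporting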
 
Can you detail this? Which controller, which kind of failure, and so on... as I'm using hardware RAID almost everywhere :)

It was some old 3ware controller which just died, and unfortunately the spare controller from stock was a newer hardware revision which was unable to read the layout (and it was not possible to downgrade the firmware). It took us some days to restore the backup from tape, which I never ever want to do again ;)
 
It was some old 3ware controller which just died, and unfortunately the spare controller from stock was a newer hardware revision which was unable to read the layout (and it was not possible to downgrade the firmware). It took us some days to restore the backup from tape, which I never ever want to do again ;)

I don't use 3ware (but LSI, same vendor), and this is exactly what I would like to avoid.
But currently I use XenServer (I still curse the day I chose that stupid software), which doesn't support any software RAID. I hope to move everything to PVE with ZFS soon, but the VM migration is not easy as some conversions are needed.
 
