Dell PowerEdge direct NVMe vs. PERC H9xx Hardware RAID

Nov 11, 2025
Hello everyone,

In the past we've always used Dell PERC hardware RAID controllers with Microsoft Hyper-V and VMware, but we recently switched to Proxmox and started using ZFS in a lot of cases (with the RAID controller set to non-RAID mode), where the performance is just better. For a new Dell PowerEdge R660xs server, I'm wondering what would be better in terms of performance when using ZFS in RAID-10 (ashift=12, compression on, and default ARC settings):
  • Two physical CPUs with the NVMe lanes connected directly to the CPUs?
    • Should theoretically be faster.
      • The drives will likely report x4 lanes in this case.
      • Perhaps NUMA could be a performance bottleneck?
      • System RAM should be faster than the RAID controller's onboard cache memory?
  • A Dell PERC H965i RAID controller with write cache?
    • I noticed that the Dell PERC H965i controller puts the NVMe SSDs in x2 mode.
    • In theory, NUMA / the CPU interconnect could be a performance bottleneck here too, as processor 2 would need to go through processor 1 to reach the hardware RAID controller (or vice versa, depending on which CPU's PCIe lanes the RAID controller is connected to)?
Note that the performance is the most important aspect in this case.
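For reference, the kind of pool I mean would be created roughly like this (a minimal sketch; the pool name, device names, and the four-disk layout are just placeholders, and lz4 stands in for "compression on"):

    # striped mirrors ("RAID-10") with 4 KiB sectors, example devices
    zpool create -o ashift=12 tank \
        mirror /dev/nvme0n1 /dev/nvme1n1 \
        mirror /dev/nvme2n1 /dev/nvme3n1
    zfs set compression=lz4 tank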
 
Performance beyond your requirements is pointless. Establish your minimum acceptable performance, then benchmark with ZFS, since ZFS gives you the full spectrum of features (snapshots, inline compression, file-aware checksums, etc.). If you are able to reach your requirements, you're done; if not, then proceed to benchmark the RAID controller. It is entirely possible that neither will satisfy the requirement and that you will need a different solution altogether.
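A quick way to establish that baseline is to run fio against a dataset on the pool and compare the numbers with your requirement; something along these lines (the directory, block size, and job counts are only illustrative):

    # 4K random write with synchronous I/O, 60 seconds, aggregated results
    fio --name=randwrite --directory=/tank/bench --rw=randwrite \
        --bs=4k --size=4G --numjobs=4 --iodepth=16 --ioengine=libaio \
        --sync=1 --time_based --runtime=60 --group_reporting

Run the same job on the hardware RAID volume and compare each result against your minimum, not against each other.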
 
Our finding is that, for the performance we need, ZFS is faster than hardware RAID, especially with RAID10. I was just wondering about the experience of other users. The snapshots and other features are a nice enhancement too (which we make use of).
 

ZFS works hard to ensure the integrity of the data it handles. This usually requires several actual writes per single write command (possible "write amplification", plus handling metadata, depending on the pool architecture).

A stupid RAID controller which ignores this and has a good battery-backed write cache is probably always faster than ZFS.

That said... I prefer and use ZFS wherever I can. The multitude of features (guaranteed integrity, redundancy, technically cheap snapshots, transparent compression, zfs send/receive, ..., ...) is much more important to me.

As usual: your mileage may vary :-)
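If you want to see that amplification for yourself, one rough approach is to watch what the pool physically writes while a known amount of synchronous data goes in (pool name and path are placeholders; random data is used so compression doesn't skew the comparison):

    # terminal 1: per-vdev write bandwidth, refreshed every second
    zpool iostat -v tank 1
    # terminal 2: write 1 GiB with sync semantics, then compare with what iostat reported
    dd if=/dev/urandom of=/tank/bench/testfile bs=1M count=1024 oflag=sync

The gap between the 1 GiB you wrote and what the vdevs report is the overhead being discussed here.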
 
Per my earlier testing in several environments, I found that ZFS is actually faster in most cases, because the hardware RAID controller only exposes x2 lanes per drive (it supports 8 drives, and 8 × x2 equals 16 lanes, a full PCIe slot). I also found that the IOPS are slightly higher and the random read/write is faster, so ZFS it's going to be.
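For anyone who wants to check the negotiated lane width on their own box, lspci reports it per NVMe device (the PCI address below is just an example; look yours up with the first command):

    # find the PCI addresses of the NVMe controllers
    lspci | grep -i 'non-volatile memory'
    # show maximum (LnkCap) and negotiated (LnkSta) link speed/width for one of them
    sudo lspci -s 0000:41:00.0 -vv | grep -E 'LnkCap:|LnkSta:'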
 
Based on our experience testing the HBA9500 with Kioxia NVMe SSDs, the Tri-Mode controller does not offer any speed improvement over SAS when using NVMe; in fact, it is slower. Therefore, there is no point in purchasing NVMe storage specifically for the Tri-Mode controller.
 
RAID controllers have been outdated tech for about 20 years now.

If your RAID controller is faster than ZFS, it is lying to you - typically about having committed your data to disk. RAID1 would require redundant controllers, with redundant RAM and redundant batteries (those things do exist, typically on expensive external RAID systems); otherwise it is just more expensive RAID0, because at some point one of those things THAT STORES YOUR DATA (but not redundantly) is going to fail, typically during or after a power outage. I should post a picture of the RAID controller with a bloated lithium battery.

And have you ever tried to recover from a proprietary RAID controller failure? Now you have to scour eBay or Amazon for an exact match and hope the config was saved on the disks and not in an on-board chip.