> I already know all the warnings regarding this configuration, but since in most cases the references mentioned are experiments on small home labs, issues on cheap hardware and so on, I would like to cover this topic in the "enterprise servers" context.

I don't think a single volume created with HW RAID can be repaired by a ZFS scrub. Is that configuration really okay...?
It seems to me like we're losing one of ZFS's major advantages...
https://openzfs.github.io/openzfs-docs/man/master/8/zpool-scrub.8.html
> For replicated (mirror, raidz, or draid) devices, ZFS automatically repairs any damage discovered during the scrub.
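To make that concrete, here is a rough, destructive demo sketch (test machine only, run as root; the pool names, image paths and sizes are my own arbitrary choices, and `zpool scrub -w` assumes OpenZFS 2.0+): corrupt one pool with no redundancy and one mirror, then scrub both.

```python
#!/usr/bin/env python3
"""Destructive demo -- test machine only, run as root.
Shows that `zpool scrub` self-heals a mirror but can only *detect*
damage on a pool with a single (e.g. HW RAID) vdev. Pool names,
paths and sizes are arbitrary assumptions."""
import os
import subprocess

def sh(cmd, fatal=True):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=fatal)

os.makedirs("/tmp/zfsdemo", exist_ok=True)
for img in ("single.img", "m1.img", "m2.img"):
    sh(f"truncate -s 256M /tmp/zfsdemo/{img}")

# One pool with no redundancy (stand-in for a single HW RAID volume),
# one real ZFS mirror.
sh("zpool create demosingle /tmp/zfsdemo/single.img")
sh("zpool create demomirror mirror /tmp/zfsdemo/m1.img /tmp/zfsdemo/m2.img")
sh("dd if=/dev/urandom of=/demosingle/data bs=1M count=64")
sh("dd if=/dev/urandom of=/demomirror/data bs=1M count=64")
sh("zpool export demosingle")
sh("zpool export demomirror")

# Simulate silent corruption below ZFS: overwrite a chunk of one
# backing file per pool, well past the front vdev labels.
for img in ("single.img", "m1.img"):
    sh(f"dd if=/dev/urandom of=/tmp/zfsdemo/{img} "
       "bs=1M seek=16 count=32 conv=notrunc")

sh("zpool import -d /tmp/zfsdemo demosingle")
sh("zpool import -d /tmp/zfsdemo demomirror")
sh("zpool scrub -w demosingle", fatal=False)  # -w waits for completion
sh("zpool scrub -w demomirror")

# Expectation: demomirror repairs everything from the intact side;
# demosingle lists permanent errors it cannot fix.
sh("zpool status -v demosingle", fatal=False)
sh("zpool status -v demomirror")
# Cleanup when done: zpool destroy demosingle && zpool destroy demomirror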
The ZFS documentation has a nice summary of the reasons not to use ZFS on top of HW RAID:
https://openzfs.github.io/openzfs-docs/Performance and Tuning/Hardware.html#hardware-raid-controllers
So it's more a decision of reliability vs. performance.
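If you want to quickly check which of your own pools are in that situation, a crude sketch along these lines might help (the vdev-name matching against `zpool status` output is a best-effort heuristic of mine, and pools mixing redundant and non-redundant vdevs would need a closer look):

```python
#!/usr/bin/env python3
"""Crude audit: flag imported pools whose `zpool status` output shows
no mirror/raidz/draid vdev, i.e. pools where a scrub can detect but
not repair data errors. The string matching is a rough heuristic."""
import subprocess

def run(*args):
    return subprocess.run(args, capture_output=True, text=True,
                          check=True).stdout

for pool in run("zpool", "list", "-H", "-o", "name").split():
    status = run("zpool", "status", pool)
    redundant = any(t in status for t in ("mirror-", "raidz", "draid"))
    verdict = "can self-heal" if redundant else "can only DETECT, not repair"
    print(f"{pool}: scrub {verdict}")
```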
But thanks for the detailed benchmarks.
edit: If you are currently using ZFS and are not mirroring on top of hardware RAID, or if you have no plans to go back to hardware RAID, you can ignore this.