[SOLVED] Hardware Raid or ZFS

Yes, please enjoy. I once set up a couple of servers, each with 16x Intel P4600 1.6TB U.2 NVMe, and ran lots of parallel ZFS config benchmarks (2TB per test, about 2min each). The best result was reformatting the NVMe namespaces from 512B to 4kB sectors and using ashift=13 (18GB/s write, 19GB/s read), so I just share that. It's not a rule, I only say I would do it that way, since in sum the results represent multiple tens of TB written and over 100TB read. This can still vary between NVMe models, but it's an evaluated measurement.
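A minimal sketch of that setup, assuming nvme-cli is installed; the 4kB LBA format index (here 1) differs per drive, so check the listing first, and the pool name and device paths are placeholders:

    # list the LBA formats the namespace supports (look for a 4096-byte data size)
    nvme id-ns /dev/nvme0n1 -H | grep "LBA Format"
    # reformat the namespace to 4kB sectors -- this DESTROYS all data on it
    nvme format /dev/nvme0n1 --lbaf=1
    # create a pool with ashift=13 (8kB allocation size, as benchmarked above)
    zpool create -o ashift=13 tank mirror /dev/nvme0n1 /dev/nvme1n1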
 
If the speed of restoration is crucial, avoid hardware RAID5. Use RAIDZ or RAIDZ2 with ZFS instead for better overall performance.
Rebuilding a 16TB HDD in hardware RAID6 on a production (!!) fileserver for data and apps takes about 25.5h, and this doesn't change with the number of disks in the RAID set or with how full the filesystem is; heavy load can add about 1h.
On a production fileserver with 24x 16TB HDDs in 4 raidz2 vdevs of 6 disks each, a 349TB pool, resilvering a disk at 100TB / 28% used took 15h; a further disk, when the pool held 114TB / 32% used, took 32h. You can compute the resilver time yourself for a pool that is two or three times as full.
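For reference, a layout like that could be created roughly as below; a sketch only, 'tank' and the short device names are placeholders (use /dev/disk/by-id paths in practice):

    # 24 disks as 4 raidz2 vdevs of 6 disks each (double parity per vdev)
    zpool create -o ashift=12 tank \
        raidz2 d1  d2  d3  d4  d5  d6 \
        raidz2 d7  d8  d9  d10 d11 d12 \
        raidz2 d13 d14 d15 d16 d17 d18 \
        raidz2 d19 d20 d21 d22 d23 d24
    # after replacing a disk, watch the resilver progress and time estimate
    zpool status tank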
 
Next week ZFS 2.3 will be released, hmm, so maybe it will be available in PVE with 8.4 next ... ??
 
This is all getting a bit over my head, but what I can tell you is that I used an old spare motherboard (https://www.asrockrack.com/general/productdetail.asp?Model=D1541D4U-2T8R#Specifications), 64GB ECC RAM, and 4x Kingston DC600M Series 3.84TB in a ZFS striped mirror, as recommended by a Proxmox admin.
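For anyone curious, a striped mirror (RAID10-style) over four SSDs could be built roughly like this; a sketch, 'tank' and the sdX names are placeholders (by-id paths are safer):

    # two mirrored pairs, striped together: half the raw capacity, fast resilvers
    zpool create tank \
        mirror /dev/sda /dev/sdb \
        mirror /dev/sdc /dev/sdd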

I migrated a 139GB VM from my home to a datacenter, i.e. restored it from PBS, and it took just over half an hour. That will do me just perfectly :) I do have a 2Gb/2Gb connection to do it over, though.
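For context, that kind of PBS restore can also be done from the PVE CLI; a sketch assuming a PBS storage named 'pbs', a target storage 'local-zfs', and a made-up snapshot timestamp, so substitute a volume ID actually listed for your storage:

    # list the backups available on the PBS storage
    pvesm list pbs
    # restore the chosen backup as VM 100 onto local-zfs
    qmrestore pbs:backup/vm/100/2025-01-10T12:00:00Z 100 --storage local-zfs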
 
OpenZFS 2.3.0 was released today ... but as you know it's not available within PVE yet ... still, the new ZFS release could be tested in a VM, on a host, or manually ported into a test PVE ... :)
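If anyone wants to try that, building OpenZFS from source inside a throwaway VM goes roughly like this; a sketch of the upstream build steps on a Debian-based test VM, where the dependency list is abbreviated and may need adjusting:

    # build dependencies (subset; see the OpenZFS developer docs for the full list)
    apt install build-essential autoconf automake libtool gawk dkms \
        libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev \
        libattr1-dev libelf-dev python3 linux-headers-$(uname -r)
    # fetch the 2.3.0 tag, then build and install kernel modules and userland
    git clone -b zfs-2.3.0 https://github.com/openzfs/zfs.git
    cd zfs && sh autogen.sh && ./configure && make -j$(nproc)
    make install && depmod -a && modprobe zfs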