Hardware Raid or ZFS

Yes, please, still enjoy it. I once set up a couple of servers, each with 16x Intel P4600 1.6TB U.2 NVMe, and ran lots of ZFS config benchmarks (2 TB per test, about 2 min each, in parallel). The best result came from reformatting the NVMe namespaces from 512 B to 4 kB sectors and using ashift=13 (18 GB/s write, 19 GB/s read). I just share that; it's not a rule, only what I would do myself, as in sum the results represent multiple tens of TB written and over 100 TB read. This may differ for other NVMe models, but it's an evaluated measurement.
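As a hedged sketch of the namespace reformat and pool creation described above (the device names, pool name, and LBA format index are assumptions; check the `nvme id-ns` output for your own drives):

```shell
# List the LBA formats the namespace supports; the index of the
# 4 KiB entry varies from drive to drive
nvme id-ns /dev/nvme0n1 | grep "LBA Format"

# Reformat the namespace to the 4 KiB LBA format (DESTROYS all data);
# index 1 is assumed here to be the 4 KiB format on this drive
nvme format /dev/nvme0n1 --lbaf=1

# Create the pool with 8 KiB alignment (ashift=13), as benchmarked above
zpool create -o ashift=13 tank /dev/nvme0n1 /dev/nvme1n1
```

Note that `ashift` is fixed per vdev at creation time, so it is worth benchmarking before committing a pool to production.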
If restoration speed is crucial, avoid hardware RAID5; use RAIDZ or RAIDZ2 with ZFS instead for better overall performance.
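A minimal sketch of the suggested alternative (pool name and disk paths are made up; use stable `/dev/disk/by-id/` paths in practice):

```shell
# RAIDZ2 survives two disk failures per vdev, comparable to RAID6,
# but resilver only has to copy allocated data, not whole disks
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
```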
Rebuilding a 16 TB HDD in hardware RAID6 on a production (!!) fileserver for data and apps takes about 25.5 h. This doesn't change with the number of disks in the raidset, nor with whether the filesystem is empty or full; high load can add about 1 h.
On a production fileserver with 24x 16 TB HDDs in 4 raidz2 vdevs of 6 disks each (349 TB pool), resilvering a disk at 100 TB / 28% used took 15 h; a further disk at 114 TB / 32% used took 32 h. You can compute for yourself what the resilver time would be if the pool were two or three times as full.
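Assuming resilver time scales at least linearly with used data (the jump from 15 h at 100 TB to 32 h at 114 TB above suggests it can grow faster than that), a rough lower-bound extrapolation from the measured reference point could be sketched as:

```shell
# Measured reference point from the post above: ~15 h at 100 TB used.
# Linear scaling is an optimistic assumption; treat the result as a floor.
ref_used_tb=100
ref_hours=15
target_used_tb=200   # hypothetical: pool twice as full

echo "estimated resilver (lower bound): $(( target_used_tb * ref_hours / ref_used_tb )) h"
# → estimated resilver (lower bound): 30 h
```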
 
Next week ZFS 2.3 will be released, hmm, so maybe it will be available in PVE as of 8.4 next ... ??
 
