Best RAID for ZFS in Small Cluster?

alexinux
Member · Aug 21, 2023
Hey all,

So I have a 3 node cluster in my environment:

PVE1 has 2 file servers and 2 containers, mostly read operations.
PVE2 has 2 file servers, a WSUS server, a data backup server, and a license manager, mostly write operations.
PVE3 has the domain controller, a PXE server, and a vulnerability scanner.

The first 2 PVEs currently have (8) 2TB drives each in RAID5/Z1 (ZFS) and the last PVE has (4) 2TB drives in RAID5/Z1 (ZFS). I want to upgrade the 2TB drives in PVE1 and PVE2 to 4TB drives because I'm nearing 80% storage use, but I don't know whether I should go for another RAID configuration, or even move some VMs/LXCs around, to get better performance as well as better redundancy. Each PVE has an external USB drive to back up the VMs, and every other month these external drives appear unmounted; they won't remount with any command, so I end up restarting the PVEs. Would it be better to connect these backup drives via SATA on each PVE, even if that leaves me 1 less drive for the RAID setup?

I compared RAID 5, RAID 6, and RAID 10 and it's close between 6 and 10, but I wanted to hear other opinions and see what works or has worked out there.
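For a rough comparison, here is the back-of-the-envelope arithmetic for eight 4TB drives under each layout. This is only nominal capacity; real ZFS usable space will be lower because of padding, metadata, and slop space:

```python
# Nominal usable capacity and guaranteed fault tolerance for 8 x 4TB drives.
# Ignores ZFS padding, metadata and slop space, so real numbers are lower.
DRIVES, SIZE_TB = 8, 4

layouts = {
    # name: (data_drives, guaranteed_failures_survived)
    "raidz1 (RAID5-like)":      (DRIVES - 1, 1),
    "raidz2 (RAID6-like)":      (DRIVES - 2, 2),
    "striped mirrors (RAID10)": (DRIVES // 2, 1),  # 1 guaranteed; up to 4 if spread out
}

for name, (data, faults) in layouts.items():
    print(f"{name}: {data * SIZE_TB} TB usable, survives at least {faults} failure(s)")
```

Note that striped mirrors only *guarantee* surviving one failure: a second failure in the same mirror loses the pool, while a second failure in a different mirror is fine.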
 
RAID5 (with BBU) is almost, but not quite, entirely unlike RAIDz1 (with drives without PLP). RAIDz1/2/3 is not good for VMs but can give you more usable storage (but padding can be terrible for space): https://forum.proxmox.com/threads/fabu-can-i-use-zfs-raidz-for-my-vms.159923/#post-734639
I get that it's not great for VMs, but the IT staff here used to run the servers on bare metal with no redundancy for the data. I went with Proxmox and RAIDZ1 because of the redundancy. I could do hardware RAID, as all the hosts support it, but went with software (ZFS) RAID for the convenience of doing everything through the PVE. What I want to know is whether there is a better setup than Z1 for better redundancy and performance. We already have the 4TB drives, but before committing to a particular path I wanted to see what would be best.
 
What I want to know is if there is a better setup than Z1 for better redundancy and performance.
I thought the other thread explained that: (stripes of) mirrors have better read performance and never worse redundancy. If you want more redundancy and more (read) performance, then a 4-way mirror has quadruple the read performance and triple the redundancy.
RAIDz1 with four drives has a lot of padding overhead and will probably only give you two (instead of three) drives of usable space.
A stripe of two mirrors (also four drives) gives twice the read/write IOPS (with also two drives' worth of space) and one drive of redundancy, or even survives two failed drives if they are not part of the same mirror.
None of this is Proxmox specific and other ZFS guides will also apply. There are also lots of other threads on this about RAIDz1, stripes of mirrors and IOPS.
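To make the padding point concrete, here is a small sketch of the raidz allocation arithmetic (based on the documented raidz on-disk layout: data plus per-row parity, rounded up to a multiple of parity + 1; assumes ashift=12, i.e. 4K sectors):

```python
import math

def raidz_sectors(data_sectors: int, width: int, parity: int) -> int:
    """Sectors a raidz vdev allocates for one block: data sectors plus one
    parity sector per row, rounded up to a multiple of (parity + 1)."""
    rows = math.ceil(data_sectors / (width - parity))
    total = data_sectors + rows * parity
    return math.ceil(total / (parity + 1)) * (parity + 1)

# 4-wide raidz1 with 4K sectors (ashift=12):
for volblocksize in (8, 16, 32):             # KiB
    data = volblocksize // 4                 # 4K sectors per block
    alloc = raidz_sectors(data, width=4, parity=1)
    print(f"{volblocksize}K block -> {alloc * 4}K on disk "
          f"({100 * data / alloc:.0f}% efficient)")
```

At an 8K volblocksize this works out to 50% space efficiency, i.e. the same as mirrors and well below the nominal 3/4 of a 4-wide raidz1, which is the "only two (instead of three) drives of usable space" point above.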
 
What I want to know is if there is a better setup than Z1 for better redundancy and performance.
If by redundancy you mean disk fault tolerance: the higher the number after "raidz", the higher the fault tolerance. In practice, use raidz2 or better (never use single-parity raidz unless you are prepared to lose the pool at any time).

Performance = striped mirrors, full stop. If you wish to sacrifice some performance in the name of capacity efficiency and/or fault tolerance, you can move to raidz2 with the maximum number of vdevs available to you. As an example: if you have 8 drives, you can set up a 4-vdev mirror stripe OR a 2-vdev raidz2 stripe. Both ways yield the SAME usable storage capacity, but the striped mirror will be faster while the raidz2 pool has more disk fault tolerance. The more drives you have available, the more flexibility you have. For example, 18 drives can be set up as 3x 6-drive raidz2, which has reasonably low padding (16k stripe size) and a lot more performance than a single vdev (nominally 3x, but likely more because of the enormous stripe size on a single-vdev 18-wide pool).
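The 8-drive example can be sanity-checked with quick arithmetic (nominal capacities, ignoring padding and metadata):

```python
drives, size_tb = 8, 4

# Option A: striped mirrors, 4 mirror vdevs of 2 drives each
mirror_vdevs = drives // 2
mirror_usable = mirror_vdevs * size_tb                 # one data drive per mirror

# Option B: 2 x 4-wide raidz2 vdevs
raidz2_vdevs, width, parity = 2, 4, 2
raidz2_usable = raidz2_vdevs * (width - parity) * size_tb

print(mirror_usable, raidz2_usable)  # prints "16 16": same usable capacity
# Fault tolerance differs: each raidz2 vdev survives any 2 failed drives,
# while a mirror vdev is lost if both of its 2 drives fail.
```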
 