Use Cases for 3-Way ZFS Mirror for VM/LXC Storage vs. 2-Way Mirror?

Sep 1, 2022
Hello,

I'm in the process of building up a new Proxmox server in my small home office, and this motherboard has 3 NVME slots. My boot disks are an enterprise SATA SSD pool, so I'm free to use the NVME however I want for data storage.

Previously, I've had good success storing VMs and LXC containers on a two-NVME mirror. Would it be worth building a 3-way NVME mirror for storage, since I've got the extra NVME slot? I don't think the enhanced read performance would really help me; 2x NVME read speed is already fast enough.

I thought about putting all three into a RAIDZ1 for VM storage, but even at PCIe 4.0 x4 NVME speeds, I'm wary of using a Z1 for VM/CT storage.
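
For reference, a minimal sketch of the layouts I'm weighing (the pool and device names below are placeholders, not my actual hardware):

  # 3-way mirror: usable space of one drive; any two drives can fail
  zpool create nvmepool mirror /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1

  # RAIDZ1: usable space of two drives, but only one drive can fail,
  # and small-block VM writes pay parity/padding overhead
  zpool create nvmepool raidz1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1

A 2-way mirror can also be promoted to 3-way later by attaching a third device:

  # attach a third device to the mirror vdev that contains /dev/nvme0n1
  zpool attach nvmepool /dev/nvme0n1 /dev/nvme2n1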

I'm also planning to virtualize an instance of PBS. If I go with a 2 NVME mirror, I think I'll use the third slot to give PBS a real NVME drive to boot from.
 
I use 3-way mirrors for my backups (PBS). With a 2-way mirror, my data would be at risk of total loss as soon as one side of the mirror failed.
I use a 2-way mirror for everything else, to prevent interruption of service when one side fails (and I do regular backups to prevent data loss).
In my opinion, that is the difference between the two configurations: two devices are for continuous operation; more than two devices/copies are for data safety.
 
IIRC a three-way mirror also has more IOPS than a two-way
Indeed! For PBS I combined 2 HDDs with 1 SSD in a 3-way mirror (even though people recommend against mixing such different drives), and there was a huge improvement in read IOPS (especially when browsing backup contents and during garbage collection). In general, a 3-way mirror has 50% more read IOPS than a 2-way mirror, since reads are spread across three devices instead of two.
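
If you want to verify that scaling yourself, a quick fio run is enough (a sketch only; the path, size, and queue depth are placeholders to tune for your hardware):

  # 4k random reads; compare the numbers on a 2-way vs. a 3-way mirror.
  # Use a test size well beyond your ARC, or you'll mostly measure cache.
  fio --name=randread --filename=/nvmepool/fio.test --size=32G \
      --rw=randread --bs=4k --iodepth=32 --ioengine=libaio \
      --runtime=60 --time_based --group_reporting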
 
IIRC a three-way mirror also has more IOPS than a two-way
Thanks for mentioning this! I was wondering about it.

That's just for reads, though, right? I can't imagine how adding another disk would result in faster writes, since every write has to land on all members of the mirror anyway.
EDIT: My eyes glitched and I missed the last part of @leesteken's message. 50 percent better read IOPS is a huge deal.
That does sound like a great setup for PBS. :)

I still need to decide if I'm going all in on shared, networked storage for VM and LXC on this PVE node. If I am, then I can use the three NVME slots in the box for a three-way PBS pool. :)
 
Indeed! For PBS I combined 2 HDD with 1 SSD in a 3-way mirror (even though people recommend against mixing such different drives) and there was a huge improvement in the read IOPS (especially when looking at the contents and garbage collection). But in general a 3-way mirror has 50% more read IOPS than a 2-way mirror.

The wild thing here is that someone recommended against the hybrid approach. If you have 3 drives, you have a quorum that lets you determine which copy is good when corruption appears, and you still have 2 copies if 1 drive fails. The fact that the drives have different performance doesn't cause the mirror to fail or otherwise not function. That's what RAID is for. It's also why hybrid HDD/SSD drives exist as a product: you gain the benefits of both -- the lower cost of HDD spindles for bulk data, and the higher performance of flash for the active set of reads. Your 3-way mirror with 2 HDD spindles and 1 SSD is the same idea spread across drives instead of packed into one -- except better, because with three copies of the same 1 TB, the entire 1 TB can be read at SSD speed, and you have 3x redundancy (one or two drives can fail and your data is still safe).
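
If anyone wants to watch this in action, the per-device counters show reads being spread across all mirror members while every write lands on all of them (the pool name is a placeholder):

  # per-vdev I/O statistics, refreshed every 5 seconds
  zpool iostat -v nvmepool 5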

Using multiple disks is a logical solution that enables better performance. The drives store data, and the mechanism doesn't matter *because that is the point of such an abstraction*: there is a standard interface that hides the details. For example, I can use a file path -- it doesn't matter whether the path refers to a file on an SSD, an HDD, or even a tape drive. The OS works at the level of a filesystem and file path (its VFS layer), and the drives sit below that; I just use the file path and never have to think about cylinders/heads/sectors, etc. -- that's what the VFS layer of the kernel is for.
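
To make that concrete: the exact same command works no matter what medium sits underneath the path (the paths here are hypothetical):

  # identical invocation whether /tank is SSDs, HDDs, or a mixed mirror;
  # the kernel's VFS and the filesystem hide the device details
  cp /tank/vm-disks/image.raw /backup/image.raw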

Those people didn't understand the technology they were working with. Bad recommendation :(

(https://en.wikipedia.org/wiki/RAID -- "Each scheme, or RAID level, provides a different balance among the key goals: reliability, availability, performance, and capacity.")
 