Minimal HA cluster with local storage only?

A98bt

New Member
Feb 24, 2025
Dear forum,

I'm a bit lost about what might be the best way to set up a very small HA cluster:

hardware:
2x identical nodes with i7, 16 GB RAM, 2 TB SSD, additional NIC with 2x 10 GbE
1x Raspberry Pi as Qdevice
UPS

Is there a way to set up the cluster so that the two worker nodes are mirrored and one can take over in case the other fails?

The PCs only have one M.2 SSD slot each, so is there a way to make an HA cluster work with only one storage device / ZFS volume in each node?

Thanks in advance!


EDIT:

Found it - I think. Storage replication is what I'm looking for, right?
 
Hello!

Here to confirm that ZFS replication would definitely be a good bet for your use case. Ideally we would recommend dedicated NICs for the replication traffic, especially with NVMe drives. That also lets you schedule more frequent replication jobs, so you can limit data loss if a single worker node fails.
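
For reference, replication jobs can be created in the GUI under Datacenter -> Replication, or on the CLI with pvesr. A minimal sketch, assuming the guest has VMID 100 and the target node is called pve2 (both placeholders):

Code:
# replicate guest 100 to node pve2 every 5 minutes (job ID format is <vmid>-<jobnum>)
pvesr create-local-job 100-0 pve2 --schedule "*/5"

# list all replication jobs and their last sync / status
pvesr status

The schedule uses the same calendar-event syntax as the GUI, so "*/15" would mean every 15 minutes, for example.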

Hope that helps!
 
It does, thank you!

I've equipped both hosts with dual 10 Gbit NICs for replication. I'm planning to use a short crossover cable for a direct connection between the two worker nodes, dedicated exclusively to the cluster. I hope that doesn't cause any issues and that I can get away with the QDevice connected via another subnet.
 
From what I have gathered, the two worker nodes each have at least one integrated 1 GbE NIC that could be connected to the same switch as your Raspberry Pi, please correct me if I am wrong. If that is the case, you can run all of your Corosync traffic (which is what the QDevice participates in) over a Gigabit switch without issues. Directly connecting your dual 10G NICs to each other in a bond would be ideal. Any further NICs, integrated or discrete, could then be used for guest traffic; worst case, guest traffic could run over the host network along with the Corosync traffic if each i7 box is limited to a single 1 GbE NIC besides the dual 10G card.
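
A direct-connected bond on PVE is just a standard Linux bond in /etc/network/interfaces. A rough sketch of one node's side, assuming the 10G ports show up as enp1s0f0/enp1s0f1 and you use 10.10.10.0/24 for the direct link (all names and addresses are placeholders):

Code:
# dedicated point-to-point link for replication/migration traffic
auto bond0
iface bond0 inet static
        address 10.10.10.1/24
        bond-slaves enp1s0f0 enp1s0f1
        bond-miimon 100
        bond-mode balance-rr

The second node would get 10.10.10.2/24 on its end. balance-rr works back-to-back if you want aggregated bandwidth; active-backup is the simpler choice if you only care about failover.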

In terms of subnetting, you can use VLANs or separate physical switches. Best practice would be to separate the traffic roughly like this:

Corosync traffic on its own subnet and a separate physical interface
ZFS replication and migration traffic on another subnet, piggybacking off the dual 10G bond
guest traffic on a separate subnet without access to the host, ideally on its own interface
the host itself on a separate subnet and physical interface

This will all come down to your NIC count and how complex you want your VLANs to get.
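
To actually pin migration traffic to the 10G subnet, you can set the migration network in /etc/pve/datacenter.cfg; as far as I know the built-in replication follows the same setting. A minimal sketch, assuming the 10.10.10.0/24 subnet from the bond above:

Code:
# /etc/pve/datacenter.cfg
# send (secure) migration traffic over the direct 10G link instead of the management network
migration: secure,network=10.10.10.0/24

Corosync itself is configured separately via its link addresses in /etc/pve/corosync.conf, so it stays on the Gigabit subnet together with the Pi.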

Let me know.