ZFS mirror across hosts?

barchetta — New Member — Joined Apr 3, 2022
I'm new to ZFS and about to set up a home lab. I have two Proxmox hosts now. I wonder if the following would work:

On host01 I have a 4 TB spinning drive.
On host02 I have a 4 TB USB 3.2 SSD.
(Plus other drives, including one for the Proxmox install on each host.)
I know you are supposed to have 3 hosts for a cluster, but could I still mirror the two across 2.5 GbE Ethernet? Would I be able to mirror, or do I have to replicate VMs and containers?

I want to be able to lose a host and/or a drive and be OK. I'd prefer to just mirror the entire drive, but reading through the docs that doesn't seem doable without a complex setup, and I want to keep it simple.
 
ZFS is not a cluster filesystem and is limited to a single host. What you can do is use replication to keep two ZFS pools in sync, so they are kind of mirrored. But the smallest replication interval is 1 minute (so up to one minute of data loss if a node fails), and IO won't be done instantly in parallel like with a real mirror. To mirror storage across the network in real time you would need something like Ceph, and that only starts being useful at 10+ Gbit, multiple (3+) SSDs per node, and a minimum of 3 nodes (better many more).
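The replication described above is managed in Proxmox with the `pvesr` tool (or via the GUI). A minimal sketch, assuming a guest with VMID 100 and a target node named host02 (both placeholders), and using the 1-minute minimum interval mentioned here:

```shell
# Sketch (untested, needs a Proxmox cluster): create a local replication
# job for guest 100 to node host02. "100-0" is the job ID (VMID-jobnumber);
# "*/1" schedules it every minute, the smallest supported interval.
pvesr create-local-job 100-0 host02 --schedule "*/1"

# List replication jobs and their last/next run status on this node
pvesr status
```

Note these commands only make sense on a Proxmox VE node that is part of a cluster; they are shown as an illustration of the workflow, not as a copy-paste recipe.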
 
Last edited:
ZFS is not a cluster filesystem and is limited to a single host. What you can do is use replication to keep two ZFS pools in sync, so they are kind of mirrored. But the smallest replication interval is 1 minute (so up to one minute of data loss if a node fails), and IO won't be done instantly in parallel like with a real mirror. To mirror storage across the network you would need something like Ceph (and that only starts being useful at 10 Gbit and up, with 3 nodes and 3+ SSDs per node).
Thank you, this spells it out. So drive size doesn't enter into this at all (the two drives don't have to be the same); they would only need to be the same size, and share the same pool name, when mirroring with ZFS on a single host. I think I have that right.
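For contrast, a classic single-host ZFS mirror is the case where two same-sized devices do matter. A minimal sketch, with pool name and device paths as placeholders:

```shell
# Sketch (untested, needs ZFS installed): create a two-way mirror named
# "tank" from two devices of roughly equal size; the pool's usable size
# is limited by the smaller device.
zpool create tank mirror /dev/sdb /dev/sdc

# Verify both sides of the mirror show ONLINE
zpool status tank
```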
 
There was a technology that did exactly what you asked for: DRBD (instead of ZFS), yet AFAIK it is not supported anymore.
 
There was a technology that did exactly what you asked for: DRBD (instead of ZFS), yet AFAIK it is not supported anymore.
Thanks. I think this will work for my purpose; I mostly have a few VMs/CTs that I'd like HA for. And if I don't use a passthrough drive for my NAS and just use a VM disk, I can replicate it as well.

I don't even need up-to-the-second state; an hour will do fine. In the case of my pfSense firewall, it could be last week for all I care.

That being said, I'm learning a lot about ZFS. Lots of nuances, like being able to store more than just VMs by using the CLI and creating sub-pools (forget what they are called now). I'm trying to future-proof myself as best I can; last go-around I really sliced up a 4 TB spinner and made a real mess. I want one big pot, which ZFS seems to cover.

I love how ZFS makes a slow device seemingly fast by using RAM, granted you need to load up on RAM to really take advantage of this. But in the end you save money on the cost of the device and move what you saved over to RAM. In a home lab situation, you can take what you have lying around and boost its performance with ZFS/RAM. This is my take on it anyway.
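The "sub-pools" mentioned above are called datasets in ZFS, and the RAM caching is the ARC (adaptive replacement cache). A minimal sketch, with the pool and dataset names as placeholders:

```shell
# Sketch (untested, needs ZFS installed): create a dataset inside an
# existing pool "tank" for general file storage alongside VM disks,
# and enable cheap compression on it.
zfs create tank/media
zfs set compression=lz4 tank/media

# On Linux, show the current ARC size in bytes (how much RAM ZFS is
# using as a read cache)
awk '$1 == "size" {print $3}' /proc/spl/kstat/zfs/arcstats
```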
 
Last edited:
