ZFS High-Availability NAS

janos

Member
Aug 24, 2017
150
13
18
Hungary
Hi,

I was browsing the web and found this interesting project: https://github.com/ewwhite/zfs-ha/wiki

Basically, this guy built a redundant ZFS-based NFS setup (he uses VMware and likes NFS better than iSCSI) using two head-node servers and one (or more) JBOD storage boxes attached via SAS.

Is anybody using this kind of solution, with either NFS or iSCSI? (Maybe it is even compatible with the Proxmox ZFS-over-iSCSI storage solution?)
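For reference, the Proxmox ZFS-over-iSCSI plugin is configured in /etc/pve/storage.cfg; a hypothetical entry (the storage ID, portal address, target IQN, pool name, and provider are placeholders to adjust for your setup) might look like:

```
zfs: ha-zfs
        portal 192.168.1.10
        target iqn.2017-08.example.org:tank
        pool tank
        iscsiprovider LIO
        content images
        sparse 1
```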
 

alexskysilk

Active Member
Oct 16, 2015
576
61
28
Chatsworth, CA
www.skysilk.com
In the days before Ceph this may have been worth doing, but it is basically an obsolete solution now. Since the project was made explicitly to serve as a VMware backing store, it may still make sense for that use case (VMware has no native Ceph support).
 

janos

Member
Aug 24, 2017
150
13
18
Hungary
Hi,

I don't fully agree with that. For the money these parts cost (two head nodes, one JBOD, HBAs, and SAS cables), a Ceph cluster built for the same price would be useless, or at least much less powerful.

For a small cluster where you run both Proxmox and VMware (e.g. two Proxmox nodes and one ESXi or Windows host), this would be a good and cost-efficient solution.
 

guletz

Active Member
Apr 19, 2017
942
124
43
Brasov, Romania
... but you can do a real no-single-point-of-failure setup using 2 servers, with or without PMX:
- install an iSCSI server on top of ZFS on both
- create at least one LUN on each
- on the client, use these 2 LUNs and make a mirror (md)
- you can do the same with ATA over Ethernet

NFS and iSCSI each have their ups and downs. NFS is better because of caching, so CTs can get better performance. iSCSI is better for VMs and for data safety (e.g. data/header checks).

But as @alexskysilk said, Ceph could be better if you have the money ;)
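The mirrored-LUN recipe above could be sketched on a Linux client roughly like this (portal addresses and device names are hypothetical; the actual device paths depend on your environment):

```shell
# Log in to one iSCSI LUN on each of the two ZFS servers
# (192.168.1.11 and 192.168.1.12 are placeholder portals):
iscsiadm -m discovery -t sendtargets -p 192.168.1.11
iscsiadm -m discovery -t sendtargets -p 192.168.1.12
iscsiadm -m node -p 192.168.1.11 --login
iscsiadm -m node -p 192.168.1.12 --login

# Assuming the LUNs show up as /dev/sdb and /dev/sdc,
# mirror them with md so the loss of either server is tolerated:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
```

Note the mirror is assembled on the client, which is exactly why (as pointed out below) it cannot be shared read-write between multiple servers.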
 

janos

Member
Aug 24, 2017
150
13
18
Hungary
... but you can do a real no-single-point-of-failure setup using 2 servers, with or without PMX:
- install an iSCSI server on top of ZFS on both
- create at least one LUN on each
- on the client, use these 2 LUNs and make a mirror (md)
- you can do the same with ATA over Ethernet

NFS and iSCSI each have their ups and downs. NFS is better because of caching, so CTs can get better performance. iSCSI is better for VMs and for data safety (e.g. data/header checks).

But as @alexskysilk said, Ceph could be better if you have the money ;)
What you wrote only works if you want to use it from a single node. You cannot share this kind of software RAID between two or more servers.
 

alexskysilk

Active Member
Oct 16, 2015
576
61
28
Chatsworth, CA
www.skysilk.com
Why? Can you explain please?
If your storage fails, your whole cluster fails. The storage has no fault tolerance at the storage-shelf level. To overcome that limit you'd need redundant controllers / clustered storage, and we come back full circle to Ceph.

For a small cluster where you run both Proxmox and VMware (e.g. two Proxmox nodes and one ESXi or Windows host), this would be a good and cost-efficient solution.
Let's examine this from two different perspectives.

To build a solution using two nodes and shared storage, you need two servers, a storage shelf, plus disks. Unless you're banking on the storage shelf being free, the cost is roughly comparable with three compute nodes holding the same-ish number of disks between them. If cost is your primary driver, you'd be best served by two servers with their own storage, using ZFS replication (send/receive) to keep them in a (mostly) failover-ready state.
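The two-server failover approach described above is typically built on ZFS snapshots with incremental send/receive; a minimal sketch, assuming hypothetical pool/dataset and host names:

```shell
# Run from the primary node. "tank/vmdata" and "standby" are placeholders.
# Initial full replication to the standby node:
zfs snapshot tank/vmdata@sync1
zfs send tank/vmdata@sync1 | ssh standby zfs recv tank/vmdata

# Later, send only the changes since the previous snapshot:
zfs snapshot tank/vmdata@sync2
zfs send -i @sync1 tank/vmdata@sync2 | ssh standby zfs recv tank/vmdata
```

In practice this would be scheduled (e.g. via cron or a tool like pve-zsync), and the standby is only "mostly" current: anything written after the last snapshot is lost on failover.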

If uptime and resiliency are important enough to merit additional spend, Ceph is a better way than SAN + compute nodes because it tolerates more failure domains, is more flexible, and is FAR easier to scale.
 

janos

Member
Aug 24, 2017
150
13
18
Hungary
You can add a second JBOD box. Besides, everything in the box is redundant, starting from the HDDs (only if you use dual-port SAS HDDs, of course), including the controllers and power supplies.
 

alexskysilk

Active Member
Oct 16, 2015
576
61
28
Chatsworth, CA
www.skysilk.com
You could. And you've just doubled your cost and complexity, while still remaining limited in the number of client nodes and in scalability.

No one is trying to talk you out of your solution, but you should be aware of its limited scope and utility.
 
