ZFS High-Availability NAS

Discussion in 'Proxmox VE: Installation and configuration' started by janos, Apr 30, 2019.

  1. janos

    janos Member

    Joined:
    Aug 24, 2017
    Messages:
    147
    Likes Received:
    13
    Hi,

    I was browsing the web and found this interesting project: https://github.com/ewwhite/zfs-ha/wiki

    Basically, the author builds a redundant ZFS-based NFS server (he uses VMware and prefers NFS over iSCSI) using two head-node servers attached to one (or more) JBOD storage boxes via SAS.

    Is anybody using this kind of solution, with either NFS or iSCSI? (Maybe it is compatible with the Proxmox ZFS-over-iSCSI storage type?)
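    For reference, this is roughly what a ZFS-over-iSCSI storage definition looks like in /etc/pve/storage.cfg on the Proxmox side; a minimal sketch, assuming an LIO target (the pool, portal and target names are placeholders):

        zfs: ha-zfs
            pool tank
            portal 192.168.0.10
            target iqn.2003-01.org.linux-iscsi.storage.x8664:sn.example
            iscsiprovider LIO
            lio_tpg tpg1
            content images
            sparse 1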
     
  2. alexskysilk

    alexskysilk Active Member

    Joined:
    Oct 16, 2015
    Messages:
    555
    Likes Received:
    58
    In the days before Ceph this may have been worth doing, but it is basically an obsolete solution now. Since the project is explicitly meant to serve as a VMware backing store, it may still make sense for that use case (VMware has no native Ceph support).
     
  3. janos

    janos Member

    Joined:
    Aug 24, 2017
    Messages:
    147
    Likes Received:
    13
    Hi,

    I don't fully agree with that. Considering what these parts cost (two head nodes, one JBOD, HBAs and SAS cables), a Ceph cluster built for the same money will be useless, or at least much less powerful.

    For a small cluster where you run Proxmox and VMware side by side (e.g. two Proxmox nodes and one ESXi or Windows host), this would be a good and cost-efficient solution.
     
  4. guletz

    guletz Active Member

    Joined:
    Apr 19, 2017
    Messages:
    862
    Likes Received:
    115
    Hi,

    But this dual-head storage is still a single point of failure.
     
  5. guletz

    guletz Active Member

    Joined:
    Apr 19, 2017
    Messages:
    862
    Likes Received:
    115
    ... but you can build a setup with no single point of failure using 2 servers, with or without PMX:
    - install an iSCSI server on top of ZFS on both
    - create at least one LUN on each
    - on the client, use these 2 LUNs and make a mirror (md), as sketched below
    - you can do the same with ATA over Ethernet

    NFS and iSCSI each have their ups and downs. NFS is better because of caching, so CTs can get better performance. iSCSI is better for VMs and for data safety (e.g. data/header checksums).

    But as @alexskysilk said, Ceph could be better if you have the money ;)
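    A minimal sketch of the client side, assuming open-iscsi and mdadm are installed (the portal addresses and device names are only examples):

        # discover and log in to both iSCSI targets
        iscsiadm -m discovery -t sendtargets -p 192.168.10.11
        iscsiadm -m discovery -t sendtargets -p 192.168.10.12
        iscsiadm -m node --login

        # mirror the two remote LUNs with md; if one storage server
        # dies, the array just degrades and the client keeps running
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc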
     
  6. janos

    janos Member

    Joined:
    Aug 24, 2017
    Messages:
    147
    Likes Received:
    13
    What you wrote is only good if you want to use it from a single node. You cannot share this kind of soft-RAID between two or more servers.
     
  7. janos

    janos Member

    Joined:
    Aug 24, 2017
    Messages:
    147
    Likes Received:
    13
    Why? Can you explain please?
     
  8. alexskysilk

    alexskysilk Active Member

    Joined:
    Oct 16, 2015
    Messages:
    555
    Likes Received:
    58
    If your storage fails, your whole cluster fails. The storage has no fault tolerance at the storage-shelf level. To overcome that limit you'd need redundant controllers / clustered storage, and we come back full circle to Ceph.

    Let's examine this from two different perspectives.

    To build a solution using two nodes plus shared storage, you need two servers, a storage shelf, plus disks. Unless you're banking on the storage shelf being free, the cost is roughly comparable to 3 compute nodes with the same-ish number of disks spread between them. If cost is your primary driver, you'd be best served by two servers with their own storage, using ZFS replication to keep them in a (mostly) failover-ready state, as sketched below.

    If uptime and resiliency are important enough to merit additional spend, Ceph is a better way than SAN + compute nodes because it tolerates faults at more levels of the failure domain, is more flexible, and is FAR easier to scale.
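    That replication boils down to ZFS send/receive; a minimal sketch (pool, dataset and host names are placeholders, and tools like pve-zsync automate the same idea on a schedule):

        # initial full copy to the standby node
        zfs snapshot tank/vmdata@sync1
        zfs send tank/vmdata@sync1 | ssh standby zfs receive tank/vmdata

        # afterwards, ship only the delta since the last common snapshot
        zfs snapshot tank/vmdata@sync2
        zfs send -i tank/vmdata@sync1 tank/vmdata@sync2 | ssh standby zfs receive tank/vmdata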
     
  9. janos

    janos Member

    Joined:
    Aug 24, 2017
    Messages:
    147
    Likes Received:
    13
    You can add two JBOD boxes. Besides, everything inside the box is already redundant, starting from the HDDs (only if you use dual-port SAS HDDs, of course), including the controllers and the power supplies.
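    With dual-port SAS drives, each disk shows up once per path on Linux, so the paths are typically merged with dm-multipath before ZFS sees them; a minimal sketch, assuming the multipath-tools package is installed:

        # /etc/multipath.conf
        defaults {
            find_multipaths yes
            user_friendly_names yes
        }

        # enable the daemon and check the merged paths
        systemctl enable --now multipathd
        multipath -ll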
     
  10. alexskysilk

    alexskysilk Active Member

    Joined:
    Oct 16, 2015
    Messages:
    555
    Likes Received:
    58
    You could. And you've just doubled your cost and complexity, while still remaining limited in the number of client nodes and in scalability.

    No one is trying to talk you out of your solution, but you should be aware of its limited scope and utility.
     