glusterfs howto

Discussion in 'Proxmox VE: Installation and configuration' started by mir, Oct 1, 2013.

  1. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,481
    Likes Received:
    96
  2. symmcom

    symmcom Active Member

    Joined:
    Oct 28, 2012
    Messages:
    1,069
    Likes Received:
    24
    I tried GlusterFS before I settled on CEPH. It worked OK. Almost every article I saw about GlusterFS only had a 2-node mirror setup. CEPH has the option to add nodes to the cluster, which increases overall storage space due to its distributed design. Is GlusterFS only for mirroring?
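    For reference, GlusterFS is not limited to mirroring: bricks can be combined into distributed, replicated, or distributed-replicated volumes, and capacity grows as bricks are added. A minimal sketch of a 4-node distributed-replicated volume (hostnames and brick paths are placeholders):

        # probe the other peers from node1, then build a replica-2
        # volume whose data is distributed across two replica pairs
        gluster peer probe node2
        gluster peer probe node3
        gluster peer probe node4
        gluster volume create gv0 replica 2 \
            node1:/export/brick1 node2:/export/brick1 \
            node3:/export/brick1 node4:/export/brick1
        gluster volume start gv0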
     
  3. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,481
    Likes Received:
    96
  4. jinjer

    jinjer Member

    Joined:
    Oct 4, 2010
    Messages:
    194
    Likes Received:
    5
    What was the reason for you settling on ceph? Was it a performance/feature reason, or only the misunderstanding that gluster can run on two nodes only?
     
  5. symmcom

    symmcom Active Member

    Joined:
    Oct 28, 2012
    Messages:
    1,069
    Likes Received:
    24
    Performance definitely was not the issue, because I did not get far enough to do much performance benchmarking. Misunderstanding the gluster 2-node issue probably was the big reason, but I am glad right now that it happened. Unless I am still misunderstanding :)

    I find CEPH much more flexible in a cluster environment. #1, I can run the health monitor at any given time and check how the entire storage cluster is doing (see the commands below). #2, I do not have to worry about losing a few HDDs or even an entire node and taking the cluster down with it. #3, I like how CEPH uses all HDDs to utilize the overall storage space. #4, it is easy to expand over multiple nodes.
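    The health check in #1 maps onto a few standard ceph commands; a quick sketch, assuming nothing beyond a working cluster:

        # cluster-wide status: health, monitor quorum, OSD counts, usage
        ceph -s
        # explanation of any current warnings or errors
        ceph health detail
        # which OSDs on which hosts are up/in or down/out
        ceph osd tree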

    Maybe gluster has the same things going for it, but I do not know. I am trying to give gluster a go to test, but I am having difficulties making it work with Proxmox. No doubt something is not working on my side.

    Sent from my ASUS Transformer Pad TF700T using Tapatalk
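    For anyone stuck at the same point: Proxmox VE 3.1 ships a native GlusterFS storage plugin, so a volume can be attached via /etc/pve/storage.cfg. A minimal sketch (the storage ID, server address and volume name are placeholders):

        glusterfs: gluster-store
                server 192.168.1.50
                volume gv0
                content images

    The same entry can also be created from the GUI under Datacenter -> Storage -> Add -> GlusterFS.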
     
  6. jinjer

    jinjer Member

    Joined:
    Oct 4, 2010
    Messages:
    194
    Likes Received:
    5
    symmcom, I see where you're coming from. Many things have changed in 3.4 too, and it put me off until I started using the "new commands way".

    I have no experience with ceph. I'm trying to understand how to proceed before I build a test cluster. Normally my "clusters" are made of 3-4 nodes at most. Since zfsonlinux was declared "stable" I delegate the volume and share management to zfs and build things on top of that (either shared storage using gluster, or plain local directories for everything else). This works nicely for off-site backups via zfs send/receive.
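    The off-site part is essentially incremental snapshot replication; a minimal sketch, assuming a dataset tank/vmdata and a remote host named backup with a matching pool:

        # take a new snapshot, then ship only the delta since the
        # previous snapshot to the remote pool over ssh
        zfs snapshot tank/vmdata@2013-10-14
        zfs send -i tank/vmdata@2013-10-07 tank/vmdata@2013-10-14 | \
            ssh backup zfs receive tank/vmdata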

    On one site I have KVM machines running on top of native zfs exported via nfs from a solaris host. As this is a performance bottleneck and also a single point of failure, I'm wondering if it would be better to go with gluster or ceph.
     
  7. deludi

    deludi New Member

    Joined:
    Oct 14, 2013
    Messages:
    26
    Likes Received:
    0
    Hi,

    I am also interested in either zfsonlinux + gluster or zfsonlinux + ceph.
    I have seen a screencast about both in which it was stated that gluster was further along with optimisations than ceph.

    Regards,

    Dirk Adamsky
     
  8. mir

    mir Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 14, 2012
    Messages:
    3,481
    Likes Received:
    96
    IMHO ZFS on top of ceph or gluster is a disaster waiting to happen.
     
  9. jinjer

    jinjer Member

    Joined:
    Oct 4, 2010
    Messages:
    194
    Likes Received:
    5
    Welcome aboard. The thing with ceph is that it seems the "promised land of the future". However, all the benchmarks I see do not take into account a real cluster with several nodes running some real load. All you get is a single OSD running against a single client with some concurrency.

    I have no idea how it will perform in the real world (and I think this is the main reason that symmcom is running an in-house beta and benchmark tests with real users).

    Also, zfsonlinux might not be the perfect match for ceph. All benchmarks concentrate on ext4 vs btrfs vs xfs right now. For rbd (which is the way proxmox is using it, as I understand) the consensus is that either btrfs or xfs will do, with xfs being preferred; see the sketch below.
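    A rough sketch of preparing an OSD data disk with xfs (the device name and mount point are assumptions; -i size=2048 and the mount options were the commonly cited recommendations for ceph OSDs at the time):

        # format the OSD data partition, then mount it with the
        # usual ceph-recommended options
        mkfs.xfs -f -i size=2048 /dev/sdb1
        mount -o noatime,inode64 /dev/sdb1 /var/lib/ceph/osd/ceph-0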
     
  10. jinjer

    jinjer Member

    Joined:
    Oct 4, 2010
    Messages:
    194
    Likes Received:
    5
    I think he means it the other way around: ceph on top of zfs.
     
  11. symmcom

    symmcom Active Member

    Joined:
    Oct 28, 2012
    Messages:
    1,069
    Likes Received:
    24
    Couldn't agree more. zfs on ceph/gluster, or ceph/gluster on zfs, adds way too many layers to be used in any real environment. They are all separate technologies with their own agendas and should be left alone that way. Mixing ceph, gluster and zfs brings no real added advantage.

    Yes, CEPH does not have the world's best I/O performance, but it does have the strength of redundancy and ease of setup/monitoring. I would not call it the Promised Land of the future, because all these technologies apply to specific situations; there is no one-solution-fits-all. In a real enterprise situation, I will take zero downtime over insane I/O performance any day. If any VM needs insane I/O performance, I can always put those VMs on SSDs in CEPH and get super fast I/O. So in that sense CEPH gives me the best of both worlds: redundancy and speed in the same cluster. CEPH pools allow me to create tier-based storage for multi-tier customers (see the sketch below). In less than 6 seconds I can see the performance and health of the entire CEPH storage cluster. Yes, the 6 seconds was timed using a stopwatch. :) As an admin this is a blessing of the promised land.

    Sent from my ASUS Transformer Pad TF700T using Tapatalk
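    The tiered-pool idea maps onto CRUSH rules; a hypothetical sketch (the pool names, PG counts and ruleset numbers are assumptions for illustration, and crush_ruleset is the option name from this era of ceph):

        # one pool on the default spinning-disk rule, one pool
        # steered to an SSD-only CRUSH rule
        ceph osd pool create sata-pool 128 128
        ceph osd pool create ssd-pool 128 128
        ceph osd pool set ssd-pool crush_ruleset 1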
     
  12. jinjer

    jinjer Member

    Joined:
    Oct 4, 2010
    Messages:
    194
    Likes Received:
    5
    symmcom,

    care to tell us what type of hardware you're running your cluster on?

    thank you.
     
  13. symmcom

    symmcom Active Member

    Joined:
    Oct 28, 2012
    Messages:
    1,069
    Likes Received:
    24
    jinjer, the hardware is the same as you already saw in the posting about the ceph large file. The gluster setup is on just one single node; since I just want to test proxmox performance with gluster, only one node is set up.

    Sent from my ASUS Transformer Pad TF700T using Tapatalk
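    A single-node test volume like that can be as simple as one brick; a sketch with a placeholder hostname and brick path (force is only needed if the brick lives on the root filesystem):

        gluster volume create testvol pve-test:/data/brick1 force
        gluster volume start testvol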
     