GlusterFS howto

I tried GlusterFS before I settled on CEPH. It worked OK. Almost every article I saw about GlusterFS only had a 2-node mirror setup. CEPH has the option to add nodes to the cluster, which increases the overall storage space thanks to its distributed file system. Is GlusterFS only for mirroring?
 
What was the reason for you settling on Ceph? Was it a performance/feature reason, or only the misunderstanding that Gluster can run on two nodes only?
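Just to clear that part up: Gluster is not limited to two-node mirrors. A plain distributed volume spreads files over as many bricks as you give it (capacity is the sum of the bricks), and "replica 2" over four or more bricks gives you distribution plus mirroring. Roughly like this (hostnames and brick paths are only placeholders):

    # from node1, after installing glusterfs-server on all three nodes
    gluster peer probe node2
    gluster peer probe node3

    # distributed volume: no mirroring, capacity is the sum of the three bricks
    gluster volume create distvol node1:/export/brick1 node2:/export/brick1 node3:/export/brick1
    gluster volume start distvol

    # distributed-replicated variant (four bricks, pairs mirror each other):
    # gluster volume create mirrvol replica 2 node1:/export/brick1 node2:/export/brick1 node3:/export/brick1 node4:/export/brick1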
 

Performance definitely was not the issue, because I did not get far enough to do much performance benchmarking. Misunderstanding the Gluster 2-node issue probably was the big reason, but I am glad right now that it happened. Unless I am still misunderstanding :)

I find CEPH much more flexible in a cluster environment:
1. I can run the health monitor at any given time and check how the entire storage cluster is doing (a few of the commands are below).
2. I do not have to worry about losing a few HDDs, or even an entire node, taking the cluster down with it.
3. I like how CEPH uses all HDDs to utilize the overall storage space.
4. It is easy to expand over multiple nodes.
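On point 1, checking on the cluster comes down to commands like these:

    ceph -s             # overall status: health, monitors, OSDs, placement groups, usage
    ceph health detail  # details behind any warning or error
    ceph osd tree       # which OSDs on which hosts are up/in or down/out
    ceph df             # how the combined raw capacity is being used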

Maybe Gluster has the same things going for it, but I do not know. I am trying to give Gluster a go to test, but I am having difficulties making it work with Proxmox. No doubt it is something not working on my side.
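For reference, as far as I can tell a GlusterFS storage definition in /etc/pve/storage.cfg is supposed to look roughly like this (server address and volume name are placeholders), and pvesm should be able to add the same thing from the CLI:

    glusterfs: glusterstore
            server 192.168.1.50
            volume distvol
            content images,iso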

Sent from my ASUS Transformer Pad TF700T using Tapatalk
 
symmcom, I see where you're coming from. Many things have changed in 3.4 too and put me off until I started using the "new commands way".

I have no experience with Ceph. I'm trying to understand how to proceed before I build a test cluster. Normally my "clusters" are made up of 3-4 nodes at most. Since zfsonlinux was declared "stable", I delegate the volume and share management to ZFS and build things on top of that (either shared storage using Gluster, or plain local directories for everything else). This works nicely for off-site backups via zfs send/receive.
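For anyone who has not used it, the send/receive part is basically just this (pool, dataset and host names are placeholders):

    # first full copy
    zfs snapshot tank/vms@weekly-1
    zfs send tank/vms@weekly-1 | ssh backuphost zfs receive backup/vms

    # later, only the changes since the previous snapshot
    zfs snapshot tank/vms@weekly-2
    zfs send -i tank/vms@weekly-1 tank/vms@weekly-2 | ssh backuphost zfs receive backup/vms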

On one site I have KVM machines running on top of native ZFS exported via NFS from a Solaris host. As this is a performance bottleneck and also a single point of failure, I'm wondering if it would be better to go with Gluster or Ceph.
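For context, the export itself is nothing more than the usual sharenfs property on the dataset (the dataset name below is a placeholder), so everything depends on that one host staying up:

    zfs set sharenfs=on tank/vmstore
    zfs get sharenfs tank/vmstore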
 
Hi,

I am also interested in either zfsonlinux + Gluster or zfsonlinux + Ceph.
I have seen a screencast about both in which it was stated that Gluster was further along with optimisations than Ceph.

Regards,

Dirk Adamsky
 
Welcome aboard. The thing with Ceph is that it seems to be the "promised land of the future". However, all the benchmarks I see do not take into account a real cluster with several nodes connected and running some real load. All you get is a single OSD running against a single client with some concurrency.
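What I mean is that most published numbers come from something like a single rados bench run (pool name and parameters here are just an example):

    # 60-second write benchmark against pool "testpool", 16 concurrent operations
    rados -p testpool bench 60 write -t 16

which says very little until you run it from several clients at once against a cluster that is also serving real VMs.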

I have no idea how it will perform in the real world (and I think this is the main reason that symmcom is running an in-house beta and benchmark tests with real users).

Also, zfsonlinux might not be the perfect match for Ceph. All benchmarks concentrate on ext4 vs btrfs vs XFS right now. For RBD (which is the way Proxmox uses it, as I understand it) the consensus is that either btrfs or XFS will do (with XFS being preferred).
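If you do go with XFS under the OSDs, as far as I know it only takes a couple of lines in ceph.conf before the OSDs are created (option names from memory, double-check them against your release):

    [osd]
    osd mkfs type = xfs
    osd mount options xfs = rw,noatime,inode64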
 
IMHO, ZFS on top of Ceph or Gluster is a disaster waiting to happen.

Couldn't agree more. ZFS on Ceph/Gluster, or Ceph/Gluster on ZFS, adds way too many layers to be used in any real environment. They are all separate technologies with their own agendas and should be left alone that way. Mixing Ceph, Gluster and ZFS gives no real added advantage.

Yes, CEPH does not have the world's best I/O performance, but it does have the strength of redundancy and ease of setup/monitoring. I would not call it the Promised Land of the future, because all these technologies apply to specific situations. There is no one-solution-fits-all. In a real enterprise situation, I will take zero downtime over insane I/O performance any day. If any VM needs insane I/O performance, I can always put that VM on SSDs in CEPH and get super fast I/O. So in that sense CEPH gives me the best of both worlds: redundancy and speed in the same cluster. CEPH pools allow me to create tier-based storage for multi-tier customers. In less than 6 seconds I can see the performance and health of the entire CEPH storage cluster. Yes, the 6 seconds was timed with a stopwatch. :) As an admin this is a blessing from the promised land.
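The SSD tier is nothing exotic either. Assuming a CRUSH rule (say with id 1) that only selects the SSD-backed OSDs has already been defined in the CRUSH map, it is roughly:

    # pool with 128 placement groups, then pin it to the SSD-only CRUSH rule
    ceph osd pool create ssd-pool 128 128
    ceph osd pool set ssd-pool crush_ruleset 1

and the VMs that need the speed simply get their disks from that pool.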

Sent from my ASUS Transformer Pad TF700T using Tapatalk
 
symmcom,

care to tell us what type of hardware you're running your cluster on?

Thank you.
 

jinjer, the hardware is the same as you already saw in the posting about the CEPH large file. The Gluster setup is on just one single node. Since I just want to test Proxmox performance with Gluster, only one node is set up.
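For anyone wanting to repeat the test, a one-node, one-brick volume is enough (hostname and path are placeholders; "force" may be needed if the brick sits on the root filesystem):

    gluster volume create testvol node1:/export/brick1 force
    gluster volume start testvol
    gluster volume info testvol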

Sent from my ASUS Transformer Pad TF700T using Tapatalk
 
