Hello all,
Quick question.
So, I know that Proxmox VE includes support for both Ceph and GlusterFS... however, I get the impression (and correct me if I'm wrong on this) that Ceph is being pushed as the de facto choice for HA clusters needing shared storage.
Red Hat, however, seems to favor GlusterFS for use cases like this, with Ceph being more suited to large object-store systems in an OpenStack deployment.
Is there any particular reason for this?
I'm just curious, and would love to hear from anyone who has had positive or negative experiences with either Ceph or GlusterFS in a shared-storage / cluster setup...
...and while I have your ear, I'd also LOVE to hear from anyone who has experience or knowledge regarding my two absolute DREAM setups.
#1 - I have long used ZFS over NFS for storing both my actual virtual machine images/VMDKs and their service storage (mail server mailstore/DB). The ability to create pools that stripe across multiple underlying mirrors (say, 5 separate vdevs of 3 mirrored drives each) allows for much faster rebuild times when swapping out drives and much better overall performance than a traditional RAIDZ2 / RAID6 volume, and the copy-on-write design, full-volume data checksumming, and snapshotting make it as close to perfect for direct-attached storage as I can imagine. I LOVE it. However, I really, REALLY long for an actual SAN system... such as, say, Quantum's StorNext cluster filesystem (Apple markets it as "Xsan")... one that would allow for SAN-like no-single-point-of-failure setups BUT would ALSO have all of those ZFS-style features. I haven't come across one yet.
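For reference, the pool layout I'm describing is roughly this (just a sketch; the pool name, dataset name, and device names are placeholders):

    # Stripe across 5 three-way mirror vdevs; a failed disk only resilvers within its own mirror
    zpool create tank \
      mirror sda sdb sdc \
      mirror sdd sde sdf \
      mirror sdg sdh sdi \
      mirror sdj sdk sdl \
      mirror sdm sdn sdo

    # Dataset shared over NFS for the VM images
    zfs create tank/vmstore
    zfs set sharenfs=on tank/vmstore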
#2 - I really, REALLY can't stop longing for the day when macOS Sierra (or, as time moves forward, whatever the current macOS is) can run on Proxmox in a fully supported, solid, and stable manner, without requiring special workarounds. Contrary to popular belief, the EULA does NOT prohibit this. VMware's hypervisors (ESXi under vSphere, and Fusion) have supported this for nearly a decade now; you just need to be running on Apple hardware underneath to satisfy the licensing.
So anyway... I'd love to hear what any of you think regarding Ceph vs. GlusterFS, or about my two dream setups.