Sharing Storage (Ceph or Gluster) for VMs in a 3-node scenario...

Been working with this full mesh 3-node Proxmox cluster for about a month or so.
I don't recommend this. If you ever need to add an additional node, have fun :)

Nice to hear. Use one NIC for the cluster network between the nodes, and the other 10 GbE NIC for the Ceph network. That is what I would do in your scenario.
I don't recommend using only single links; use LACP instead. I work in a datacenter where we run multiple PVE clusters and Ceph storage clusters, and we do not use single links: we use 2x 10GbE for all the traffic. We have had no issues with this setup to date, and if a stack member fails you still have the second link and no downtime. Keep in mind that if all your links are on one switch, your Ceph cluster will stop working the moment that switch fails.
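For reference, a 2x 10GbE LACP bond like that could look roughly like this in /etc/network/interfaces on a Proxmox node. This is only a sketch: the interface names eno1/eno2, the bridge name and the addresses are assumptions, and the switch ports need a matching LACP (802.3ad) configuration.

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3
        # both 10GbE ports aggregated into one LACP bond

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.11/24
        gateway 192.168.10.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        # bridge on top of the bond for VM/cluster traffic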
 
Nice to hear. Use one NIC for the cluster network between the nodes, and the other 10 GbE NIC for the Ceph network. That is what I would do in your scenario.
In my tests in our environment, I get up to 100K IOPS over the Ceph network storage.
But don't use any RAID for the 5x 300 GB SAS disks. An SSD per node for the journal could also improve the performance of the Ceph storage, if you choose Ceph instead of Gluster.
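If you go the journal-SSD route, creating the OSDs could look roughly like this. This is only a sketch: /dev/sdb as a data disk and /dev/sdc as the journal SSD are assumptions, and the exact pveceph options differ between PVE versions (newer releases use BlueStore with a DB/WAL device instead of a filestore journal).

# one OSD per SAS disk, journal on (a partition of) the shared SSD
# (option names vary with the PVE release)
pveceph createosd /dev/sdb -journal_dev /dev/sdc

# roughly equivalent with ceph-volume directly (filestore)
ceph-volume lvm create --filestore --data /dev/sdb --journal /dev/sdc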

best regards

The SAS disks are obviously configured as JBOD, direct-attached mode, no RAID whatsoever. I purchased 3 PERC H200 controllers on purpose.
I'm a little bit concerned about the fact that almost everyone says Ceph on only 3 nodes does not perform as well as Gluster.
Anyway, I did a test with Ceph using all the fiber NICs for the Ceph storage (I bonded the 2 channels on the switch with LACP for all 3 nodes) and it seems to work very well.
Live migration of a VM from one node to another is fast as hell without losing a single ping...
Do you think an SSD per node is mandatory? I will use this project for hosting KVM machines: internal development servers, maybe internal mail and other services (some Windows and Linux VMs). Very straightforward environment, no special superfast requirements here...
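For what it's worth, the same online migration can also be triggered from the CLI (a sketch; the VMID 100 and the node name pve2 are made up):

# live-migrate VM 100 to node pve2 while it keeps running
qm migrate 100 pve2 --online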

Anyway, before going to production I will also give Gluster a try, just for a quick comparison...
 
I don't recommend using only single links; use LACP instead. I work in a datacenter where we run multiple PVE clusters and Ceph storage clusters, and we do not use single links: we use 2x 10GbE for all the traffic. We have had no issues with this setup to date, and if a stack member fails you still have the second link and no downtime. Keep in mind that if all your links are on one switch, your Ceph cluster will stop working the moment that switch fails.

Sure, I will use LACP...
For the time being I have only one switch, which is clearly a single point of failure, I know that...
A second switch, however, is on the way...
How will I connect 3 dual NICs to 2 switches?
 
Yes, that's right, the live migration is very fast and without any downtime.
What do you want to do with an SSD? For the Proxmox OS, or for local replication jobs (ZFS only)? Or do you mean one SSD per node for the journal (Ceph)?
 
How will I connect 3 dual NICs to 2 switches?
I hope you bought switches with stacking and multi-chassis link aggregation :)
Otherwise you need to interconnect the two switches with 4-8x 1GbE links and use active-backup bonding on the nodes.
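Without stacking/MC-LAG, an active-backup bond on each node could look roughly like this (a sketch; the interface names are assumptions, with one port cabled to each switch):

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode active-backup
        bond-primary eno1
        # eno1 goes to switch 1, eno2 to switch 2; only one link is active at a time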

So in a 3-node scenario Ceph is really not recommended?
I don't recommend a full mesh setup. You can go with Ceph and 3 nodes without any problems.
 
Yes, that's right, the live migration is very fast and without any downtime.
What do you want to do with an SSD? For the Proxmox OS, or for local replication jobs (ZFS only)? Or do you mean one SSD per node for the journal (Ceph)?

SSD only for journaling... But I don't know if it's a requirement or just advice in my little 3-node environment.
 
OK guys, I missed the part where it's the full mesh that is not recommended, not Ceph with only 3 nodes...
I'm 90% sure I will go with Ceph.
I will also give Gluster a try, just for fun :D
 
Hello everyone,

I'm just starting with Proxmox and I would like to replace 2 Hyper-V failover clusters with it. The Hyper-V clusters only have 2 nodes each and a NAS as shared storage between them (2 nodes per cluster). I know I can easily replicate that with Proxmox, but I don't like the idea of a single shared storage, so I'm looking to have 2 clusters of 2 nodes each using internal storage.

One cluster is 2x Dell R720 SFF and the other is 2x R710 LFF.

One cluster is for dev VMs and the other basically for a file server and a domain controller.

Would Gluster be a good solution for this scenario of 2+2 (since Ceph requires 3 nodes for failover to work)?

Many thanks in advance.
 
Just some other thoughts after a few very quick rough tests.

- Gluster is simpler but more tedious to configure: you format all the bricks, mount them all, and then create a Gluster volume on top (see the sketch after this list), while Ceph is much more assisted on the Proxmox platform during installation.
- On this 3-node configuration Gluster is effectively faster than Ceph. Live migration of a KVM VM is *really* smoking fast, maybe 2-3 times faster than with Ceph.
Ceph needs more tweaking to be as fast as Gluster, and maybe sacrificing some space and disks to host the journal on faster media.
In a small environment like this one, that can lead to a lack of resources.
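A rough outline of those Gluster steps, as a sketch only: the device name /dev/sdb, the /bricks path, the volume name "vmstore" and the node names node1-3 are all assumptions, and the peer probe / volume create commands are run from a single node.

# on every node: format and mount the brick
mkfs.xfs /dev/sdb
mkdir -p /bricks/vmstore
mount /dev/sdb /bricks/vmstore

# on one node: form the trusted pool and create a replica-3 volume
gluster peer probe node2
gluster peer probe node3
gluster volume create vmstore replica 3 node1:/bricks/vmstore/brick node2:/bricks/vmstore/brick node3:/bricks/vmstore/brick
gluster volume start vmstore

# add it as VM storage in Proxmox
pvesm add glusterfs gluster-vmstore --server node1 --server2 node2 --volume vmstore --content images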

About disk space and resources: my 15x 300 GB SAS disks produce 1.3 TB of available space in both tests, Ceph and GlusterFS.
That's about 1/3 of the raw capacity... Is that right, or am I missing something?
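For reference, the numbers do line up if both setups keep 3 replicas and the reported figure is actually TiB (a sanity check under those assumptions, not measured data):

15 x 300 GB = 4.5 TB raw, which is about 4.09 TiB
4.09 TiB / 3 replicas ≈ 1.36 TiB, displayed as roughly "1.3 TB"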
 
I know this post is a bit old; however, I want to let you know that I've used Gluster with oVirt for about 5 years in production: 3-node clusters with distributed replicated volumes (replica 3, no arbiter).

I've switched to using Proxmox+Gluster recently, and yes, if you want replicated storage you need increments of 3 bricks within Gluster. I have had no issues so far; my VMs always come back online, no matter the scenario (network issue, power issue, HW failure, etc.).

However, your scenario might require other solutions (you just need to plan properly).

So far for me:
- Snapshots ok
- VM Migrations ok
- VM Import ok
- VM replication ok
- VM resource increase ok
- Recovery ok
- Etc ....

Again, this is my own experience, and GlusterFS might not work for all scenarios.

Setting this up was fast and easy and documentation from Proxmox is great.

If you have any suggestion, please let me know.
 
