Question about CEPH Topology

bryambalan

Hi everyone,

I would like some help regarding CEPH topology.

I have the following environment:
- 5x Servers (PVE01,02,03,04,05)
- PVE01, 02, and 03 in one datacenter; PVE04 and 05 in another datacenter.
- 6x Disks in each (3x HDD and 3x SSD)
- All of the same capacity/model.

I would like to create a topology using CEPH storage, where I can lose up to 2 servers.

Here comes the first question: is it possible for my storage to work with only 2 nodes?

Another question: should I change anything in my CrushMap?

Initially, I only created a replicated rule to differentiate the HDD disks from the SSD disks.

I will then create two pools:
Storage-SSD and Storage-HDD.
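
For reference, the device-class rules and the two pools would be created roughly like this (a sketch; the rule names and PG counts are just placeholders, only the pool names are from my plan above):

ceph osd crush rule create-replicated replicated-ssd default host ssd
ceph osd crush rule create-replicated replicated-hdd default host hdd

# pools bound to their device-class rule (PG counts are only examples)
ceph osd pool create Storage-SSD 128 128 replicated replicated-ssd
ceph osd pool create Storage-HDD 128 128 replicated replicated-hdd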

What size/min_size do I have to set in order to be able to lose up to 2 servers?

Here comes the first question: is it possible for my storage to work with only 2 nodes?
No.

If Ceph needs to keep working with only these two nodes alive, then ALL of your data needs to be available on these two nodes. A simple fact, isn't it?

You would need to set "size=5, min_size=2" to achieve this. With that you get 5 - 2 = 3, so three nodes may fail.
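
If the pools already exist, setting that would look roughly like this (a sketch, reusing the pool names from your post):

ceph osd pool set Storage-SSD size 5
ceph osd pool set Storage-SSD min_size 2
ceph osd pool set Storage-HDD size 5
ceph osd pool set Storage-HDD min_size 2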

(Actually I am not sure if that would work at all as the majority of systems is gone in that scenario. I am not a Ceph specialist.)


Good luck!

PS: Split brain is a different beast for PVE and for Ceph. Do not assume the same behavior for both independent software stacks.
 
Short response: you need 3 datacenters for the monitors, and you can lose 1 DC (as you always need quorum for the monitors).

The OSDs can be located in 2 datacenters (replica 4, for example, with 2 copies in each DC).
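
If you go that way, the CRUSH rule could look roughly like this (a sketch; it assumes datacenter buckets already exist above the hosts in your CRUSH map, and the rule id is arbitrary). You would add it to the decompiled CRUSH map (ceph osd getcrushmap / crushtool -d, then crushtool -c / ceph osd setcrushmap) and set the pool to size=4:

rule replicated_2dc {
    id 2
    type replicated
    # pick 2 datacenters, then 2 hosts in each -> 4 replicas, 2 per DC
    step take default
    step choose firstn 2 type datacenter
    step chooseleaf firstn 2 type host
    step emit
}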