Ceph 2+4 layout considerations

Alphaphi

Oct 8, 2024
Hi y'all,

I'm thinking about creating a Ceph pool with an EC 2+4 profile. Despite intensive Google research, I could not find any reports of experience with that layout.

My idea is this:

The Ceph cluster is spread across two fault domains (latency < 1 ms between them; 40 disks on each side, all NVMe SSDs, plenty of CPU and RAM). With a 2+4 layout I end up with 6 shards, three in each fault domain. If I lose one side, the surviving side still holds 3 shards; since only k=2 shards are needed to reconstruct the data, there is still some redundancy left.
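For concreteness, I imagine setting it up roughly like this (profile and pool names are mine; placing exactly three shards per room would need a custom CRUSH rule, sketched below under the assumption that the CRUSH map defines a `datacenter` bucket type):

```shell
# EC profile: k=2 data shards, m=4 coding shards, shards on separate hosts
ceph osd erasure-code-profile set ec24 k=2 m=4 crush-failure-domain=host

# Pool using that profile (PG count is just an example)
ceph osd pool create ecpool 128 erasure ec24

# Sketch of a CRUSH rule pinning 3 shards to each of 2 datacenters:
#   rule ec24_rule {
#       type erasure
#       step take default
#       step choose indep 2 type datacenter
#       step chooseleaf indep 3 type host
#       step emit
#   }
```

I'd then point the pool at that rule with `ceph osd pool set ecpool crush_rule ec24_rule`, if I'm reading the docs right.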

The storage efficiency is 33% (k/(k+m) = 2/6), which is better than replication with size=4 (2 copies on each side, 25% efficiency).
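Here's the quick arithmetic behind those numbers, in case I'm fooling myself:

```python
# Usable-capacity fractions for EC vs. replication (my own numbers).

def ec_efficiency(k: int, m: int) -> float:
    """Fraction of raw capacity usable with an EC k+m profile."""
    return k / (k + m)

def replica_efficiency(size: int) -> float:
    """Fraction of raw capacity usable with size-way replication."""
    return 1 / size

print(f"EC 2+4: {ec_efficiency(2, 4):.0%}")    # 33%
print(f"size=4: {replica_efficiency(4):.0%}")  # 25%
```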

I haven't found anything from anyone who has done this, or who has used any setup where k < m.

Is my idea good, bad, absurd? Appreciate any comments.