CEPH equivalent configuration for multiple hosts with local RAID

tawh

Active Member
Mar 26, 2019
I set up a 3-host Proxmox cluster where each host has 1x 10TB HDD. Ceph is configured in replication mode, so the effective storage in the cluster is 10TB.
As the IO performance is poor, I am thinking of replacing each 10TB disk with 3x 6TB disks. I want to maintain the same host redundancy as before, but also have RAID-5-like protection within each host. After the replacement, the effective cluster storage would be 2x 6TB = 12TB.

In Ceph, there is an erasure code configuration with the LRC plugin (https://docs.ceph.com/en/latest/rados/operations/erasure-code-lrc/), and I am wondering if the following
----------------------------------------------
k=2
m=1
l=3
crush-locality=host
crush-failure-domain=host
----------------------------------------------
is equivalent to having cluster storage with replication among all hosts while each host has RAID-5-like protection. Thanks
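For reference, this is roughly how I would create such a profile and an EC pool from the command line (the names "lrcprofile" and "lrcpool" are just placeholders I picked):
----------------------------------------------
# create the LRC erasure-code profile (names are placeholders)
ceph osd erasure-code-profile set lrcprofile \
    plugin=lrc \
    k=2 m=1 l=3 \
    crush-locality=host \
    crush-failure-domain=host

# create a pool that uses this profile
# (depending on the Ceph version, pg_num/pgp_num may need to be given explicitly)
ceph osd pool create lrcpool erasure lrcprofile
----------------------------------------------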
 
k=2 m=1 l=3
Only one OSD can die per data chunk; any subsequent failure may result in the loss of that chunk. Also, k+m needs to be a multiple of l.

In such a small setup, you will not gain much space, let alone performance. Best stick with the 3/2 pool size and add more OSDs. This way you can be sure that your data is safe in case of a node failure.
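For example, the 3/2 setting on a replicated pool can be checked and applied like this (the pool name "vm-pool" is only an example):
----------------------------------------------
# check the current replication settings (pool name is only an example)
ceph osd pool get vm-pool size
ceph osd pool get vm-pool min_size

# keep the recommended 3 replicas, with a minimum of 2 for I/O to continue
ceph osd pool set vm-pool size 3
ceph osd pool set vm-pool min_size 2
----------------------------------------------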
 
Thanks for your reply.
To verify my understanding: if I deploy this policy to 3 hosts and form a pool with 9 OSDs (as mentioned in Post #1), will Ceph replicate data to the other 2 hosts, or will it just pick an arbitrary host to store the data?
 
The policy is set at the pool level and includes all participating hosts. Since the crush-locality and crush-failure-domain are set to host, each chunk of a set will be placed on a different node.

Though my warning above still stands.
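If you want to see the placement for yourself, you can map a test object and check which OSDs (and hence hosts) it ends up on, roughly like this (pool and object names are placeholders):
----------------------------------------------
# write a small test object into the EC pool (names are placeholders)
rados -p lrcpool put testobj /etc/hostname

# show the placement group and the acting OSD set for that object
ceph osd map lrcpool testobj

# cross-check which host each of the listed OSDs belongs to
ceph osd tree
----------------------------------------------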
 
