[SOLVED] Increase CEPH Replication during operation

Apr 17, 2023
Hello,

I have a cluster of 9 nodes with the default CRUSH map, a replication factor (size) of 3, and a min_size of 2.

Is there anything that speaks against increasing size to 5 and min_size to 3 during live operation, in order to tolerate the failure of more individual nodes?
Enough disk space is available, and I am aware of the short-term I/O impact and the additional network load from the rebalancing.
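If I understand it right, the change itself would just be the usual pool settings, along these lines (pool name is only a placeholder for my actual pool):

    ceph osd pool set <pool-name> size 5
    ceph osd pool set <pool-name> min_size 3

After that I would expect Ceph to start backfilling the additional replicas on its own.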

What are your experiences with this on multiple TB of data?

thanks