Currently, I have one disk in each server (three servers total) used for Ceph shared storage. I noticed that if 2 out of the 3 disks stop working, the whole Ceph cluster stops working as well.
I am not sure whether the same (n/2 + 1) quorum formula applies here as well.
This is concerning because we are using CephFS too.
Does that mean I need 6 disks to survive a 2-disk failure, or just 4?
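To make clear which formula I mean, here is a minimal sketch of my reasoning. The size=3 value is just my assumption about a replicated pool with one OSD per server; I have not confirmed the actual pool settings on our cluster:

```python
# Minimal sketch of the arithmetic I am asking about (assumed values,
# not read from our cluster).

def majority(n: int) -> int:
    """The (n/2 + 1) formula I am referring to."""
    return n // 2 + 1

size = 3                              # one OSD (disk) per server, 3 servers
survivors = size - 2                  # after 2 disk failures

print(majority(size))                 # 2
print(survivors)                      # 1 -> fewer than 2, which seems to
                                      #      match the outage I am seeing
```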