> I like the idea of the CEPH drives having extra redundancy.

Ceph is the redundancy! At least it should be.
> One more note: for Ceph to work correctly you need a cluster of at least "3" nodes!

Ahh yes, I did note that. I come from a VMware vSAN background. We are planning on launching a 4-node cluster for extra redundancy. Does this mean that we could configure 2 physical disks in each server, with the ability to lose a single disk per node?
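For reference, that failure mode comes down to the pool's replica count and the CRUSH failure domain, not the number of disks per node. A minimal sketch, assuming a replicated pool named vm-pool (a placeholder) and the stock replicated_rule, which places each copy on a different host, so two disks in one node never hold two copies of the same object:

```
# Keep 3 copies of every object; keep serving I/O as long as 2 are online.
ceph osd pool set vm-pool size 3
ceph osd pool set vm-pool min_size 2

# Verify the rule's failure domain is "host", not "osd".
ceph osd crush rule dump replicated_rule
```

With that layout, losing one disk per node still leaves replicas on the other hosts, and Ceph rebalances onto the surviving OSDs.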
> Does this mean that we could configure 2 physical disks in each server with the ability to lose a single disk per node?
> I have seen that if the SSD disk that I use to save the Ceph DB breaks, it completely drops all the OSDs that are linked to that Ceph DB.

The short answer is: don't share a DB device across multiple OSDs unless you have a sufficiently large deployment. If the deployment is large enough, losing multiple OSDs on one node does not pose a significant risk.
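That coupling is created when the OSDs are built. A sketch of how a shared DB device typically ends up backing several OSDs, assuming ceph-volume with BlueStore (all device paths are placeholders):

```
# /dev/sdb and /dev/sdc are HDD data disks; /dev/nvme0n1 is the shared
# SSD, pre-partitioned so each OSD gets its own DB slice.
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p2
```

If /dev/nvme0n1 fails, both OSDs lose their RocksDB metadata at once, which is exactly the all-OSDs-down behaviour described above.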
> What would be the procedure that must be carried out so that the OSDs are online?

If the DB device is truly dead, you'll need to wipe all the OSDs and recreate them. If it's alive and present, just rescan the LVMs and bring them back online (a reboot is the simplest way to accomplish this).
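A sketch of both paths without a reboot, assuming the OSDs were created with ceph-volume lvm (the OSD ID and device path are placeholders):

```
# DB device is alive: rescan LVM and reactivate every OSD recorded
# in the local LVM metadata.
vgscan
ceph-volume lvm activate --all

# DB device is dead: remove each affected OSD from the cluster,
# wipe its data disk, and recreate it. Example for OSD 3 on /dev/sdb:
ceph osd purge 3 --yes-i-really-mean-it
ceph-volume lvm zap /dev/sdb --destroy
```

After zapping, the disks can be recreated with new DB placements as shown earlier.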