Hello, I'm testing a 4-node Ceph cluster:
Each node has two SATA HDDs and two SSDs for journals.
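For reference, each OSD was prepared roughly like this with ceph-disk, pairing each HDD with a journal partition on an SSD (device names here are illustrative, not my actual ones):
-----------------------------------------------------------------------------------
# one HDD as the data device, one SSD partition as its journal
ceph-disk prepare /dev/sdb /dev/sdd1
ceph-disk prepare /dev/sdc /dev/sde1
-----------------------------------------------------------------------------------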
-----------------------------------------------------------------------------------
ceph -w
cluster 1126f843-c89b-4a28-84cd-e89515b10ea2
health HEALTH_OK
monmap e4: 4 mons at {0=10.10.10.1:6789/0,1=10.10.10.2:6789/0,2=10.10.10.3:6789/0,3=10.10.10.4:6789/0}
election epoch 150, quorum 0,1,2,3 0,1,2,3
osdmap e359: 8 osds: 8 up, 8 in
flags sortbitwise,require_jewel_osds
pgmap v28611: 512 pgs, 1 pools, 15124 MB data, 3869 objects
45780 MB used, 29748 GB / 29793 GB avail
512 active+clean
client io 817 B/s wr, 0 op/s rd, 0 op/s wr
-----------------------------------------------------------------------------------
I tested failing 2 nodes and the cluster went down.
With a 3/2 pool (size=3, min_size=2), how many OSDs can I lose? (How can I calculate that?)
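For clarity, by 3/2 I mean the replication settings below (I'm assuming the default "rbd" pool here, since the cluster only has one pool):
-----------------------------------------------------------------------------------
ceph osd pool set rbd size 3      # keep 3 replicas of each object
ceph osd pool set rbd min_size 2  # serve I/O only while >= 2 replicas are up
-----------------------------------------------------------------------------------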