I originally had 4 nodes running Ceph, so one node could fail. Now that node has failed and has been removed.
Is it possible to establish redundancy with the remaining 3 nodes?
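I assume the answer depends on the pool replication settings; a minimal check, assuming the default replicated pools (size 3 / min_size 2) and a placeholder pool name, would be something like:
Code:
# show replica count and minimum replicas for a pool (pool name is a placeholder)
ceph osd pool get <pool> size
ceph osd pool get <pool> min_size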
From ceph -s:
Code:
  data:
    volumes: 2/2 healthy
    pools:   7 pools, 193 pgs
    objects: 139.48k objects, 538 GiB
    usage:   1.6 TiB used, 1.2 TiB / 2.8 TiB avail
    pgs:     193 active+clean
How do I read the numbers on "usage"?
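My own rough reading, assuming the default replication factor of 3: the 538 GiB of object data is stored three times, so roughly 538 GiB × 3 ≈ 1.6 TiB of raw capacity is used, and 1.2 TiB of the 2.8 TiB total raw capacity across all OSDs is still free (1.6 TiB + 1.2 TiB = 2.8 TiB). Is that the right way to read it?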