I used to have 4 nodes running Ceph, sized so that one node could fail without losing redundancy. Now one node has failed and has been removed from the cluster.
Is it possible to re-establish redundancy with the remaining 3 nodes?
From ceph -s:
  data:
    volumes: 2/2 healthy
    pools:   7 pools, 193 pgs
    objects: 139.48k objects, 538 GiB
    usage:   1.6...
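
In case it helps, these are the commands I was going to use to check the current replication settings (I'm assuming the pools' size/min_size values are what governs redundancy here, and <pool> below is a placeholder for each pool name):

    ceph osd pool ls                    # list all pools
    ceph osd pool get <pool> size       # replica count for the pool
    ceph osd pool get <pool> min_size   # minimum replicas required for I/O
    ceph osd tree                       # confirm which OSDs/hosts remain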