Ceph goes read-only when only 1 of 3 nodes goes down

Meowcat285

Member
May 28, 2020
When I was testing what happens if a node goes down, Ceph went read-only. I thought that 3 replicas across 3 nodes should allow one node to go down without interruption. I do get a "Reduced data availability: 41 pgs inactive" warning, but again, I thought having three replicas should let one node go down without issues. Am I doing something wrong? I've put my config below.


C-like:
[global]
     auth_client_required = cephx
     auth_cluster_required = cephx
     auth_service_required = cephx
     cluster_network = 10.0.20.150/24
     fsid = ea0e8911-cf17-4986-85a8-e5b0f5b5b53c
     mon_allow_pool_delete = true
     mon_host = 10.0.20.150 10.0.20.133 10.0.20.134
     ms_bind_ipv4 = true
     ms_bind_ipv6 = false
     osd_pool_default_min_size = 2
     osd_pool_default_size = 2
     public_network = 10.0.20.150/24

[client]
     keyring = /etc/pve/priv/$cluster.$name.keyring

[mds]
     keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mds.pve1]
     host = pve1
     mds_standby_for_name = pve

[mds.pve2]
     host = pve2
     mds_standby_for_name = pve

[mds.pve3]
     host = pve3
     mds_standby_for_name = pve

[mon.pve1]
     public_addr = 10.0.20.150

[mon.pve2]
     public_addr = 10.0.20.133

[mon.pve3]
     public_addr = 10.0.20.134
 
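For reference, the size/min_size each pool is actually using can be checked on any node; the [global] defaults above only apply to pools created after they were set. A rough sketch of such a check (it assumes the ceph CLI and the admin keyring are available on the node, and the exact JSON field names can differ slightly between Ceph releases):

Python:
# Print the replication settings each pool is actually using.
# Assumes the ceph CLI and admin keyring are available on this node;
# JSON field names may vary slightly between Ceph releases.
import json
import subprocess

def pool_replication():
    out = subprocess.check_output(
        ["ceph", "osd", "pool", "ls", "detail", "--format", "json"]
    )
    for pool in json.loads(out):
        name = pool.get("pool_name", pool.get("pool"))
        print(f'{name}: size={pool.get("size")} min_size={pool.get("min_size")}')

if __name__ == "__main__":
    pool_replication()

The same numbers are in the plain output of ceph osd pool ls detail; the script just pulls out the two relevant fields.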
Ceph requires three nodes to work, as has been mentioned many times on this forum. If you want Ceph with redundancy, you need more than three nodes.
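For troubleshooting, the PGs behind the "Reduced data availability" warning can be listed while a node is down. A minimal sketch, assuming it runs on a surviving node with the ceph CLI and admin keyring (the commands are standard, the wrapper is just for convenience):

Python:
# Show the cluster health detail and the PGs currently stuck inactive.
# The part of a PG id before the dot (e.g. the "2" in 2.1f) is the pool id,
# which "ceph osd pool ls detail" maps back to a pool name.
import subprocess

def show(cmd):
    print("$ " + " ".join(cmd))
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)

if __name__ == "__main__":
    show(["ceph", "health", "detail"])
    show(["ceph", "pg", "dump_stuck", "inactive"])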