Ceph cluster failed

oscart

New Member
Feb 16, 2021
Hello All,

After a local-lvm datastore filled up with some backups and I deleted them, our Ceph cluster went unresponsive, with the following status:

root@proxmox1:~# ceph -s
  cluster:
    id:     155d5b61-8198-434b-b29d-7e6edcf8e773
    health: HEALTH_WARN
            1 filesystem is degraded
            1 MDSs report slow metadata IOs
            mon proxmox3 is low on available space
            1 osds down
            2 hosts (3 osds) down
            Reduced data availability: 118 pgs inactive
            Degraded data redundancy: 177023/279384 objects degraded (63.362%), 45 pgs degraded, 129 pgs undersized
            28 pgs not deep-scrubbed in time
            18 daemons have recently crashed
            2 slow ops, oldest one blocked for 1632 sec, mon.proxmox2 has slow ops

  services:
    mon: 3 daemons, quorum proxmox1,proxmox2,proxmox3 (age 9m)
    mgr: proxmox1(active, since 25m), standbys: proxmox2, proxmox3
    mds: 1/1 daemons up, 2 standby
    osd: 5 osds: 2 up (since 67m), 3 in (since 66s); 13 remapped pgs

  data:
    volumes: 0/1 healthy, 1 recovering
    pools:   5 pools, 129 pgs
    objects: 93.13k objects, 360 GiB
    usage:   375 GiB used, 2.5 TiB / 2.9 TiB avail
    pgs:     91.473% pgs not active
             177023/279384 objects degraded (63.362%)
             11620/279384 objects misplaced (4.159%)
             73 undersized+peered
             43 undersized+degraded+peered
             11 active+undersized+remapped
             2 undersized+degraded+remapped+backfilling+peered

  io:
    recovery: 213 MiB/s, 53 objects/s


Could anyone help me? I don't know what to do.
 
mon proxmox3 is low on available space

You need to free some disk space on proxmox3's root filesystem (/).
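A quick way to see what is eating the space (paths assume a default Proxmox/Ceph layout, and the journal size cap below is just an example value):

```shell
# On proxmox3: how full is the root filesystem?
df -h /
# Usual suspects on a hyper-converged node: Ceph logs and the mon store
du -sh /var/log/ceph /var/lib/ceph/mon
# Trim the systemd journal to reclaim space
journalctl --vacuum-size=100M
# Optionally compact the monitor's store as well
ceph tell mon.proxmox3 compact
```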

osd: 5 osds: 2 up (since 67m), 3 in (since 66s); 13 remapped pgs
1 osds down
2 hosts (3 osds) down
You have 3 OSD services down; you need to restart them.
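Something along these lines should do it; the OSD ID 3 is just a placeholder, use the IDs that `ceph osd tree` actually reports as down:

```shell
# See which OSD IDs are down and on which host they live
ceph osd tree | grep down
# On the host that owns the down OSD (ID 3 is an example):
systemctl status ceph-osd@3
systemctl restart ceph-osd@3
# If it dies again, check the unit log for the reason
journalctl -u ceph-osd@3 -n 50 --no-pager
```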


18 daemons have recently crashed
It seems that you have multiple OSD daemon crashes (maybe out of memory, OOM killer?).
You can check that with: ceph crash ls
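For example (the `<crash-id>` is a placeholder for an ID taken from the list):

```shell
# List recent crashes, then inspect one for a backtrace
ceph crash ls
ceph crash info <crash-id>
# Check each node for OOM-killer activity
dmesg -T | grep -i 'killed process'
# Once you have dealt with them, archive the crashes so the warning clears
ceph crash archive-all
```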