Hey guys,
I have a Ceph cluster with 3 OSDs across 3 nodes, 1 OSD per node. 2 of the OSDs went offline and won't come back (pretty sure the disks died). 1 OSD is still alive along with its monitor. I can see the data from ceph -s:
Code:
  cluster:
    id:     c42a9057-9b43-4e68-afe8-d2cac60a8a6c
    health: HEALTH_WARN
            mon SpaceDewdy3 is low on available space
            1 osds down
            2 hosts (2 osds) down
            Reduced data availability: 33 pgs inactive
            Degraded data redundancy: 236254/354381 objects degraded (66.667%), 33 pgs degraded, 33 pgs undersized
            33 pgs not deep-scrubbed in time
            33 pgs not scrubbed in time

  services:
    mon: 3 daemons, quorum SpaceDewdy3,tasty2,capstone (age 29m)
    mgr: SpaceDewdy3(active, since 45h), standbys: tasty2
    osd: 3 osds: 1 up (since 29m), 2 in (since 37m)

  data:
    pools:   2 pools, 33 pgs
    objects: 118.13k objects, 457 GiB
    usage:   457 GiB used, 2.3 TiB / 2.7 TiB avail
    pgs:     100.000% pgs not active
             236254/354381 objects degraded (66.667%)
             33 undersized+degraded+peered
And from ceph osd tree:
Code:
ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME             STATUS  REWEIGHT  PRI-AFF
-1         8.18697  root default
-3         2.72899      host SpaceDewdy3
 2    hdd  2.72899          osd.2           down         0  1.00000
-7         2.72899      host capstone
 0    hdd  2.72899          osd.0             up   1.00000  1.00000
-9               0      host neocapstone
-5         2.72899      host tasty2
 1    hdd  2.72899          osd.1           down   1.00000  1.00000
When I try to look at the pool with rbd ls -l Ceph-3t, it hangs, so I think it might be corrupted. I just want to get the data off the functioning OSD, since I used it as a file share on my CT. Would that be possible?
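
In case more output would help, these are the commands I can run on the node with the surviving OSD and post here. I'm only guessing at what's relevant, so let me know if other output would be more useful:

Code:
# pool settings (size/min_size) and which pools exist
ceph osd pool ls detail

# detail on the 33 inactive/undersized PGs
ceph health detail
ceph pg dump_stuck inactive

# whether the dead OSDs' disks are still visible to ceph-volume
ceph-volume lvm list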