I am very new to Ceph cluster configuration, but it looks like only one monitor daemon is actually running. I also have some scrubbing issues. This is what ceph -s returns:
  cluster:
    id:     82f0f1b6-c47d-48dd-af90-c1899a8faccc
    health: HEALTH_WARN
            Degraded data redundancy: 12/72 objects degraded (16.667%), 6 pgs degraded, 61 pgs undersized
            60 pgs not deep-scrubbed in time
            60 pgs not scrubbed in time

  services:
    mon: 1 daemons, quorum pvenode01 (age 5d)
    mgr: pvenode01(active, since 5d)
    mds: 1/1 daemons up
    osd: 3 osds: 3 up (since 2d), 3 in (since 2d); 100 remapped pgs

  data:
    volumes: 1/1 healthy
    pools:   3 pools, 161 pgs
    objects: 24 objects, 707 KiB
    usage:   113 MiB used, 2.5 TiB / 2.5 TiB avail
    pgs:     12/72 objects degraded (16.667%)
             12/72 objects misplaced (16.667%)
             100 active+clean+remapped
             55  active+undersized
             6   active+undersized+degraded
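In case it helps with diagnosis: on a setup like this, where all 3 OSDs appear to live on a single node (pvenode01), undersized/degraded PGs are often caused by the pool replica count versus the CRUSH failure domain (a default rule with a "host" failure domain cannot place 3 replicas on 1 host). These are the standard ceph commands I understand can be used to check that; the pool name cephfs_data below is just a placeholder, not one of my actual pool names:

```shell
# List all pools with their replication settings (size / min_size)
ceph osd pool ls detail

# Check the replica count of one pool ("cephfs_data" is a placeholder)
ceph osd pool get cephfs_data size

# Dump the CRUSH rules to see the failure domain each pool replicates across
ceph osd crush rule dump

# Show how OSDs are distributed across hosts, to confirm
# whether all three sit under the same node
ceph osd tree
```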