Hey all,
I'm having trouble clearing some warnings from my ceph cluster.
1.)
HEALTH_WARN: Reduced data availability: 1 pg inactive
pg 1.0 is stuck inactive for 5m, current state unknown, last acting []
2.)
HEALTH_WARN: 2 slow ops, oldest one blocked for 299 sec, daemons [osd.0,osd.1] have slow ops.
___________________________________________
I have restarted the OSDs, monitors, and managers.
I have also tried: ceph pg repair 1.0
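In case it helps, these are the other diagnostics I've been running while poking at this (commands only; happy to paste output for any of them):

# Full text of the current health warnings, including the stuck PG
ceph health detail

# List PGs stuck inactive (pg 1.0 should be the only one)
ceph pg dump_stuck inactive

# Where CRUSH thinks the PG should map
ceph pg map 1.0

# Query the PG directly (I gather this can hang when no OSD owns the PG)
ceph pg 1.0 query

# Inspect the slow ops on the two OSDs named in the warning
# (run on the node hosting each OSD)
ceph daemon osd.0 dump_ops_in_flight
ceph daemon osd.0 dump_historic_slow_ops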
___________________________________________
Here is ceph status:
ceph status
  cluster:
    id:     34d69689-567b-4dac-8b75-382b7aa38dbe
    health: HEALTH_WARN
            Reduced data availability: 1 pg inactive
            2 slow ops, oldest one blocked for 400 sec, daemons [osd.0,osd.1] have slow ops.

  services:
    mon: 3 daemons, quorum Lab-VMSvr03,Lab-VMSvr02,Lab-VMSvr01 (age 10m)
    mgr: Lab-VMSvr03(active, since 7m), standbys: Lab-VMSvr01, Lab-VMSvr02
    mds: 1/1 daemons up, 2 standby
    osd: 3 osds: 3 up (since 6m), 3 in (since 9h)

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 169 pgs
    objects: 144.68k objects, 564 GiB
    usage:   1.6 TiB used, 9.3 TiB / 11 TiB avail
    pgs:     0.592% pgs unknown
             168 active+clean
             1   unknown

  io:
    client: 2.7 KiB/s rd, 158 KiB/s wr, 0 op/s rd, 14 op/s wr
___________________________________________
Apparently PG 1.0 belongs to my .mgr pool? As I understand it, the number before the dot in a PG ID is the pool ID, and pool 1 is .mgr:
ceph osd lspools
1 .mgr
3 ceph-vm-disks
14 ceph-files_data
15 ceph-files_metadata
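From what I've read, the last resort for a PG stuck permanently in "unknown" is to have the mons recreate it empty. I have not run this yet; I'm assuming the mgr can regenerate the contents of the .mgr pool afterwards, but I'd like a sanity check first:

# Last resort (NOT run yet): recreate the lost PG as empty.
# Anything stored in pg 1.0 is discarded; I'm assuming the active
# mgr repopulates the .mgr pool on its own afterwards.
ceph osd force-create-pg 1.0 --yes-i-really-mean-it

Does that look like the right next step, or is there something less destructive I should try first?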