Hello,
I have a three-server cluster with Ceph as shared storage.
We were trying to create a Ceph pool using a device class, but during that activity a server rebooted and the cluster now shows a health warning. I am new to Ceph pools.
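For context, the commands we ran were roughly the following (the CRUSH rule names and pg counts here are from memory, so please treat them as illustrative rather than exact):

ceph osd crush rule create-replicated ssd-rule default host ssd
ceph osd pool create SSD-POOL 128 128 replicated ssd-rule
ceph osd crush rule create-replicated hdd-rule default host hdd
ceph osd pool create HDD-1TB 128 128 replicated hdd-rule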
The following output is included for your reference.
ceph -s
  cluster:
    id:     47061c54-d430-47c6-afa6-952da8e88877
    health: HEALTH_WARN
            Reduced data availability: 143 pgs inactive, 15 pgs incomplete, 128 pgs stale
            Degraded data redundancy: 102410/425571 objects degraded (24.064%), 128 pgs degraded, 128 pgs undersized
            139 slow ops, oldest one blocked for 92053 sec, daemons [osd.3,osd.4,osd.5] have slow ops.

  services:
    mon: 3 daemons, quorum 172,171,173 (age 26h)
    mgr: 172(active, since 7w), standbys: 173, 171
    mds: 1/1 daemons up
    osd: 6 osds: 6 up (since 25m), 6 in (since 25m)

  data:
    volumes: 1/1 healthy
    pools:   6 pools, 465 pgs
    objects: 141.86k objects, 554 GiB
    usage:   1.1 TiB used, 6.9 TiB / 8.0 TiB avail
    pgs:     30.753% pgs not active
             102410/425571 objects degraded (24.064%)
             322 active+clean
             128 stale+undersized+degraded+peered
             15  incomplete
ceph osd pool ls detail
pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 26 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr
pool 3 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode on last_change 54 flags hashpspool stripe_width 0 application cephfs
pool 4 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 55 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 5 'Storage2' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode off last_change 796 flags hashpspool,selfmanaged_snaps stripe_width 0 target_size_bytes 322122547200 application rbd
pool 7 'SSD-POOL' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode on last_change 1118 flags hashpspool,selfmanaged_snaps stripe_width 0 target_size_bytes 16106127360000 application rbd
pool 8 'HDD-1TB' replicated size 3 min_size 2 crush_rule 2 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode on last_change 1116 flags hashpspool stripe_width 0 target_size_bytes 751619276800 application rbd
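If it helps with diagnosis, I can also share the CRUSH rules referenced by SSD-POOL and HDD-1TB (crush_rule 1 and 2 above), which I can pull with:

ceph osd crush rule dump
ceph osd tree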
Please help me understand Ceph and point me to a better way to manage it.