ceph slow_ops occurred, but OSDs are fast

cola16

I got 'HEALTH_WARN: 20 slow ops, oldest one blocked for 84 sec, daemons [osd.57,osd.58,osd.59,osd.66,osd.67,osd.68,osd.77,osd.79] have slow ops.'
However, the OSD latency is very low, just ~2 ms.
What should I check?

Code:
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# devices
device 11 osd.11 class clay
device 12 osd.12 class ssd
device 13 osd.13 class clay
device 14 osd.14 class ssd
device 15 osd.15 class clay
device 16 osd.16 class clay
device 21 osd.21 class clay
device 22 osd.22 class ssd
device 23 osd.23 class clay
device 24 osd.24 class ssd
device 25 osd.25 class clay
device 26 osd.26 class clay
device 31 osd.31 class clay
device 32 osd.32 class ssd
device 33 osd.33 class clay
device 34 osd.34 class ssd
device 35 osd.35 class clay
device 36 osd.36 class clay
device 55 osd.55 class clay
device 56 osd.56 class destroy
device 57 osd.57 class clay
device 58 osd.58 class destroy
device 59 osd.59 class clay
device 64 osd.64 class clay
device 65 osd.65 class destroy
device 66 osd.66 class clay
device 67 osd.67 class clay
device 68 osd.68 class clay
device 76 osd.76 class clay
device 77 osd.77 class clay
device 78 osd.78 class destroy
device 79 osd.79 class clay

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 zone
type 10 region
type 11 root

# buckets
host pve1 {
    id -3        # do not change unnecessarily
    id -32 class clay        # do not change unnecessarily
    id -13 class ssd        # do not change unnecessarily
    id -15 class destroy        # do not change unnecessarily
    # weight 1.84119
    alg straw2
    hash 0    # rjenkins1
    item osd.12 weight 0.93149
    item osd.14 weight 0.90970
}
host pve2 {
    id -5        # do not change unnecessarily
    id -33 class clay        # do not change unnecessarily
    id -14 class ssd        # do not change unnecessarily
    id -17 class destroy        # do not change unnecessarily
    # weight 1.86298
    alg straw2
    hash 0    # rjenkins1
    item osd.22 weight 0.93149
    item osd.24 weight 0.93149
}
host pve3 {
    id -6        # do not change unnecessarily
    id -34 class clay        # do not change unnecessarily
    id -7 class ssd        # do not change unnecessarily
    id -18 class destroy        # do not change unnecessarily
    # weight 1.86298
    alg straw2
    hash 0    # rjenkins1
    item osd.32 weight 0.93149
    item osd.34 weight 0.93149
}
pod system {
    id -2        # do not change unnecessarily
    id -66 class clay        # do not change unnecessarily
    id -65 class ssd        # do not change unnecessarily
    id -19 class destroy        # do not change unnecessarily
    # weight 5.56714
    alg straw2
    hash 0    # rjenkins1
    item pve1 weight 1.84119
    item pve2 weight 1.86298
    item pve3 weight 1.86298
}
host hec11-pve1 {
    id -28        # do not change unnecessarily
    id -64 class clay        # do not change unnecessarily
    id -63 class ssd        # do not change unnecessarily
    id -20 class destroy        # do not change unnecessarily
    # weight 7.36475
    alg straw2
    hash 0    # rjenkins1
    item osd.13 weight 1.86298
    item osd.11 weight 1.81940
    item osd.15 weight 1.86298
    item osd.16 weight 1.81940
}
host hec11-pve2 {
    id -30        # do not change unnecessarily
    id -59 class clay        # do not change unnecessarily
    id -58 class ssd        # do not change unnecessarily
    id -21 class destroy        # do not change unnecessarily
    # weight 7.45190
    alg straw2
    hash 0    # rjenkins1
    item osd.23 weight 1.86298
    item osd.21 weight 1.86298
    item osd.25 weight 1.86298
    item osd.26 weight 1.86298
}
host hec11-pve3 {
    id -35        # do not change unnecessarily
    id -56 class clay        # do not change unnecessarily
    id -55 class ssd        # do not change unnecessarily
    id -22 class destroy        # do not change unnecessarily
    # weight 7.32117
    alg straw2
    hash 0    # rjenkins1
    item osd.31 weight 1.81940
    item osd.33 weight 1.81940
    item osd.36 weight 1.81940
    item osd.35 weight 1.86298
}
zone hot-ec11-clay566 {
    id -11        # do not change unnecessarily
    id -69 class clay        # do not change unnecessarily
    id -68 class ssd        # do not change unnecessarily
    id -23 class destroy        # do not change unnecessarily
    # weight 22.13783
    alg straw2
    hash 0    # rjenkins1
    item hec11-pve1 weight 7.36475
    item hec11-pve2 weight 7.45192
    item hec11-pve3 weight 7.32117
}
zone standard-hot {
    id -9        # do not change unnecessarily
    id -75 class clay        # do not change unnecessarily
    id -74 class ssd        # do not change unnecessarily
    id -24 class destroy        # do not change unnecessarily
    # weight 22.13788
    alg straw2
    hash 0    # rjenkins1
    item hot-ec11-clay566 weight 22.13788
}
host cec9-pve1 {
    id -77        # do not change unnecessarily
    id -87 class clay        # do not change unnecessarily
    id -86 class ssd        # do not change unnecessarily
    id -25 class destroy        # do not change unnecessarily
    # weight 9.18626
    alg straw2
    hash 0    # rjenkins1
    item osd.57 weight 2.77379
    item osd.59 weight 2.77379
    item osd.55 weight 3.63869
}
host cec9-pve2 {
    id -78        # do not change unnecessarily
    id -84 class clay        # do not change unnecessarily
    id -83 class ssd        # do not change unnecessarily
    id -26 class destroy        # do not change unnecessarily
    # weight 13.86719
    alg straw2
    hash 0    # rjenkins1
    item osd.64 weight 2.77379
    item osd.66 weight 3.69780
    item osd.67 weight 3.69780
    item osd.68 weight 3.69780
}
host cec9-pve3 {
    id -79        # do not change unnecessarily
    id -81 class clay        # do not change unnecessarily
    id -80 class ssd        # do not change unnecessarily
    id -27 class destroy        # do not change unnecessarily
    # weight 10.16939
    alg straw2
    hash 0    # rjenkins1
    item osd.76 weight 3.69780
    item osd.77 weight 3.69780
    item osd.79 weight 2.77379
}
zone cold-ec9-clay455 {
    id -44        # do not change unnecessarily
    id -50 class clay        # do not change unnecessarily
    id -47 class ssd        # do not change unnecessarily
    id -36 class destroy        # do not change unnecessarily
    # weight 33.22282
    alg straw2
    hash 0    # rjenkins1
    item cec9-pve1 weight 9.18625
    item cec9-pve2 weight 13.86719
    item cec9-pve3 weight 10.16939
}
zone archive-cold {
    id -10        # do not change unnecessarily
    id -72 class clay        # do not change unnecessarily
    id -71 class ssd        # do not change unnecessarily
    id -37 class destroy        # do not change unnecessarily
    # weight 33.22284
    alg straw2
    hash 0    # rjenkins1
    item cold-ec9-clay455 weight 33.22284
}
pod user {
    id -4        # do not change unnecessarily
    id -62 class clay        # do not change unnecessarily
    id -61 class ssd        # do not change unnecessarily
    id -38 class destroy        # do not change unnecessarily
    # weight 55.36072
    alg straw2
    hash 0    # rjenkins1
    item standard-hot weight 22.13788
    item archive-cold weight 33.22284
}
root default {
    id -1        # do not change unnecessarily
    id -43 class clay        # do not change unnecessarily
    id -16 class ssd        # do not change unnecessarily
    id -39 class destroy        # do not change unnecessarily
    # weight 60.92786
    alg straw2
    hash 0    # rjenkins1
    item system weight 5.56714
    item user weight 55.36072
}
root destroy {
    id -8        # do not change unnecessarily
    id -31 class clay        # do not change unnecessarily
    id -29 class ssd        # do not change unnecessarily
    id -12 class destroy        # do not change unnecessarily
    # weight 11.09515
    alg straw2
    hash 0    # rjenkins1
    item osd.56 weight 2.77379
    item osd.58 weight 2.77379
    item osd.65 weight 2.77379
    item osd.78 weight 2.77379
}

# rules
rule replicated_rule {
    id 0
    type replicated
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
rule replicated_user_hdd {
    id 2
    type replicated
    step take cold-ec9-clay455
    step chooseleaf firstn 0 type host
    step emit
}
rule replicated_system_ssd {
    id 3
    type replicated
    step take system class ssd
    step chooseleaf firstn 0 type host
    step emit
}
rule replicated_user_ssd {
    id 4
    type replicated
    step take standard-hot class clay
    step chooseleaf firstn 0 type host
    step emit
}
rule autoscaler_ssd {
    id 6
    type replicated
    step take system class clay
    step chooseleaf firstn 0 type host
    step emit
}
rule autoscaler_clay {
    id 7
    type replicated
    step take user class clay
    step chooseleaf firstn 0 type host
    step emit
}
rule cephfs-userdata-tier_hec11-clay566-2025-0310 {
    id 8
    type erasure
    step set_chooseleaf_tries 5
    step set_choose_tries 100
    step take hot-ec11-clay566 class clay
    step choose indep 0 type osd
    step emit
}
rule cephfs-userdata-tier_cec9-clay455-2025-0310 {
    id 9
    type erasure
    step set_chooseleaf_tries 5
    step set_choose_tries 100
    step take cold-ec9-clay455 class clay
    step choose indep 0 type osd
    step emit
}

# end crush map
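
(For reference: a text map like the one above is normally obtained by dumping and decompiling the binary CRUSH map, e.g.:)

Code:
ceph osd getcrushmap -o crushmap.bin        # dump the compiled CRUSH map
crushtool -d crushmap.bin -o crushmap.txt   # decompile to the text form shown above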
 
When multiple OSDs have slow ops at the same time, it might be a network/connection issue.
What do journalctl and the Ceph log say?
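
For example (a minimal sketch; replace osd.67 with one of the OSDs from the warning, and run the ceph daemon commands on the node that hosts that OSD):

Code:
# recent log lines of a single OSD daemon
journalctl -u ceph-osd@67 --since "1 hour ago"

# cluster-wide health and the central cluster log
ceph health detail
tail -f /var/log/ceph/ceph.log

# details of the ops currently blocked on that OSD
ceph daemon osd.67 dump_ops_in_flight
ceph daemon osd.67 dump_historic_slow_ops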
 
Code:
root@pve2:~[130]#journalctl -efu ceph-osd@67
---
94304, omap_header_size: 0, omap_entries_size: 0, attrset_size: 2, recovery_info: ObjectRecoveryInfo(131:5802fcc2:::rbd_data.ef35d45d647594.000000000002d954:head@116135'10061422, size: 4194304, copy_subset: [0~4194304], clone_subset: {}, snapset: 0=[]:{}, object_exist: 0), after_progress: ObjectRecoveryProgress(!first, data_recovered_to:4194304, data_complete:true, omap_recovered_to:, omap_complete:true, error:false), before_progress: ObjectRecoveryProgress(first, data_recovered_to:0, data_complete:false, omap_recovered_to:, omap_complete:false, error:false))])
Mar 11 19:26:57 pve2 ceph-osd[70516]: 2025-03-11T19:26:57.651+0900 74dca5c006c0 -1 osd.67 118158 get_health_metrics reporting 1 slow ops, oldest is MOSDPGPushReply(131.f 118154/118139 [PushReplyOp(131:f024a0c1:::rbd_data.ef35d45d647594.0000000000041d68:head)])
---

Thank you!
 

Code:
root@pve2:~#ping -W 1 -i 0.25 192.168.10.21

PING 192.168.10.21 (192.168.10.21) 56(84) bytes of data.
64 bytes from 192.168.10.21: icmp_seq=1 ttl=64 time=0.107 ms
64 bytes from 192.168.10.21: icmp_seq=2 ttl=64 time=0.080 ms
64 bytes from 192.168.10.21: icmp_seq=3 ttl=64 time=0.290 ms
---
64 bytes from 192.168.10.21: icmp_seq=85 ttl=64 time=0.202 ms
^C
--- 192.168.10.21 ping statistics ---
85 packets transmitted, 85 received, 0% packet loss, time 21492ms
rtt min/avg/max/mdev = 0.059/0.204/0.876/0.191 ms
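
Since the cluster bond (bond20) runs with MTU 9000, it may also be worth verifying that jumbo frames actually pass end-to-end; plain pings use small packets and will not catch an MTU mismatch:

Bash:
# 8972 = 9000-byte MTU minus 28 bytes of IP/ICMP headers; -M do forbids fragmentation
ping -M do -s 8972 -c 10 192.168.10.21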
 
Bash:
root@pve2:~#ip -s link                                                                           204.57s 19:41:27
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    RX:    bytes  packets errors dropped  missed   mcast
    282648728706 66417647      0       0       0       0
    TX:    bytes  packets errors dropped carrier collsns
    282648728706 66417647      0       0       0       0
2: enp7s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 08:bf:b8:83:ae:49 brd ff:ff:ff:ff:ff:ff
    RX:    bytes   packets errors dropped  missed   mcast
      8135889389  23528570      0       0       0  413965
    TX:    bytes   packets errors dropped carrier collsns
    148604521313 151718608      0       0       0       0
3: enx222c7a4dd031: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 22:2c:7a:4d:d0:31 brd ff:ff:ff:ff:ff:ff
    RX:  bytes packets errors dropped  missed   mcast
             0       0      0       0       0       0
    TX:  bytes packets errors dropped carrier collsns
             0       0      0       0       0       0
4: enxa0cec8fa9576: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 08:bf:b8:83:ae:49 brd ff:ff:ff:ff:ff:ff permaddr a0:ce:c8:fa:95:76
    RX:  bytes packets errors dropped  missed   mcast
      39404539  569098      0       0       0       0
    TX:  bytes packets errors dropped carrier collsns
         83540     349      0       0       0       0
5: enp1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond20 state UP mode DEFAULT group default qlen 1000
    link/ether 24:8a:07:bb:13:57 brd ff:ff:ff:ff:ff:ff
    RX:     bytes   packets errors dropped  missed   mcast
     713147314755 280607501      0       0       0   13601
    TX:     bytes   packets errors dropped carrier collsns
    1343156775300 399650385      0       0       0       0
6: enp1s0d1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond20 state UP mode DEFAULT group default qlen 1000
    link/ether 24:8a:07:bb:13:57 brd ff:ff:ff:ff:ff:ff permaddr 24:8a:07:bb:13:58
    RX:     bytes   packets errors dropped  missed   mcast
    1017437334967 316770100      0       0       0    6088
    TX:     bytes   packets errors dropped carrier collsns
     885714029503 278219120      0       0       0       0
7: tailscale0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1280 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 500
    link/none
    RX:  bytes packets errors dropped  missed   mcast
      20164423  195373      0       0       0       0
    TX:  bytes packets errors dropped carrier collsns
      19364401  183677      0       0       0       0
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 08:bf:b8:83:ae:49 brd ff:ff:ff:ff:ff:ff
    RX:    bytes   packets errors dropped  missed   mcast
      8175293928  24097668      0      18       0  413965
    TX:    bytes   packets errors dropped carrier collsns
    148604604853 151718957      0       0       0       0
9: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 08:bf:b8:83:ae:49 brd ff:ff:ff:ff:ff:ff
    RX:  bytes packets errors dropped  missed   mcast
     388067838 3055372      0   47032       0  182860
    TX:  bytes packets errors dropped carrier collsns
     880491711 2441081      0       0       0       0
10: bond20: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue master vmbr20 state UP mode DEFAULT group default qlen 1000
    link/ether 24:8a:07:bb:13:57 brd ff:ff:ff:ff:ff:ff
    RX:     bytes   packets errors dropped  missed   mcast
    1730584649722 597377601      0      18       0   19694
    TX:     bytes   packets errors dropped carrier collsns
    2228870804803 677869505      0      33       0       0
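
The bonds show a few dropped packets (18 RX on bond0 and bond20, 33 TX on bond20) and vmbr0 shows 47032 RX drops. A quick sketch for checking whether any of these counters are still increasing under load:

Bash:
watch -n 1 'ip -s link show bond20'
# NIC-level counters, if the driver supports them:
ethtool -S enp1s0 | grep -iE 'drop|err|pause'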
 
Am I right that you are using erasure coding?
 
Yes, you are right.
I've been using erasure coding for about nine months, but I've never seen it this slow.

Bash:
root@pve3:~#ceph fs status cephfs-userdata-tier
cephfs-userdata-tier - 6 clients
====================
RANK  STATE    MDS       ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  pve1-1  Reqs:    0 /s  56.7k  50.7k  2492     40
                    POOL                        TYPE     USED  AVAIL
       cephfs-userdata-tier_metadata          metadata   953M  7077G
         cephfs-userdata-tier_data              data       0   7077G
cephfs-userdata-tier_hec11-clay566-2025-0310    data    41.9G  9650G
cephfs-userdata-tier_cec9-clay455-2025-0310     data     124G  8234G
STANDBY MDS
   pve3-1
   pve2-1
   pve2-2
MDS version: ceph version 18.2.4 (2064df84afc61c7e63928121bfdd74c59453c893) reef (stable)
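
In case it helps, the erasure-code profiles behind the two data pools can be inspected like this (the profile name is a placeholder; take it from the ls output):

Bash:
ceph osd pool ls detail              # shows which EC profile each pool uses
ceph osd erasure-code-profile ls     # list all defined profiles
ceph osd erasure-code-profile get <profile-name>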
 
I have two erasure-coded pools: one is a SATA SSD pool, the other a SATA HDD pool.
Only the HDD pool is slow.

While the slow ops warning is displayed, the recovery rate is shown as 0 B/s.
When the recovery rate reaches ~300 KiB/s (not 0), the slow ops warning disappears completely.

It seems that OSD writes are occurring at certain intervals,
but the apply/commit latency is still below 50 ms.
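
Since the slow ops correlate with recovery stalling at 0 B/s and this is Reef, recovery is scheduled by mClock by default. A minimal sketch for checking and temporarily raising recovery priority (these are stock Reef options, not settings confirmed on this cluster):

Bash:
ceph config get osd osd_op_queue        # 'mclock_scheduler' is the Reef default
ceph config get osd osd_mclock_profile  # balanced / high_client_ops / high_recovery_ops
# temporarily favour recovery to see whether the stalls disappear:
ceph config set osd osd_mclock_profile high_recovery_ops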
 
Maybe have a look at the SMART values of the disks/OSDs that report slow ops.
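
For example (the smartctl device path is a placeholder and needs to be adjusted per host):

Bash:
ceph device ls                 # maps OSDs to physical devices and hosts
smartctl -a /dev/sdX           # full SMART report for one disk
# health metrics Ceph itself has collected from the devices of one daemon:
ceph device query-daemon-health-metrics osd.57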