Hi,
After restarting one of our nodes, the Ceph storage of our PVE cluster went into a recovery state, but the recovery is very slow.
I have already set the following parameters (applied as shown below), but they had no effect:
osd_recovery_max_single_start 4
osd_max_backfills 4
osd_recovery_max_active 4
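For reference, this is roughly how I applied them (just a sketch of the commands I used; I assumed the ceph config set / injectargs route, so please correct me if there is a better way on PVE):

# ceph config set osd osd_max_backfills 4
# ceph config set osd osd_recovery_max_active 4
# ceph config set osd osd_recovery_max_single_start 4
# ceph tell 'osd.*' injectargs '--osd-max-backfills 4 --osd-recovery-max-active 4'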
The recovery speed sits at around 20 MB/s, although it should be able to exceed 100 MB/s.
Below is the status of the cluster:
# ceph -s
  cluster:
    id:     24b360af-3026-44f7-a06f-62829c4baa8b
    health: HEALTH_WARN
            3 daemons have recently crashed

  services:
    mon: 4 daemons, quorum ctzpve2,ctzpve3,ctzpve4,ctzpve5 (age 3h)
    mgr: ctzpve2(active, since 10w), standbys: ctzpve5, ctzpve4, ctzpve3
    osd: 5 osds: 5 up (since 3h), 5 in (since 10w); 23 remapped pgs

  data:
    pools:   2 pools, 129 pgs
    objects: 146.31k objects, 567 GiB
    usage:   1.7 TiB used, 1.6 TiB / 3.3 TiB avail
    pgs:     24436/438927 objects misplaced (5.567%)
             106 active+clean
             23 active+remapped+backfilling

  io:
    client:   14 KiB/s rd, 326 KiB/s wr, 0 op/s rd, 55 op/s wr
    recovery: 15 MiB/s, 4 objects/s
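In case it matters, this is roughly how I would check whether the values actually reached the running OSDs (a sketch only; osd.0 is just an example daemon, and the last command needs to run on the node hosting that OSD):

# ceph config show osd.0 osd_max_backfills
# ceph config show osd.0 osd_recovery_max_active
# ceph daemon osd.0 config get osd_max_backfills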
Could you help me solve this problem?
Thanks.
Danilo