[SOLVED] PGs not being deep-scrubbed in time after replacing disks

BunkerHosted

New Member
Aug 10, 2020
This week we have been rebalancing storage across our 5-node cluster. Everything is going relatively smoothly, but I am getting a warning in Ceph:
"pgs not being deep-scrubbed in time"

This only began happening AFTER we made changes to the disks on one of our nodes. Ceph is still healing properly, but the number of PGs that have missed the deep-scrub interval keeps growing: from 11 PGs missing the window yesterday to 32 PGs missing it when I arrived this morning.

Should I just wait for Ceph to fully recover, and will these warnings then correct themselves? Or should I force deep scrubs on all PGs?

Any insight would be greatly appreciated :)
 
The default for osd_scrub_during_recovery is "false", so it follows that your cluster may miss the scrub target(s) while recovering. Assuming you also have defaults for the other scrub-related settings and a reasonable number of PGs, the cluster should be able to catch up after recovery.
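If you want to verify what your cluster is actually running with, here is a quick sketch (assuming a reasonably recent Ceph release, Nautilus or newer, and substituting one of your own OSD IDs for osd.0):

Code:
# Value in the monitors' central config database:
ceph config get osd osd_scrub_during_recovery

# Value in effect on a live OSD (run on that OSD's node):
ceph daemon osd.0 config get osd_scrub_during_recovery

# List the PGs that are currently behind on deep scrubs:
ceph health detail | grep 'not deep-scrubbed'

And if you do decide to force one manually, ceph pg deep-scrub <pgid> asks the primary OSD to deep-scrub that PG, though in your situation I would just let the cluster catch up on its own.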
 
This makes perfect sense, thanks a lot! Just to confirm: I don't see osd_scrub_during_recovery anywhere in my ceph.conf.

Is there another place this option could be configured that I might be missing? Regardless, I will mark this thread as solved.
Thanks again!
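For anyone else who lands here: an option that is absent from ceph.conf simply falls back to its compiled-in default, which is why I couldn't find it. If you ever did want to override it, here is a sketch of the two usual places (not a recommendation to change it; the config database commands assume Mimic or later):

Code:
# Option A: set it in the [osd] section of /etc/ceph/ceph.conf
# on each node, then restart the OSDs:
[osd]
osd_scrub_during_recovery = true

# Option B: set it cluster-wide in the monitor config database:
ceph config set osd osd_scrub_during_recovery true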
 
Guys, I'm new to Proxmox and I am getting a "4 pgs not deep-scrubbed in time" alarm in my server health summary.

Kindly help.