Hello,
I'm testing a Ceph cluster and I notice low performance when it enters the
"active+clean+scrubbing+deep" state.
Reading this, it may be a solution:
--------------------------------------------------------------------------------------------
http://sudomakeinstall.com/linux-systems/ceph-scrubbing
Inject the new settings for the existing OSD:
ceph tell osd.* injectargs '--osd_disk_thread_ioprio_priority 7'
ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle'
Edit your ceph.conf on your storage nodes so the priority is set automatically when the OSD daemons start:
#Reduce impact of scrub.
osd_disk_thread_ioprio_class = "idle"
osd_disk_thread_ioprio_priority = 7
--------------------------------------------------------------------------------------------
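If I understood the linked article correctly, the persistent form of those two options goes under the [osd] section of ceph.conf. A minimal sketch of what I tried (section placement assumed from the standard ceph.conf layout):

[osd]
# Reduce impact of scrub on client I/O.
osd_disk_thread_ioprio_class = "idle"
osd_disk_thread_ioprio_priority = 7

As far as I can tell from the Ceph docs, these ioprio options only take effect when the disks use the Linux CFQ I/O scheduler, so they may do nothing on other schedulers.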
Does anyone have the same problem?
Thanks