I have to say that I also found this documentation, and after I set bluestore_slow_ops_warn_threshold per problematic OSD, the warning is gone!
So it really seems to be a feature ...
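For reference, the command looks something like this (osd.9 and the value 5 are just examples; use the ID of your problematic OSD and whatever threshold fits your cluster):

ceph config set osd.9 bluestore_slow_ops_warn_threshold 5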
Has anyone tried this?
https://github.com/rook/rook/discussions/15403
Especially these two:
ceph config set global bdev_async_discard_threads 1
ceph config set global bdev_enable_discard true
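If you try them, you can sanity-check that the settings actually took effect with ceph config get, for example:

ceph config get global bdev_async_discard_threads
ceph config get global bdev_enable_discard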
I am not able to create snapshots on RBD storage; another bug:
https://tracker.ceph.com/issues/61582?next_issue_id=61581
I have been using Ceph for a long time, but this is getting worse and worse ...
I upgraded to 19.2.2 before the weekend too, but no luck:
HEALTH_WARN: 2 OSD(s) experiencing slow operations in BlueStore
osd.9 observed slow operation indications in BlueStore
osd.15 observed slow operation indications in BlueStore
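If anyone wants to see what the OSDs are actually stuck on, the admin socket should help; for example, on the node hosting osd.9:

ceph daemon osd.9 dump_blocked_ops

Note that the BlueStore warning counts internal operations, so this may come back empty even while the warning is active.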
Hi all, I am just observing and waiting for a real patch; I did not "repair" Ceph health in any way, but ...
All warnings disappeared 3-4 days ago and Ceph health is green ...
So I am confused ... :-)
Phew, so it was a good idea NOT to recreate the storage ...
I am watching this:
https://tracker.ceph.com/issues/61582?next_issue_id=61581
And it seems that nobody cares ...
I suppose the combination of LXC + Ceph does not have many "users" ...
I also had big trouble with LXC and an NFS mount under...
Hi, I have exactly the same problem. After 3 months I do not see any progress here: https://tracker.ceph.com/issues/61582?next_issue_id=61581
So I suppose this issue is not solved yet. Am I right?
Thanks for the reply. :)
PS: I have an empty CephFS on my pool, but I am scared to delete it.
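In case it helps: as far as I understand, removing an unused CephFS goes roughly like this (I have not dared to run it myself; "cephfs" is the filesystem name, double-check yours with ceph fs ls first):

ceph fs fail cephfs
ceph fs rm cephfs --yes-i-really-mean-it

The data and metadata pools are left behind and would have to be removed separately.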