OK, got help from the ceph mailing list. It was not the autoscale that was the problem. I had created a crush rule that broke the whole thing; after restoring the default crush rule on the pools the autoscale worked just fine, and now everything is nice and green except for one thing:
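For anyone hitting the same problem: restoring the default replicated rule on a pool is a one-liner. A sketch, assuming a replicated pool; `mypool` is a placeholder for your pool name:

```shell
# List the crush rules that exist on the cluster
ceph osd crush rule ls

# Point the pool back at the default replicated rule
# ("mypool" is a placeholder for your pool name)
ceph osd pool set mypool crush_rule replicated_rule
```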
I get this...
I have tried disabling it, no luck.
But the strange thing is that when I use the get pg_num command it reports the old pg_num and not the new one set by autoscaling...
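That mismatch is actually expected: since Nautilus, ceph lowers pg_num gradually, so the current value and the autoscaler's target can differ for a while. A sketch of how to see both (`mypool` is a placeholder):

```shell
# Current effective pg_num for the pool
ceph osd pool get mypool pg_num

# The autoscaler's intended value appears in the NEW PG_NUM column
ceph osd pool autoscale-status

# Detailed pool listing shows both pg_num and pg_num_target
ceph osd pool ls detail
```

Once the cluster finishes merging PGs, `pg_num` converges to `pg_num_target` on its own.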
I was maybe a little quick enabling autoscale on my cluster. I've just started using ceph, and after autoscaling I now have 256 pgs in unknown state since the autoscale reduced the pgs significantly. How can I tell the cluster that the pg count is down?
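You don't normally have to tell the cluster anything; the autoscaler adjusts pg_num itself over time. If you want to take back control per pool, a sketch (`mypool` is a placeholder):

```shell
# See what the autoscaler recommends or is doing for each pool
ceph osd pool autoscale-status

# Disable autoscaling on a single pool ("mypool" is a placeholder)
ceph osd pool set mypool pg_autoscale_mode off

# Then, if desired, set the pg count manually
ceph osd pool set mypool pg_num 128
```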
root@ce01:~# ceph -s
cluster:
id...