For those who may come after me... I had the same problem and was able to fix it as follows. First run the command:
ceph osd pool autoscale-status
It gave:
POOL                   SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  PROFILE
device_health_metrics  36087k               3.0   59616G        0.0000                                 1.0   1                   on         scale-up
CephPool               713.5G  20000G       3.0   59616G        1.0064                                 1.0   32                  on         scale-up
I noticed that if you take the raw capacity of 59616G and divide it by three (the number of replicas I have, and the default if I remember right), that's how much space you really have available: about 19872G. Divide the target size of 20000G by that and you get the ratio of 1.0064. So the target size needs to be reduced. By how much? 20000G / 1.0064 should do the trick. I calculated mine based on what I know the raw disk space really is and came up with 19868G for the target size.

To change it, go to WebGUI -> node -> Ceph -> Pools, select the pool name, click Edit, set Target Size to the lower value calculated for your case, and save. Be patient; it takes a few minutes for the system to catch up and clear the warning. For me it took about 5 minutes.
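If you want to sanity-check the numbers before touching the pool, here is a minimal sketch of the arithmetic in plain Python (my values are hard-coded just as an example; swap in the raw capacity, replica count, and target size from your own autoscale-status output):

# Sketch of the autoscaler ratio arithmetic, using the values from my cluster.
raw_capacity_g = 59616   # RAW CAPACITY from "ceph osd pool autoscale-status"
replicas = 3             # pool size (number of replicas); 3 is the default
target_size_g = 20000    # TARGET SIZE currently set on the pool

usable_g = raw_capacity_g / replicas       # space the pool can really use
ratio = target_size_g / usable_g           # the RATIO column; anything over 1.0 triggers the warning
max_target_g = target_size_g / ratio       # largest target size that keeps the ratio at or below 1.0

print(f"usable capacity: {usable_g:.0f}G")      # ~19872G
print(f"current ratio:   {ratio:.4f}")          # ~1.0064
print(f"max target size: {max_target_g:.0f}G")  # ~19872G; I rounded down to 19868G

If you prefer the command line over the WebGUI, setting the pool's target_size_bytes property (ceph osd pool set CephPool target_size_bytes <bytes>) should do the same thing, but I only tested the GUI route, so take that with a grain of salt.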
A second look at the status produced:
POOL                   SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE  PROFILE
...
CephPool               713.5G  19868G       3.0   59616G        0.9998                                 1.0   32      256         on         scale-up
and the warning is gone!