Please re-direct me if there's a better place to post this.
I'm running PVE 6.0 and Ceph Nautilus. With reference to the attached, it appears that the RATIO in the output of this command is multiplied by the replication factor twice:
ceph osd pool autoscale-status
Apologies for the scribble in the attached pic, but it shows "ceph df detail" output with 1.1 TiB in the STORED column. This is a new pool with one VM disk on it, and that figure is correct. The USED column shows 3.3 TiB; since the pool was created with the standard 3/2 replication, that figure is also correct.
Moving up in the pic to the output of "ceph osd pool autoscale-status", the SIZE column shows 3362G (about 3.3 TiB). That is also correct, but perhaps it should show 1121G (about 1.1 TiB) instead. Why? Because the RATIO calculation takes SIZE, multiplies it by RATE (taken from the pool's replication factor of 3 in the 3/2 setting), and divides by RAW CAPACITY: SIZE x RATE / RAW CAPACITY = RATIO.
The problem appears in the RATIO column. The true ratio is closer to 15% (0.15) rather than the 44% (0.4371) shown, which suggests that SIZE, which already includes the replication overhead, is being multiplied by RATE a second time.
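To make the arithmetic concrete, here is a minimal Python sketch of how I think the double multiplication arises. The raw capacity is back-solved from the reported ratio (roughly 23 TiB), so treat it as an inferred value rather than a figure read directly from my cluster:

    size_g = 3362                             # SIZE from autoscale-status (already replicated)
    rate = 3.0                                # RATE = replication factor from the 3/2 setting
    raw_capacity_g = size_g * rate / 0.4371   # back-solved raw capacity, about 23 TiB (inferred)

    reported_ratio = size_g * rate / raw_capacity_g   # 0.4371, what autoscale-status shows
    expected_ratio = size_g / raw_capacity_g          # about 0.146, what I would expect

    print("reported:", round(reported_ratio, 4))      # 0.4371 (replication counted twice)
    print("expected:", round(expected_ratio, 4))      # 0.1457 (roughly the 15% I mentioned)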
Is this just a problem with the way I may have set up the storage and the pools? I did not manipulate the ratio.
Should I just set the RATE to 1 to get the correct RATIO? (Comments on this question are MOST welcome.)
Or is it a bug?
Need more detail? Please let me know. One thing I can say is that the servers are fully updated as of today, in case software versions are in question.