Ceph loses total space when close to full

Dec 26, 2018
We've lost almost 100 GB of reported total space between startup and reaching 88% full.
We are currently running 3 nodes with two 500 GB SSDs each, replication 3 and min_size 2. (We are planning to add new nodes in a few months.)
Is this normal? If so, why?

Thanks :)

[attached screenshot: v5KBlEw.png]
The OSD with the highest fill level determines how much more data can be written to the pool: once any single OSD fills up, writes to the pool stop, regardless of free space elsewhere. And with the default replication factor of 3, three copies of every object are stored, so the pool's available space only reflects the amount of data you can write before replication.
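As a rough sketch of that arithmetic (this is an illustration, not Ceph's exact MAX AVAIL algorithm; the OSD fill figures below are made up for the example): since CRUSH spreads data roughly evenly, the fullest OSD caps how much each OSD can still accept, and the raw writable space is then divided by the replication factor.

```python
# Illustrative only: how the fullest OSD and the replication factor
# limit the writable space a pool reports. Numbers are hypothetical.

REPLICATION = 3  # default pool size (3 copies of each object)

# Hypothetical (capacity_gb, used_gb) for six 500 GB OSDs; one is
# noticeably fuller than the rest.
osds = [(500, 440), (500, 300), (500, 310),
        (500, 280), (500, 350), (500, 295)]

def writable_gb(osds, replication):
    # CRUSH distributes data roughly evenly, so the OSD with the least
    # free space caps how much more every OSD can take before one fills.
    free_per_osd = min(cap - used for cap, used in osds)
    raw_writable = free_per_osd * len(osds)
    # Each object is stored `replication` times, so divide the raw space.
    return raw_writable / replication

print(f"Writable before replication: {writable_gb(osds, REPLICATION):.0f} GB")
# → Writable before replication: 120 GB
```

With a perfectly even distribution the same cluster could take 585 GB more (1755 GB raw free / 3), so the single 88%-full OSD is what shrinks the reported available space.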