Ceph loses total space when close to full

Dec 26, 2018
We lost almost 100GB of reported total space between startup and 88% full.
We are currently running 3 nodes, each with two 500GB SSDs, using replication 3 and minimum 2. (We are planning to add new nodes in a few months.)
Is this normal? If so, why?

Thanks :)

The disk with the highest fill level dominates how much data can still be written to the pool: once the fullest OSD is full, writes stop, even if other OSDs have free space. With the default replication factor of 3, three copies of every object are made. The pool only shows the capacity available for user data, before replication, which is why the reported total shrinks as the fullest disk fills unevenly.
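As a rough sketch of the math above (the OSD fill levels below are made-up example numbers, not output from any Ceph tool), with 6 x 500GB OSDs and 3x replication:

```python
# Illustrative sketch: Ceph-style usable capacity is bounded by the
# fullest OSD, then divided by the replication factor.
# All fill levels are hypothetical assumptions, not real cluster data.

REPLICATION = 3
OSD_SIZE_GB = 500
osd_used_gb = [440, 400, 380, 420, 390, 410]  # hypothetical per-OSD usage

raw_total = OSD_SIZE_GB * len(osd_used_gb)            # 3000 GB raw
raw_free_on_fullest = OSD_SIZE_GB - max(osd_used_gb)  # fullest OSD: 60 GB free

# Assuming roughly even data distribution, writes stop when the fullest
# OSD fills up, so free space is bounded by that OSD scaled across the
# cluster, then divided by the replication factor.
usable_free_gb = raw_free_on_fullest * len(osd_used_gb) / REPLICATION

print(f"raw total:   {raw_total} GB")
print(f"usable free: {usable_free_gb:.0f} GB")  # far less than raw free / 3
```

Rebalancing (reweighting the fullest OSDs) or adding nodes raises the bound, which is why adding nodes later will recover usable space.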

