Full Ceph pool, but plenty of space on cluster?

drholr

Active Member
Jan 9, 2018
Hello,
I am seeking clarification on a few of the terms seen on the dashboard in Proxmox (5.4) whilst using Ceph.

On the "Ceph" window, I see the usage indicator, in this case indicating 64% usage (32.66TB of 50.87TB used).

[Screenshot: Ceph dashboard usage indicator]

Then, when I view the storage pool by clicking on a node and then on the shared storage (ceph_vm in this case), I see a very full-looking pool: 92.72% (15.44TB of 16.65TB). This pool has a size of 2 and a min_size of 1.
[Screenshot: ceph_vm storage usage]

[Screenshot]

I don't remember setting an upper limit of 16.65TB on the pool, and I would like to give ceph_vm more of the cluster's storage (given that the cluster is only 64% full). How may I go about this, please?

Thank you!
 
Roughly 16 TB of pool data × 2 replicas ≈ 32 TiB — that is where the raw usage shown on the Ceph dashboard comes from. It is a terrible idea to run a 2/1 replica setup on a small cluster; the chance of losing data through a subsequent failure is high. Run with 3/2 instead, which means you will need to increase the storage space or delete data.
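Assuming the pool name from your screenshots (ceph_vm), a minimal sketch of how to switch it to 3/2 from the shell of any node — note this triggers a rebalance and requires enough raw capacity for the third copy:

# raise the replica count to 3 (each object stored three times)
ceph osd pool set ceph_vm size 3
# require at least 2 healthy copies before accepting client I/O
ceph osd pool set ceph_vm min_size 2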

When you run ceph osd df tree you should see the data distribution. One or more OSDs will have more data stored than the others, and that reduces the pool's actual available space. This is different from the global raw space.
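For reference, these are the standard Ceph commands to inspect usage; roughly speaking, a pool's MAX AVAIL is derived from the free space on its fullest OSD divided by the replica count, so one unbalanced OSD shrinks the whole pool:

# per-OSD utilisation, arranged by CRUSH tree
ceph osd df tree
# global and per-pool usage, including each pool's MAX AVAIL
ceph df detail

If the distribution is very uneven, a one-shot reweight may help (dry-run first to see what it would change):

ceph osd test-reweight-by-utilization
ceph osd reweight-by-utilization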
 
