CephFS disk usage for ISO storage?

Sep 7, 2025
We are just finishing up our in-place migration from vSphere to Proxmox. The usual story: the licensing from Broadcom was crazy, so they got the finger. It was a really harebrained scheme to do it in place, but thanks to a lot of planning it went pretty smoothly. The only real hiccup was that one of the caddies used to move the SSDs from the storage array up to the front of the servers got broken.

Anyway, one of the last jobs is to sort out ISO storage. The boot drives (think the Lenovo equivalent of the Dell BOSS) are only 240 GB, so not really enough for ISO storage. We have 140 GB of ISOs that were sitting in a folder on a VMFS file system. I created a CephFS on the Ceph storage and copied the ISOs in. I then noticed the CephFS is 1.4 TB in size, which is ridiculous for ISO storage and is gobbling up precious SSD storage needed for the VMs. However, the usage on the Ceph dashboard only went up by the 140 GB of ISOs I copied in.
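For context, the CephFS setup was along these lines; this is a rough sketch rather than the exact commands, with the pg_num, storage name and source path being illustrative (the filesystem name matches the iso_data/iso_metadata pools shown later in the thread):
Code:
# create a CephFS (data + metadata pools get created automatically)
# and register it as a Proxmox storage in one step
pveceph fs create --name iso --pg_num 32 --add-storage

# copy the existing ISOs into the directory Proxmox uses for ISO images
# (mount path assumes the storage is also named "iso")
cp /path/to/old/isos/*.iso /mnt/pve/iso/template/iso/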

Is there a way to control the size of a CephFS created on top of the Ceph storage, or is the spare space in the CephFS still available for VMs? I am not really a fan of over-committing storage because it can go horribly wrong if you use it all up, so I would prefer to limit the size of the CephFS to something more suitable and commit the storage, if that is possible.
 
What’s your replication for Ceph?

CephFS just uses the same pool. You could create a new pool with a different CRUSH map, I suppose, but then that would fill up too, wouldn't it? Storage doesn't multiply like that; I'd guess what you're seeing includes the multiple replicas or something similar.
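If you did want to go that route, it would look roughly like this; a sketch only, with made-up rule/pool names, and the device-class filter is just there to show how a different CRUSH rule can target different disks:
Code:
# replicated CRUSH rule limited to one device class (e.g. hdd)
ceph osd crush rule create-replicated iso-rule default host hdd

# create a separate data pool and point it at that rule
ceph osd pool create iso_data_slow 32
ceph osd pool set iso_data_slow crush_rule iso-rule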
 
The replica count is set to three, which should give you redundancy during maintenance; i.e. one machine is rebooting for updates and another machine decides to die at exactly that point, but because you have three replicas the whole thing doesn't collapse in a heap (you obviously lose the VMs on the machine that died until HA kicks in and starts them up again).
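For reference, the per-pool replica settings are easy to confirm from the CLI (the pool name here matches the iso_data pool in the output further down):
Code:
# number of replicas, and the minimum needed for I/O to continue
ceph osd pool get iso_data size
ceph osd pool get iso_data min_size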

I guess my primary worry is that I have lost 1.4TB of Ceph space for the storage of ISOs, which, given how much enterprise SSDs cost, would be a bitter pill to swallow. However, I think I am only losing the space the ISOs take up, which is acceptable, but being a novice to Proxmox and Ceph, I am not sure if that is correct or if I am on the crack pipe.

To my mind, what would be neat for ISO storage would be to stick a large USB key in each server and have it replicated between them. They are affordable; a 512GB USB 3 key from a reputable manufacturer and source costs $30 and offers 200MB/s read and 100MB/s write speeds. There is no way you are going to wear out a USB key storing ISO images, and most servers still come with an internal USB port.
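Just to make that concrete, a cron'd rsync between the nodes would probably be enough to keep such a setup in sync; purely a sketch, with made-up hostnames and mount point:
Code:
# push the ISO directory from this node to the other two
# (assumes each USB key is mounted at /mnt/usb-iso and SSH keys are in place)
rsync -av --delete /mnt/usb-iso/ root@pve2:/mnt/usb-iso/
rsync -av --delete /mnt/usb-iso/ root@pve3:/mnt/usb-iso/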
 
The Cephfs storage should not use 10x what you're storing in it. I would look at it on disk and see what is actually being used.

Code:
host:/mnt/pve/cephfs# du -h
0 ./migrations
0 ./dump
8.2G ./template/iso
0 ./template/cache
8.2G ./template
8.2G .


GUI when clicking on the storage icon in the tree: (screenshot attached)

GUI for the pool shows 24 GB, 3x: (screenshot attached)
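The same numbers can also be read from the CLI if that is easier to compare; the storage name here matches the mount point in the du listing above (adjust to yours):
Code:
# Proxmox's view of the storage (total / used / available)
pvesm status --storage cephfs

# Ceph's view of the pools behind it
ceph df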
 
The point is that the CephFS is 10x bigger than what I am storing in it, which is entirely unnecessary. The Proxmox servers display this in the summary for the ISO CephFS storage:

10.88% (147.36 GB of 1.36 TB)

So, is that 1.36TB of space on Ceph that is lost to the ISO CephFS file system? Or have I lost 147GB, and if I add more ISOs, will I lose more Ceph space, up to a maximum of 1.36TB?

My issue is that it is not clear what is going on. If I have permanently lost 1.36 TB of space to the ISO CephFS that is now no longer available to store VMs, that would suck. If I have just lost 147 GB, that would be less of a problem, other than that the storage is now thin-provisioned, and it could all end in tears if I am not careful. IMHO, people who thin-provision storage on a production system are fools of a Took. I have first-hand experience of what happens when you run out of space on a thin-provisioned system, and it is not pretty. I counselled against the idea of thin provisioning, I hasten to add.
 
Code:
root@virtual1:~# ceph df detail
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
ssd    8.7 TiB  4.2 TiB  4.5 TiB   4.5 TiB      51.50
TOTAL  8.7 TiB  4.2 TiB  4.5 TiB   4.5 TiB      51.50
 
--- POOLS ---
POOL          ID  PGS   STORED   (DATA)   (OMAP)  OBJECTS     USED   (DATA)   (OMAP)  %USED  MAX AVAIL  QUOTA OBJECTS  QUOTA BYTES  DIRTY  USED COMPR  UNDER COMPR
.mgr           1    1  4.1 MiB  4.1 MiB      0 B        2   12 MiB   12 MiB      0 B      0    1.1 TiB            N/A          N/A    N/A         0 B          0 B
datastore      2  512  1.4 TiB  1.4 TiB  154 KiB  360.67k  4.1 TiB  4.1 TiB  463 KiB  55.25    1.1 TiB            N/A          N/A    N/A         0 B          0 B
iso_data       3   32  137 GiB  137 GiB      0 B   35.15k  412 GiB  412 GiB      0 B  10.88    1.1 TiB            N/A          N/A    N/A         0 B          0 B
iso_metadata   4   32  1.7 MiB  1.6 MiB   11 KiB       23  5.0 MiB  5.0 MiB   34 KiB      0    1.1 TiB            N/A          N/A    N/A         0 B          0 B

Hopefully, we can refresh the hardware in a year, at which point I will increase the storage and make this all go away. Fecking Broadcom.
 
Hi,
the Ceph storage is always thin-provisioned.
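If I read your ceph df output right, the 1.36 TB the GUI reports for the ISO storage is simply the data pool's STORED plus its MAX AVAIL (roughly 137 GiB + 1.1 TiB), not space that has been reserved for it; only what you actually write consumes raw capacity, at 3x. Assuming jq is installed, you can pull those figures out per pool like this:
Code:
# per-pool stored bytes, remaining available bytes and percent used
ceph df -f json | jq '.pools[] | select(.name == "iso_data") | .stats | {stored, max_avail, percent_used}'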
But you can use quotas and separate pools, combined with monitoring.

For a CephFS, set the quota on the data pool that gets created underneath it.
For setting quotas you will need to use the CLI:
Code:
ceph osd pool set-quota <pool-name> [max_objects <obj-count>] [max_bytes <bytes>]
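For example, with the data pool from your output and an illustrative 250 GiB cap (the value is raw bytes); a CephFS directory quota is an alternative I'd also mention, if your client supports it:
Code:
# cap the CephFS data pool at 250 GiB (250 * 1024^3 bytes)
ceph osd pool set-quota iso_data max_bytes 268435456000

# verify
ceph osd pool get-quota iso_data

# alternative: a CephFS directory quota on the mount point
# (adjust the path to wherever your ISO CephFS is mounted)
setfattr -n ceph.quota.max_bytes -v 268435456000 /mnt/pve/cephfs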

BR, Lucas