Ceph HDD OSDs start at ~10% usage and show a ~10% lower size

SteveITS

I've noticed that when creating an OSD on an HDD, it starts out showing about 10% usage and stays about 10% higher than expected. We are using an SSD for the DB (and therefore the WAL), but it seems like that shouldn't count toward usage...? Also, the size shown in the OSD "details" view is lower than on the OSD page in PVE.

[Screenshot: osd.7 details]
[Screenshot: the same OSD on the PVE OSD page]

All the SSD-only OSDs start at 0%. In this cluster all the SSDs are ~13% used and the few HDDs are ~23% used.

(and yes we're aware of the bug in 19.2.0, and are recreating them all)

Thanks.
 
Example:
[Screenshot: a newly created OSD, ~1 second after creation]
[Screenshot: the same OSD after about 5 seconds]

This one had a 99 GB DB on the SSD, which (I suspect) is why it's a bit lower than 10%.
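
For anyone wanting to compare the GUI numbers with what Ceph itself reports, here is a minimal sketch that prints each OSD's reported size and utilization from `ceph osd df`. It assumes a recent Ceph release; the JSON key names (kb, kb_used, utilization) may differ on other versions.

```python
import json
import subprocess

# Read the same numbers the GUI shows, straight from "ceph osd df" in
# JSON form. Key names assumed from recent Ceph releases; adjust if
# your version emits different fields.
raw = subprocess.run(
    ["ceph", "osd", "df", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout

for osd in json.loads(raw)["nodes"]:
    size_gib = osd["kb"] / 1024 ** 2      # "kb" is reported in KiB
    used_gib = osd["kb_used"] / 1024 ** 2
    print(f"{osd['name']:>8}: {size_gib:8.1f} GiB size, "
          f"{used_gib:7.1f} GiB used ({osd['utilization']:.1f}%)")
```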
 
This is normal. Ceph adds the size of the RocksDB device to the total size of the OSD, but since the RocksDB device cannot hold any object data, it is counted as completely used. That is why you see ~10% usage on a fresh OSD.

BTW: an 80 GB RocksDB device seems a bit large for a 750 GB HDD. And where did you get such small HDDs?
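
For illustration, a minimal sketch of that accounting, using the figures mentioned in this thread (~750 GB HDD data device, 80 GB RocksDB device) as assumptions:

```python
# Sketch of the accounting described above: the DB device is added to the
# OSD's total size but counted as fully used. Figures assumed from this
# thread: ~750 GB HDD data device, 80 GB RocksDB (block.db) device on SSD.
data_dev_gb = 750   # BlueStore "block" device on the HDD
db_dev_gb = 80      # block.db (RocksDB + WAL) on the SSD

total_gb = data_dev_gb + db_dev_gb   # size reported for the OSD
used_gb = db_dev_gb                  # DB device counted as completely used

print(f"Reported size: {total_gb} GB")
print(f"Reported used: {used_gb} GB ({used_gb / total_gb:.1%})")
# -> roughly 9.6% "used" on a freshly created, empty OSD
```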
 
Thanks for the reply and info. OK, so it just doesn't show up on the SSDs because they don't have a separate DB/WAL device. Seems inconsistent, but I'll roll with it. I assumed it was "normal"; I just didn't understand why.

These are a few older SAS HDDs we're carrying forward to Proxmox. They're 900 GB, and I think just one is 1.2 TB, off the top of my head.

The 10%/80 GB is Proxmox's default: if you specify a DB drive, that's the size it uses. I had seen varying recommendations (4% or 10%), but since it's generally one small SSD each it doesn't really matter, so I didn't bother adjusting.
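
For reference, a small sketch of what those sizing rules work out to on one of these drives (900 GB assumed from above). If you do want something other than the default, I believe `pveceph osd create` also accepts an explicit `--db_dev_size` (in GiB), though check the option name against your PVE docs.

```python
# Rough comparison of the DB sizing rules mentioned above, for one of the
# 900 GB HDDs (size assumed from this thread).
osd_gb = 900

# Proxmox's default when a DB device is given without an explicit size is
# 10% of the OSD size; 4% is another commonly cited rule of thumb.
for rule in (0.10, 0.04):
    print(f"{rule:.0%} rule: {osd_gb * rule:.0f} GB block.db")
```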
 