Hello,
We now have a well-functioning Ceph cluster with 5 nodes (PVE 5.4, Ceph Luminous).
The OSDs are distributed like this:
* host1:
osd.12: 1750 GB
osd.13: 1750 GB
* host2:
osd.0: 894 GB
osd.1: 1750 GB
* host3:
osd.5: 1750 GB
* host4:
osd.4: 1750 GB
osd.5: 894 GB
* host5:
osd.14: 1750 GB
osd.6: 1750 GB
So the raw storage should be 14038 GB in total. With size=2 on the pool, I expected a total 'usable' capacity of around 7000 GB.
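For reference, here is a quick sketch of that arithmetic (plain Python, using the per-OSD sizes listed above; nothing Ceph-specific):
Code:
# Per-OSD sizes in GB, exactly as listed above
osds = {
    "host1": [1750, 1750],
    "host2": [894, 1750],
    "host3": [1750],
    "host4": [1750, 894],
    "host5": [1750, 1750],
}

raw_gb = sum(size for sizes in osds.values() for size in sizes)
print(raw_gb)       # 14038 GB of raw storage
print(raw_gb / 2)   # 7019 GB usable with size=2 (the 'risky' RAW/2 figure)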
Actually, ceph df reports this:
Code:
NAME           ID  USED     %USED  MAX AVAIL  OBJECTS
ceph-ssd-fast  1   4.46TiB  71.01  1.82TiB    1169441
So that makes 'only' 6280 GB (4.46 TiB used + 1.82 TiB MAX AVAIL = 6.28 TiB). Of course, I'll keep enough free space for data rebalancing in case of a node failure (as always, NEVER go to 100% disk usage in a Ceph cluster).
Does anyone have a clue about the missing ~800 GB of raw storage?
Could it be because host3 has only one OSD and less total storage than the other nodes?
I'm using http://florian.ca/ceph-calculator/ to estimate the 'safe' cluster size, and it reports a 'risky' cluster size of 7019 GB (basically RAW/2).
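Side note: I'm not completely sure which units are in play here (the OSD sizes could be decimal GB or GiB, while ceph df reports TiB), and that alone changes how much storage actually looks 'missing'. A quick sketch of both readings, assuming ceph df's TiB are binary units:
Code:
# ceph df pool figures (TiB = 2^40 bytes)
usable_tib = 4.46 + 1.82          # 6.28 TiB (USED + MAX AVAIL)

TIB_GB  = 1024**4 / 1e9           # 1 TiB ~= 1099.5 decimal GB
TIB_GIB = 1024                    # 1 TiB = 1024 GiB

raw = 14038                       # sum of the OSD sizes above

# If the OSD sizes are decimal GB:
print(round(raw / 2 - usable_tib * TIB_GB))    # ~114 GB 'missing'

# If the OSD sizes are really GiB:
print(round(raw / 2 - usable_tib * TIB_GIB))   # ~588 GiB 'missing'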
Thanks for your help,
Julien