CEPH Cluster

Norberto Iannicelli

Good morning everyone. I have a question and maybe someone can help me.
I have a disk array at OVH using Ceph, connected to my Proxmox cluster.

My Ceph pool has 6TB, but Proxmox shows only 4TB. Does anyone know why Proxmox doesn't report the actual size of the pool?
Is it a bug? Do I need a newer kernel?

Thank you in advance.

Node information:

Code:
proxmox-ve: 5.4-2 (running kernel: 4.15.18-12-pve)
pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec)
pve-kernel-4.15: 5.4-9
pve-kernel-4.15.18-21-pve: 4.15.18-47
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-17-pve: 4.15.18-43
pve-kernel-4.15.18-12-pve: 4.15.18-36
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-12
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-54
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-14
libpve-storage-perl: 5.0-44
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-7
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-38
pve-container: 2.0-40
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-7
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-4
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-54
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
 
From the CLI, check this:
Code:
ceph df
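If the cluster is on a recent enough Ceph release, 'ceph df detail' also shows any per-pool quota, which may be relevant here (just a suggestion, assuming the admin can run it on a mon node):
Code:
ceph df detail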

I am not a Ceph expert - so keep that in mind.

If you have 6TB of total physical disk storage, 6TB will not be what is available in Ceph. Try http://florian.ca/ceph-calculator/ to see what I mean.

The amount of space available depends on the 'size' of the pool. By size I mean the value shown at PVE panel > Ceph > Pools under
'Size/min', which is the number of copies of the data. See the sketch below.
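As a rough sketch (assuming the pool is named rbd and you have admin access to the cluster), the replica count can be checked from the CLI like this:
Code:
# number of copies kept of each object, and the minimum needed to keep serving I/O
ceph osd pool get rbd size
ceph osd pool get rbd min_size
# with size = 3, usable capacity is roughly raw capacity / 3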



Also, the storage graph in the PVE panel for Ceph seems to be a little inaccurate.
 
Where in Proxmox are you seeing only 4TB?

As above, if you can, attach the output of 'ceph df'.
 
I don't have access to the Ceph pool, but the administrator sent me the output of the command.

Code:
mon-01-4f5c54ea-4b67-4214-98ad-ba58fefb753f:~ # ceph df
GLOBAL:
    SIZE       AVAIL     RAW USED     %RAW USED
    16462G     5998G       10463G         63.56
POOLS:
    NAME     ID     USED      %USED     MAX AVAIL     OBJECTS
    rbd      0      3737G     90.45          394G      963405

My actual usage:

82.39% (3.84 TiB of 4.66 TiB)
 
Your RBD pool is set to 4000G, which is what Proxmox will report.

Your CEPH cluster has 6000G, but the RBD pool only has 4000G assigned to it.
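To verify that (a hedged sketch, assuming the pool is named rbd as in the ceph df output above and the Ceph admin runs it on a mon node), one can check whether a quota is set and what the replica count is:
Code:
# shows a per-pool quota, if one has been configured
ceph osd pool get-quota rbd
# replica count; MAX AVAIL in 'ceph df' already takes this into account
ceph osd pool get rbd size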
 
So the Ceph administrator will have to allocate the full 6TB to the pool, right? I use an OVH disk array.

Thank you very much for the replies.
 
Correct. However, you should never run Ceph at full capacity, as a single drive failure can make the cluster full and stop operating.

So given that you're already using nearly 4TB out of 6TB, I would say you're getting close to the point of having to upgrade. This may be why they set the pool to only 4TB out of the 6TB, to protect against a full OSD.
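As a rough illustration (exact commands can differ a bit between Ceph releases), the per-OSD usage and the full/nearfull thresholds can be checked like this:
Code:
# per-OSD utilisation - a single nearly-full OSD lowers MAX AVAIL for the whole pool
ceph osd df
# cluster-wide full / nearfull ratios (shown here on Luminous and later)
ceph osd dump | grep ratio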
 
