Recommended method for secondary Ceph Pool

gdi2k

Renowned Member
Aug 13, 2016
We have a 3-server PVE cluster using Ceph running on SSDs.

Now we would like to add a second, separate Ceph pool to the same cluster using slow HDDs (only for CCTV DVR duties).

What is the recommended procedure for configuring that these days? I've seen these approaches:

https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
https://elkano.org/blog/ceph-sata-ssd-pools-server-editing-crushmap/
https://forum.proxmox.com/threads/creating-using-multiple-ceph-pools.30181/

(not sure if that last one was successful or not)

Ceph version is Jewel on PVE 4.4-15

Thanks!
 
Hi,

Proxmox VE does not support the creation of a secondary physical pool in Ceph.
For this you have to edit the crushmap and create a second rule set; once that is done, you can assign a logical pool to the rule.
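For reference, the manual crushmap edit goes roughly like this (a sketch only; bucket and file names are placeholders, and the actual edit depends on your hardware layout):

```shell
# Export and decompile the current crushmap (run on a monitor node)
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Edit crushmap.txt: add a second root containing the slow OSDs,
# plus a rule that does "step take" on that root. Then recompile
# and inject the new map:
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin
```

Afterwards a pool can be pointed at the new rule, and that pool added as an RBD storage in PVE.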
 
Has anything changed in this regard? Can it be done from GUI now with more recent PVE versions, or do I still need to manually edit the crushmap and create a second rule set?
 
Update: I was able to get this working using Ceph's new device class feature:
https://ceph.com/community/new-luminous-crush-device-classes/

I added the HDDs as OSDs first via GUI.
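The same step can be done from the CLI if preferred; a sketch, assuming /dev/sdd is the blank HDD (on Luminous-era PVE 5 the command is `pveceph createosd`, newer versions use `pveceph osd create`):

```shell
# Create an OSD on the new HDD (device path is a placeholder)
pveceph createosd /dev/sdd

# Verify the OSD came up with the expected device class (hdd/ssd/nvme)
ceph osd tree
```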

Then I created one fast crush rule (using NVMe drives) and one slow crush rule (using HDDs) via the CLI:
ceph osd crush rule create-replicated ceph-fast default host nvme
ceph osd crush rule create-replicated ceph-slow default host hdd

Then I created two pools, one for fast, one for slow. After migrating all the disks to the new pools, I deleted the old pool with the Proxmox default ruleset (replicated_rule).
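The pool-creation step above looks roughly like this from the CLI (pool names and PG counts are examples only; pools can also be created in the GUI and the rule assigned afterwards):

```shell
# Create the two pools, each bound to its crush rule at creation time
ceph osd pool create ceph-fast-pool 128 128 replicated ceph-fast
ceph osd pool create ceph-slow-pool 128 128 replicated ceph-slow

# Or change the rule on an existing pool (e.g. before deleting the old one)
ceph osd pool set ceph-slow-pool crush_rule ceph-slow
```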

Working well so far, although the Ceph overview page shows total usage for all OSDs / pools combined rather than per pool, which would be more useful.
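Per-pool usage is still available from the CLI even when the overview page only shows cluster totals:

```shell
# Usage broken down per pool
ceph df detail

# Per-OSD utilisation, grouped by the crush hierarchy,
# which makes the fast/slow split visible
ceph osd df tree
```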
 
Hi gdi2k,

Currently I am in the process of creating two different pools, one on SSDs and one on SATA disks. The CLI shows the different weights, but the Proxmox console shows a consolidated number. For example, my SSD pool weight is about 2.7 TB and my SATA pool weight is about 5.4 TB, yet the Proxmox console shows the size as 8.1 TB. Is this the default behaviour? Can it be configured to show each pool with its respective size?

Thanks in advance.
 
Please check out the docs. The overview page shows the cluster totals, while each storage shows the pool usage values.
https://pve.proxmox.com/pve-docs/chapter-pveceph.html
 
Hi Alwin,

Thanks for your prompt reply. Please note that I am using the old Proxmox v3.4 with Ceph Giant. So to confirm your reply: if we create two different pools, SSD and SATA, by editing the CRUSH map, the console will still show the whole storage space of the cluster? So we cannot configure the Proxmox console to show the SSD pool with its respective size and the SATA pool with its respective size?

Regards,
Pradeep. S
 
PVE 3 and Ceph Giant are both EoL and lack a lot of features. I strongly advise upgrading!

Thanks for the link.

But the link doesn't answer my query. I just want to know whether Proxmox shows the pool sizes individually or only the entire cluster storage size.

As mentioned in my test case, I have two different pools, named SSD and SATA. SSD has a weight of 2.7 TB and SATA has a weight of 5.4 TB. But in the Proxmox UI, instead of the pool size showing as 2.7 TB, it shows 8.1 TB (i.e. the entire size of the cluster).

Is this the default? Does Proxmox v3.4 not segregate the pool sizes in the UI, and instead show the entire cluster storage size for all pools?

Thanks in advance.
 
There is no distinction between pool and cluster size in old PVE 3/4; only the cluster size is shown.

Note:
In Ceph, weight defines a preference for how likely CRUSH is to place a PG on an OSD. The size of an OSD is given by the usable disk space of the underlying media. At its default setting the weight simply corresponds to the OSD size, but it is freely adjustable. So 'weightage of 2.7 TB' is not a term in Ceph.
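To illustrate: the crush weight defaults to the device size in TiB but can be changed independently of it, e.g. to shift PGs away from one OSD without removing it (osd.5 and the value are placeholders):

```shell
# Show current crush weights per OSD
ceph osd tree

# Lower the crush weight of osd.5 so CRUSH places fewer PGs on it
ceph osd crush reweight osd.5 1.0
```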
 
Thanks for your update. You have been very helpful. :)
 
As said, PVE 3 and Ceph Giant are both EoL and lack a lot of features; I strongly advise upgrading!

Aside from that, you need to edit the crushmap and create two roots. There is no device-class feature in Giant.
http://cephnotes.ksperis.com/blog/2015/02/02/crushmap-example-of-a-hierarchical-cluster-map
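A minimal sketch of the two-roots approach from that link, in Giant-era crushmap syntax (bucket names, ids, and weights are placeholders; a matching "sata" root and rule would be defined the same way):

```
root ssd {
    id -5                   # negative bucket id, must be unique in the map
    alg straw
    hash 0                  # rjenkins1
    item node1-ssd weight 1.000
    item node2-ssd weight 1.000
}

rule ssd {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take ssd
    step chooseleaf firstn 0 type host
    step emit
}
```

A pool is then bound to the rule via its ruleset number, e.g. `ceph osd pool set <pool> crush_ruleset 1` (the pre-Luminous form of the setting).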
Hi Alwin,

Which would be the recommended versions of both Proxmox and Ceph so that in the Proxmox GUI I can see the pool sizes separately and not only the entire cluster size?

In my current scenario with Proxmox 3.4 and Ceph Giant, I can only see the cluster size in the GUI, not the SSD and SATA pool sizes separately. Hence I request you to recommend a reliable version of Ceph and Proxmox to address my query.

Thanks in advance

Regards,
Pradeep. S
 
