Ceph HCI on fresh install

Mar 20, 2020
Hi all!

My totally new POC HCI cluster is up and running. Now I'm trying to configure Ceph.

So for my 3 nodes (1 SSD and 4 HDDs each):
+ I have a dedicated network.
+ I installed a monitor & a manager on each node,
+ I created an OSD for each disk, so 15 OSDs so far (quick check below).
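
To double-check that every disk ended up with the right device class, something along these lines should do (output will of course differ per cluster):

    # list OSDs per host with their device class, size and usage
    ceph osd df tree
    # list the device classes Ceph detected (should show "hdd" and "ssd" here)
    ceph osd crush class ls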

--------------

Here is my Ceph usage scenario: I have an application developed on a PaaS w/ an S3 bucket. The app runs in a k8s fashion + some external services like a Postgres DB + HAProxy for k8s and RGW + 3 RGW + Rancher/Helm for installation. So I need:
+ a volume for the VMs supporting the app (a few TB w/ 2× protection) => RBD
+ a volume for my S3-style storage (10 TB, 3× protected for the sake of users' data) => RGW
+ a volume for persistent volumes on my k8s (a few hundred GB w/ 2× protection) => CephFS via Rook
+ a volume for DB "backup" (a few hundred GB w/ 3× protection) => CephFS
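
If I understand the docs correctly, the "2×/3× protection" above maps to the pool's replica count, so roughly (pool names are only placeholders):

    # 3 replicas, keep serving I/O while at least 2 copies are online
    ceph osd pool set my-s3-pool size 3
    ceph osd pool set my-s3-pool min_size 2
    # 2 replicas for the less critical data
    ceph osd pool set my-vm-pool size 2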

----------------
How to achieve this? :)
Volumes are referenced as pools with associated PGs, am I wrong?
My instinct told me to create CRUSH rules first, but in the documentation it seems that creating the pool comes first?
 
I don't fully get what you're trying to accomplish or how you think that would map down to Ceph. I will explain how Ceph works in combination with PVE, and that will hopefully give you the information you need :)

At the "bottom" you have the OSDs, which just store objects. For easier accounting, objects are grouped into placement groups (PGs). Placement groups are part of a pool.
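
Creating a pool by hand and inspecting it could look roughly like this (pool name and pg_num are just examples; on PVE you would normally use the GUI or pveceph instead):

    # create a replicated pool with 128 PGs (size pg_num to your OSD count)
    ceph osd pool create vm-hdd 128 128 replicated
    # show all pools with their size, pg_num and assigned crush rule
    ceph osd pool ls detail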

PVE does not write objects directly but uses two different abstraction layers that Ceph provides. One is RBD (RADOS block device), which provides virtual block devices that can be used for the disks of guests.
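
On the PVE side such a pool is then referenced as an RBD storage, for example something along these lines (storage ID and pool name are made up):

    # add the Ceph pool as an RBD storage for guest disks and containers
    pvesm add rbd ceph-vm --pool vm-hdd --content images,rootdir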

The other is CephFS, which provides a POSIX file system on top of Ceph.
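
On a hyper-converged PVE cluster, setting up CephFS (MDS, the two pools and the storage entry) can be done with something like:

    # create a metadata server on this node, then the file system itself
    pveceph mds create
    pveceph fs create --name cephfs --add-storage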

If you now have different types of OSDs, HDD and SSD for example, you can create different CRUSH rules that specify a device class as a limit. Once you assign such a device-class-specific rule to a pool, it will start to use only these OSDs and move over any data that is on the wrong device type.

The Ceph documentation has a section on how to create these rules and how to assign them to a pool: https://docs.ceph.com/en/latest/rados/operations/crush-map/#device-classes
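
In short, the rules from that chapter boil down to something like this (rule names are only examples):

    # replicated rule limited to HDD OSDs, failure domain = host
    ceph osd crush rule create-replicated rule-hdd default host hdd
    # and the same for the SSDs
    ceph osd crush rule create-replicated rule-ssd default host ssd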

This means that you could have two RBD pools, one storing the faster guests on SSDs and one storing the slower guests on HDDs. You can also assign such a rule to the pools used for the CephFS (cephfs_data and cephfs_metadata).
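
Assigning a rule to an existing pool is then a single command per pool, roughly (pool names are examples again):

    # fast guests on SSDs, everything else on HDDs
    ceph osd pool set vm-ssd crush_rule rule-ssd
    ceph osd pool set vm-hdd crush_rule rule-hdd
    # the CephFS pools can be pinned the same way
    ceph osd pool set cephfs_data crush_rule rule-hdd
    ceph osd pool set cephfs_metadata crush_rule rule-ssd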
 
Thanks for this Aaron. After a lot of reading of the Ceph & Proxmox documentation and of forums/blogs on this:
I now have my OSDs, pools/rules & volumes for VMs. For CephFS, after some thinking about the future architecture, I'll not use it at this time. I'm now checking the S3/RGW configuration & documentation.

I'll try to achieve this:
1 pool w/ dedicated HDDs based on a crushmap rule (already created), rulehdd => OK
3 RGW installed on my 3 nodes => TBD
1 HAProxy to balance between them => TBD
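
For the two TBD parts, my current plan looks roughly like this (node names, ports and section names are just my working assumptions, not tested yet):

    # /etc/ceph/ceph.conf -- one section per gateway node
    [client.rgw.node1]
        host = node1
        rgw_frontends = beast port=7480

    # /etc/haproxy/haproxy.cfg -- balance S3 traffic over the three gateways
    frontend s3_in
        bind *:80
        mode http
        default_backend s3_rgw
    backend s3_rgw
        mode http
        balance roundrobin
        server rgw1 node1:7480 check
        server rgw2 node2:7480 check
        server rgw3 node3:7480 check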
 
