Create a pool from selected OSDs in a Proxmox Ceph cluster

Abhimnyu

New Member
Jan 7, 2019
We are setting up a Proxmox cluster. We are using both SSDs and HDDs and want to create two pools, one for the SSDs and one for the HDDs.

Please suggest the right process to do this.
 
Thanks Tom for your reply.

I have done the following (a rough sketch of the corresponding commands is below):
1) Created the OSDs.
2) Created the device class and added the OSDs to it.
3) Created a new CRUSH rule targeting the new device class.
4) Created the pool.
5) Assigned the new CRUSH rule to that pool.
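For reference, these steps roughly correspond to commands like the following (the device class name "fast-ssd", rule name "fast-rule", pg_num, and OSD IDs are placeholders, not necessarily exactly what I ran):

Code:
# 1) create the OSDs on each node, one per disk ("pveceph createosd" on older releases)
pveceph osd create /dev/sdX

# 2) move the chosen OSDs into their own device class
ceph osd crush rm-device-class osd.0 osd.1 osd.2
ceph osd crush set-device-class fast-ssd osd.0 osd.1 osd.2

# 3) create a replicated CRUSH rule that only targets this class
ceph osd crush rule create-replicated fast-rule default host fast-ssd

# 4) create the pool (pg_num is a placeholder)
ceph osd pool create ceph-test 64 64

# 5) assign the new CRUSH rule to the pool
ceph osd pool set ceph-test crush_rule fast-rule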

But after assigning the new rule to the pool, the available space is reduced: before assigning the rule it showed 13G, and after assigning the rule it shows 6.6G.

Please suggest.
 
But after assigning the new rule to the pool, the available space is reduced: before assigning the rule it showed 13G, and after assigning the rule it shows 6.6G.
Yes, of course. Run ceph df; it should show you that each class now has its own total, as data will only be distributed onto the respective class.
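To see which OSDs ended up in which class, something like this can be used (the class name is just an example):

Code:
# list all device classes known to the CRUSH map
ceph osd crush class ls

# list the OSDs assigned to a given class
ceph osd crush class ls-osd ssd

# show per-OSD usage within the CRUSH tree, including device classes
ceph osd df tree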
 
Thanks Alwin,

This might be a very silly question, but I did not understand. Can you please explain a little and provide a reference link? Below is the output of "ceph df".
The pool "ceph-test" gets reduced when we move it to the new CRUSH rule.

[root@test-1 my-cluster]# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    72 GiB     63 GiB     9.1 GiB          12.68
POOLS:
    NAME            ID     USED     %USED     MAX AVAIL     OBJECTS
    .rgw.root        1      0 B         0        20 GiB           0
    normal-pool      3      0 B         0        20 GiB           0
    ceph-test        4      0 B         0       6.6 GiB           0
    fast-pool        5      0 B         0        20 GiB           0
 
[root@test-1 my-cluster]# ceph df
Please use CODE tags for posting command output, it will keep the formatting. You can find them under the three dots (...) in the edit window.

GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    72 GiB     63 GiB     9.1 GiB          12.68
It seems you are not on Ceph Nautilus. What is your output of ceph versions?
 
Code:
[root@test-1 my-cluster]# ceph versions
{
    "mon": {
        "ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable)": 3
    },
    "mgr": {
        "ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable)": 1
    },
    "osd": {
        "ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable)": 9
    },
    "mds": {},
    "overall": {
        "ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable)": 13
    }
}
 
Hm... we never supported Ceph Mimic. If you are using Ceph hyperconverged on Proxmox, then please install our Ceph packages to have a supported setup.
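On a hyperconverged Proxmox VE node, the Proxmox-built Ceph packages are normally installed with something along these lines (the exact version selection depends on the Proxmox VE release):

Code:
# installs the Ceph packages from the Proxmox repositories on this node
pveceph install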

But to answer your question: Ceph Mimic does not yet show per-pool usage. With the rule you created, all data of the pool that uses this rule will be placed only on the disks of the specified device class.
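As a rough sanity check, assuming the pool keeps the default replica size of 3: MAX AVAIL is approximately the free raw space of the pool's device class divided by the replication factor, so roughly 20 GiB of raw space in the new class ends up as about 20 / 3 ≈ 6.6 GiB usable. Which rule (and thus which class) a pool actually uses can be checked with something like this (the rule name is just an example):

Code:
# which CRUSH rule the pool uses
ceph osd pool get ceph-test crush_rule

# dump the rule to confirm which device class it selects
ceph osd crush rule dump fast-rule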
 
Here is a sample of what it would look like:
Code:
[admin@kvm5d ~]# ceph df
RAW STORAGE:
    CLASS     SIZE       AVAIL      USED        RAW USED     %RAW USED
    nvme      17 TiB     16 TiB     1.1 TiB      1.1 TiB          6.33
    ssd       81 TiB     42 TiB      39 TiB       39 TiB         48.65
    TOTAL     99 TiB     58 TiB      40 TiB       41 TiB         41.15
 
POOLS:
    POOL                      ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    rbd_ssd                    0      10 TiB       3.78M      31 TiB     49.14        11 TiB
    cephfs_data                2      18 GiB       4.59k      54 GiB      0.16        11 TiB
    cephfs_metadata            3     5.4 MiB          60      17 MiB         0        11 TiB
    ec_nvme                   16      12 KiB           1      80 KiB         0       9.2 TiB
    rbd_nvme                  17     289 GiB      73.94k     866 GiB      5.21       5.1 TiB
    ec_compr_nvme             19     154 GiB      41.32k     258 GiB      1.61       9.2 TiB
    ec_ssd                    20         0 B           0         0 B         0        22 TiB
    ec_compr_ssd              21     4.9 TiB       1.74M     7.9 TiB     19.48        22 TiB
    device_health_metrics     22     4.0 MiB          20     4.0 MiB         0       5.1 TiB
 
