Ceph placement groups and usable storage capacity

Discussion in 'Proxmox VE: Installation and configuration' started by Yvon, Dec 21, 2018.

  1. Yvon

    Yvon Member

    Alright.
    I think I've cleared up a lot of my questions, thanks a lot again!
     
  2. Yvon

    Yvon Member

    Alright.
    By the way, is cache tiering interesting with replicated pools (I will store VMs on them), since I'll mostly use hard drives (maybe SSDs for logs)?
    I may invest in a few SSDs to create a cache tier pool to increase performance.
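    If I understand the Ceph docs correctly, putting a cache tier in front of a replicated pool would look roughly like this (pool names are just examples, and the SSD-backed 'cache-pool' would need its own CRUSH rule):

    # Attach an SSD-backed pool as writeback cache in front of the HDD-backed VM pool
    ceph osd tier add vm-pool cache-pool
    ceph osd tier cache-mode cache-pool writeback
    ceph osd tier set-overlay vm-pool cache-pool
    # A cache tier needs a hit set to track object usage
    ceph osd pool set cache-pool hit_set_type bloom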
     
  3. Alwin

    Alwin Proxmox Staff Member
  4. Yvon

    Yvon Member

    Never mind:
    I found it yesterday, a few minutes after posting my question...
     
  5. Yvon

    Yvon Member

    So for now I think I'll stick to hard drives, with maybe SSDs for the journals.

    But I wonder what is actually written in those journals. Is it logs or metadata, and will keeping the journal on the OSD itself really impact performance?
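    For reference, this is roughly how I understand an OSD is created with its journal (FileStore) or DB/WAL (BlueStore) on a separate SSD via ceph-volume (device names are only examples):

    # FileStore: data on the HDD, journal on an SSD partition
    ceph-volume lvm create --filestore --data /dev/sdb --journal /dev/sdc1
    # BlueStore: data on the HDD, RocksDB metadata (and WAL) on the SSD
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdc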
     
  6. Yvon

    Yvon Member

    Seems like a good idea, since running applications that need high I/O on spinning disks would be nonsense. I thought about something like this: [attached diagram: upload_2019-1-16_16-23-52.png]

    Obviously I'll need to modify the CRUSH map to map a specific pool to a specific type of OSD.

    I found what I was looking for: https://forum.proxmox.com/threads/ceph-ssd-and-hdd-pools.42032/
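    If I read that thread right, with device classes it boils down to something like this (rule and pool names are just examples):

    # One replicated rule per device class, host as failure domain
    ceph osd crush rule create-replicated replicated_hdd default host hdd
    ceph osd crush rule create-replicated replicated_ssd default host ssd
    # Point a pool at the SSD-only rule ('vm-ssd' is a hypothetical pool)
    ceph osd pool set vm-ssd crush_rule replicated_ssd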
     
    #26 Yvon, Jan 16, 2019
    Last edited: Jan 16, 2019
  7. Yvon

    Yvon Member

    rule replicated_ssd {
            id 2
            type replicated
            min_size 1
            max_size 10
            step take default class ssd
            step chooseleaf firstn 0 type host
            step emit
    }

    I'm not sure what the two lines in red do.
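    My tentative reading of the two step lines that mention the device class and the failure domain:

    # Start the selection at the 'default' root, restricted to OSDs of device class 'ssd'
    step take default class ssd
    # Pick as many OSDs as the pool's size (firstn 0), each one on a different host
    step chooseleaf firstn 0 type host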
     
  8. Yvon

    Yvon Member

    Aside from an SSD-only pool, is it possible when creating a pool to specifically target some OSDs? Or does that go against the fundamental principle of Ceph, which is to dynamically rebalance data?
     
  9. Alwin

    Alwin Proxmox Staff Member
  10. Yvon

    Yvon Member

    Yeah, so you can make a pool which targets a specific device class (HDD, SSD, NVMe), but you can't specifically target an OSD by its ID or its name in the CRUSH map, which makes sense since an average production cluster hosts way more than 10 OSDs.

    By the way, would SAS and SATA hard drives both still be considered HDD by Ceph?
    Am I right?
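    If that's the case, I suppose the detected class can also be listed and overridden by hand, something like this (osd.1 is only an example):

    # Show the device classes Ceph assigned automatically
    ceph osd crush class ls
    # Move osd.1 into a custom class, e.g. to separate SAS from SATA drives
    ceph osd crush rm-device-class osd.1
    ceph osd crush set-device-class sas osd.1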
     
    #30 Yvon, Jan 18, 2019
    Last edited: Jan 21, 2019
  11. Yvon

    Yvon Member

    Is it possible to select which OSDs a pool will be placed on (in case I stay with a hard-drive-only cluster)?

    Targeting a host instead of a class would be good enough.
     
    #31 Yvon, Jan 21, 2019
    Last edited: Jan 21, 2019
  12. Alwin

    Alwin Proxmox Staff Member
    OSD IDs are reusable and not fixed. When you edit the CRUSH map, things like this can be possible. If you don't have a deep understanding of what will happen to your data, I advise against it. To give you an idea, an older post, but still valid for the most part:
    http://cephnotes.ksperis.com/blog/2015/02/02/crushmap-example-of-a-hierarchical-cluster-map

    See above.
     
  13. Yvon

    Yvon Member

    Thanks Alwin!

    From what I've read, the data placement and replication with such a scenario would be awful.

    https://www.sebastien-han.fr/blog/2012/12/07/ceph-2-speed-storage-with-crush/

    This article looks promising, without the cons of the uneven replication in the link you posted.
    Since OSDs have a number (device 0 osd.0 class hdd, device 1 osd.1 class hdd), if an OSD fails and gets removed, do the numbers of all the remaining OSDs stay the same, or are they decremented by 1?
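    If I understand the earlier remark about reusable IDs correctly, the remaining OSDs keep their numbers and only the freed ID gets handed out again later. The usual removal sequence seems to be roughly this (osd.2 as an example):

    # Take osd.2 out and remove it; osd.0, osd.1, osd.3, ... keep their IDs,
    # the freed ID 2 is simply reused by the next OSD that gets created
    ceph osd out osd.2
    systemctl stop ceph-osd@2
    ceph osd crush remove osd.2
    ceph auth del osd.2
    ceph osd rm osd.2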
     
    #33 Yvon, Jan 22, 2019
    Last edited: Jan 22, 2019
  14. Yvon

    Yvon Member

    I tried to aggregate OSDs into buckets like this:

    pool ssd {
            id -9
            alg straw2
            hash 0  # rjenkins1
            item osd.0 weight 0.455
            item osd.2 weight 0.454
    }

    pool sas {
            id -10
            alg straw2
            hash 0  # rjenkins1
            item osd.1 weight 0.455
            item osd.3 weight 0.454
    }

    with a rule for each of them:

    rule ssd {
            ruleset 3
            type replicated
            min_size 1
            max_size 10
            step take ssd
            step choose firstn 0 type osd
            step emit
    }

    rule sas {
            ruleset 4
            type replicated
            min_size 1
            max_size 10
            step take sas
            step choose firstn 0 type osd
            step emit
    }

    But as soon as I try to recompile the CRUSH map I get this error: bucket type 'pool' is not defined
     
  15. Yvon

    Yvon Member

    I found why Ceph printed that error message: at the beginning of the CRUSH map, I need to add pool to the bucket types:

    # types
    type 0 osd
    type 1 host
    type 2 chassis
    type 3 rack
    type 4 row
    type 5 pdu
    type 6 pod
    type 7 room
    type 8 datacenter
    type 9 region
    type 10 root
    type 11 pool
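    For anyone else hitting this, the decompile/edit/recompile round trip is roughly:

    # Export and decompile the current CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # ... edit crushmap.txt (add 'type 11 pool', the buckets and the rules) ...
    # Recompile and inject it back into the cluster
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new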
     