Ceph SSD Pool

Discussion in 'Proxmox VE: Installation and configuration' started by mcdowellster, Sep 10, 2018.

  1. mcdowellster

    Hello,

    I've been testing Ceph in my dev cluster for a bit now, and I've finally added a few SSDs. I edited the CRUSH map to force the default replicated rule onto the hdd device class, created a new rule restricted to the ssd class, and created a new pool that uses it.
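
    For reference, roughly the same split can be set up from the CLI with device-class rules instead of hand-editing the map; a minimal sketch, assuming placeholder pool names and PG counts, and an osd failure domain since everything sits on one host:

    Code:
    # hdd-only and ssd-only replicated rules (root "default", failure domain "osd")
    ceph osd crush rule create-replicated replicated_hdd default osd hdd
    ceph osd crush rule create-replicated replicated_ssd default osd ssd
    
    # point the existing pool at the hdd rule and make a new pool on the ssd rule
    ceph osd pool set rbd crush_rule replicated_hdd
    ceph osd pool create ssd-pool 64 64 replicated replicated_ssd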

    If I write any data to the new pool, I instantly get a health warning about misplaced objects. Any thoughts?

    Code:
    ceph -s
      cluster:
        id:     e791fb44-9c38-4e86-8edf-cdbc5a3f7d63
        health: HEALTH_WARN
                5/567160 objects misplaced (0.001%)
    
      services:
        mon: 3 daemons, quorum FEDERATION,DS9,isolinear
        mgr: isolinear(active), standbys: FEDERATION, DS9
        osd: 10 osds: 10 up, 10 in; 64 remapped pgs
    
      data:
        pools:   2 pools, 314 pgs
        objects: 184k objects, 738 GB
        usage:   2224 GB used, 22920 GB / 25145 GB avail
        pgs:     5/567160 objects misplaced (0.001%)
                 250 active+clean
                 64  active+clean+remapped
    
      io:
        client:   25361 kB/s rd, 71561 B/s wr, 7 op/s rd, 8 op/s wr
    Edit: there is only one host with storage; the cluster is configured as a single node. The other monitors were added so those nodes can use storage from isolinear (Star Trek nerd here).


    Code:
    # begin crush map
    tunable choose_local_tries 0
    tunable choose_local_fallback_tries 0
    tunable choose_total_tries 50
    tunable chooseleaf_descend_once 1
    tunable chooseleaf_vary_r 1
    tunable chooseleaf_stable 1
    tunable straw_calc_version 1
    tunable allowed_bucket_algs 54
    
    # devices
    device 0 osd.0 class hdd
    device 1 osd.1 class hdd
    device 2 osd.2 class hdd
    device 3 osd.3 class hdd
    device 4 osd.4 class hdd
    device 5 osd.5 class hdd
    device 6 osd.6 class hdd
    device 7 osd.7 class hdd
    device 8 osd.8 class ssd
    device 9 osd.9 class ssd
    
    # types
    type 0 osd
    type 1 host
    type 2 chassis
    type 3 rack
    type 4 row
    type 5 pdu
    type 6 pod
    type 7 room
    type 8 datacenter
    type 9 region
    type 10 root
    
    # buckets
    host isolinear {
        id -3        # do not change unnecessarily
        id -4 class hdd        # do not change unnecessarily
        id -5 class ssd        # do not change unnecessarily
        # weight 24.556
        alg straw2
        hash 0    # rjenkins1
        item osd.0 weight 1.819
        item osd.1 weight 1.819
        item osd.2 weight 2.728
        item osd.3 weight 2.728
        item osd.4 weight 3.638
        item osd.5 weight 3.638
        item osd.6 weight 3.638
        item osd.7 weight 3.638
        item osd.8 weight 0.455
        item osd.9 weight 0.455
    }
    root default {
        id -1        # do not change unnecessarily
        id -2 class hdd        # do not change unnecessarily
        id -6 class ssd        # do not change unnecessarily
        # weight 24.556
        alg straw2
        hash 0    # rjenkins1
        item isolinear weight 24.556
    }
    
    # rules
    rule replicated_rule {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default class hdd
        step choose firstn 0 type osd
        step emit
    }
    rule FastStoreage {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take default class ssd
        step chooseleaf firstn 0 type host
        step emit
    }
    
    # end crush map
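
    In case it helps anyone hitting the same warning, these are handy for checking which CRUSH rule a pool actually uses and which PGs are remapped (the pool name is just an example):

    Code:
    # which rule each pool is mapped to
    ceph osd pool ls detail
    ceph osd pool get ssd-pool crush_rule
    
    # OSD tree broken down by device class and usage
    ceph osd df tree
    
    # PGs that are currently remapped
    ceph pg dump pgs_brief | grep remapped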
    
  2. mcdowellster

    Never mind, I found my mistake: "step chooseleaf firstn 0 type host" should be "step choose firstn 0 type osd".
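
    For anyone else fixing this, the change goes in by decompiling the map, editing the rule, and loading it back; roughly (file names are arbitrary):

    Code:
    # pull the current CRUSH map and decompile it to text
    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    
    # in crush.txt, inside the rule, change
    #   step chooseleaf firstn 0 type host
    # to
    #   step choose firstn 0 type osd
    
    # recompile and inject it back into the cluster
    crushtool -c crush.txt -o crush-new.bin
    ceph osd setcrushmap -i crush-new.bin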

    Sometimes you just need to see your mistake in a forum =:)
     
  3. AlexLup

    A word of warning: running with only 1 replica is not recommended!
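
    You can check and bump it per pool (the pool name is just an example):

    Code:
    # current replica counts
    ceph osd pool get ssd-pool size
    ceph osd pool get ssd-pool min_size
    
    # keep 3 copies and require at least 2 before serving I/O
    ceph osd pool set ssd-pool size 3
    ceph osd pool set ssd-pool min_size 2
    
    # note: with a single host and an osd failure domain, all copies still live in the same box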
     
  4. mcdowellster

    Naturally!

    I'm currently testing. I was running a pure ZFS three-way mirror, but I finally got a decent system with enough removable drive caddies to give this a test run. This might sound strange, but I love how the hashing makes all the drive activity lights blink.

    Anyway, I back up all data to external storage, so the data-loss potential has been mitigated a bit.
     