Ceph 0.94 (Hammer) - anyone cared to update?

Discussion in 'Proxmox VE: Installation and configuration' started by ScOut3R, Apr 14, 2015.

  1. ScOut3R

    ScOut3R Member

    Joined:
    Oct 2, 2013
    Messages:
    55
    Likes Received:
    3
    Hi,

    I'm running the latest Proxmox 3.4 from the subscription repo, backed by a Ceph cluster running Giant. Hammer was recently released and I'm planning to upgrade my Ceph cluster but I would like to ask around if anyone has Proxmox running with Hammer already?
     
  2. udo

    udo Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 22, 2009
    Messages:
    5,835
    Likes Received:
    159
    Hi,
    0.94 has some important issues which are solved in 0.94.1 (released yesterday evening).
    I will update our cluster soon... but I want to hear some experiences on the mailing list first.

    Udo
     
  3. ScOut3R

    ScOut3R Member

    Joined:
    Oct 2, 2013
    Messages:
    55
    Likes Received:
    3
    Hi,

    yes, I was careless, I meant 0.94.1. :) I would like to update because of the CRUSH map fix which isn't in Giant, and I think it's affecting my cluster. I'm just afraid to upgrade because I'm not sure how the KVM stack would handle the new version, and I don't have a test infrastructure at hand to try it out.
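
    In case it helps anyone following along, enabling the fixed CRUSH behaviour after the upgrade is just a tunables change. This is only a rough sketch under my assumptions (that the "hammer" tunables profile is available on 0.94, and that you accept the data movement it triggers):

    Code:
    # check the current CRUSH tunables first
    ceph osd crush show-tunables
    # switch to the hammer profile, which includes the improved straw weight calculation
    # (expect rebalancing, so do this in a quiet window)
    ceph osd crush tunables hammer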
     
  4. spirit

    spirit Well-Known Member

    Joined:
    Apr 2, 2010
    Messages:
    3,323
    Likes Received:
    135
    I upgraded from Giant to 0.94.1 two days ago (Ceph on Proxmox) without any problem.

    VMs don't need to be restarted.
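
    For anyone wondering about the order of operations, a rolling upgrade like this usually means upgrading the packages and restarting the Ceph daemons node by node (mons before OSDs); the running VMs keep talking to the cluster through librbd the whole time, which is why they don't need a restart. A rough sketch, assuming the sysvinit service names used by Hammer-era packages on Wheezy:

    Code:
    # on each Ceph node, one at a time
    apt-get update && apt-get dist-upgrade
    # restart the monitor(s) on this node first
    service ceph restart mon
    # then the OSDs on this node, and wait for HEALTH_OK before the next node
    service ceph restart osd
    ceph health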
     
  5. ScOut3R

    ScOut3R Member

    Joined:
    Oct 2, 2013
    Messages:
    55
    Likes Received:
    3
    Thanks!

     
  6. Konstantinos Pappas

    Konstantinos Pappas New Member

    Joined:
    Jan 7, 2015
    Messages:
    27
    Likes Received:
    0
    I can confirm this as well: I updated 9 nodes from Firefly to Hammer without any problem. Just in case, test on a demo server first and only then move to production.
     
  7. adoII

    adoII Member

    Joined:
    Jan 28, 2010
    Messages:
    124
    Likes Received:
    0
    Hi all,

    What is your experience with the updated Ceph cluster? Is Ceph Hammer generally faster, or does it have lower latency for VMs compared to Giant? Did you notice any differences?
     
  8. ScOut3R

    ScOut3R Member

    Joined:
    Oct 2, 2013
    Messages:
    55
    Likes Received:
    3
    I updated my 8 node cluster today without any trouble. It was seamless and relatively fast. Now my cluster is moving data to compensate for the CRUSH bug. :) I don't see any performance improvement yet, but the CRUSH bugfix is a big win for us because we have different OSD (HDD) sizes and the weights weren't calculated correctly. I expect to see some improvement from this, though.
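
    For anyone following the same path, the rebalancing can be watched with the usual status commands (nothing Proxmox-specific here):

    Code:
    # overall cluster state, including PGs still backfilling/recovering
    ceph status
    # continuous view of recovery progress
    ceph -w
    # per-pool and overall capacity usage
    ceph df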
     
  9. udo

    udo Well-Known Member
    Proxmox Subscriber

    Joined:
    Apr 22, 2009
    Messages:
    5,835
    Likes Received:
    159
    Hi,
    please report on the improvements and how long your "rebuild" takes.

    Udo
     
  10. ScOut3R

    ScOut3R Member

    Joined:
    Oct 2, 2013
    Messages:
    55
    Likes Received:
    3
    For the sake of clarity, I am running 1 OSD per HDD. I have two different node setups: one with 6 OSDs of 3 TB each, and one with 4 OSDs of 1 TB each. I have 3 of the 3 TB nodes, which means 18 x 3 TB OSDs in total, and 5 of the 1 TB nodes, which means 20 x 1 TB OSDs in total.

    Originally the cluster was running off the 3 TB nodes, with each OSD weighted 2.7 and the whole node 16.4. When adding the other 5 nodes I intended to increase the new OSDs' weight in 0.1 steps to control the load distribution. I was running Firefly when this addition occurred. Surprisingly, with a 0.1 weight the new OSDs' used capacity ranged from 20% to 40%, which was quite odd! I thought it was because the PGs were too big, but soon Sage introduced a fix for Firefly regarding the CRUSH weight bug. Sadly, I was running Giant by then and it hadn't received this bugfix.

    Anyway, after upgrading to Hammer and enabling the CRUSH fix, about 1/3 of the cluster started to rebalance. This process finished in about 12 hours. After that I started to increment the 1 TB OSDs' weight, and the PG distribution seems normal now. I am currently at a weight of 0.7 and the used capacity is between 55% and 75%. I assume the PGs are distributed more intelligently now, with the smaller OSDs running more PGs than before, and my cluster seems to be more responsive under high load.
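
    For reference, the step-by-step weight changes described above are just standard CRUSH reweighting; roughly something like this (osd.20 is a placeholder id, and every step moves some data around):

    Code:
    # raise one of the 1 TB OSDs' CRUSH weight to the next step
    ceph osd crush reweight osd.20 0.7
    # check the resulting weights and space usage before taking the next step
    ceph osd tree
    ceph df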
     
  11. Sakis

    Sakis Member
    Proxmox Subscriber

    Joined:
    Aug 14, 2013
    Messages:
    119
    Likes Received:
    3
    When will pveceph utility have available hammer edition for installation?
    Code:
    pveceph install -version hammer
    400 Parameter verification failed.
    version: value 'hammer' does not have a value in the enumeration 'dumpling, emperor, firefly, giant'
    pveceph install [OPTIONS]
     
    #11 Sakis, Apr 24, 2015
    Last edited: Apr 24, 2015
  12. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,508
    Likes Received:
    400
    It's already in git. The package will be uploaded in the next few days.
     
  13. Sakis

    Sakis Member
    Proxmox Subscriber

    Joined:
    Aug 14, 2013
    Messages:
    119
    Likes Received:
    3
  14. tom

    tom Proxmox Staff Member
    Staff Member

    Joined:
    Aug 29, 2006
    Messages:
    13,508
    Likes Received:
    400
    Install Firefly with pveceph install and then upgrade immediately to Hammer (by changing the sources.list.d/ceph).

    Or just wait for the new pveceph packages (a few days).
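
    If you go the manual route, the repository switch is a one-line change followed by a dist-upgrade. The file name and repository line below are my assumptions for a Proxmox 3.x (Wheezy) node; adjust them to whatever pveceph wrote on your hosts:

    Code:
    # /etc/apt/sources.list.d/ceph.list  (assumed path) -- point it at the hammer repo
    deb http://ceph.com/debian-hammer wheezy main

    # then, on each node:
    apt-get update && apt-get dist-upgrade
    # restart mons first, then OSDs, one node at a time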
     
  15. miha_r

    miha_r New Member

    Joined:
    May 1, 2014
    Messages:
    20
    Likes Received:
    0
    I upgraded my 4 nodes this way without any problem.
    The next upgrade will be to Debian 8 ;)
     