Ceph 0.94 (Hammer) - anyone cared to update?

ScOut3R

Hi,

I'm running the latest Proxmox 3.4 from the subscription repo, backed by a Ceph cluster running Giant. Hammer was recently released and I'm planning to upgrade my Ceph cluster, but I'd like to ask around first: is anyone running Proxmox with Hammer already?
 
Hi,
there are some important issues in 0.94 which are solved in 0.94.1 (released yesterday evening).
I will update our cluster soon... but I want to hear some experiences on the mailing list first.

Udo
 
Hi,

yes, I was careless, I meant 0.94.1. :) I would like to update because of the CRUSH map fix which isn't in Giant, and I think it's affecting my cluster. I'm just afraid to upgrade because I'm not sure how the KVM stack will handle the new version, and I don't have a test infrastructure at hand to try it out.
 
Hi all,

What is your experience with the updated Ceph cluster? Is Ceph Hammer in general faster, or does it have less latency with VMs compared to Ceph Giant? Did you notice any differences?
 
I updated my 8 node cluster today without any trouble. It was seamless and relatively fast. Now my cluster is moving data to compensate for the CRUSH bug. :) I don't see any performance improvements yet, but the CRUSH bugfix is a big win for us because we have different OSD (HDD) sizes and the weights weren't calculated correctly. I expect to see some improvement from this, though.
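For anyone wanting to enable the same fix: in Hammer the corrected straw bucket weight calculation is controlled by the straw_calc_version CRUSH tunable. A sketch of the steps (command names as I understand them from the Hammer notes; verify against the release notes for your exact version before running, since changing the tunable triggers data movement):

```shell
# Inspect the current CRUSH tunables; straw_calc_version 0 is the
# old, buggy weight calculation
ceph osd crush show-tunables

# Switch to the fixed straw weight calculation
# (this will start a rebalance across affected buckets!)
ceph osd crush set-tunable straw_calc_version 1

# Watch the resulting recovery traffic until the cluster is HEALTH_OK again
ceph -w
```

On our cluster the rebalance touched roughly a third of the data, so plan for several hours of recovery I/O.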
 
Hi,
please report on any improvements and on how long your rebuild takes.

Udo
 

For the sake of clarity: I am running 1 OSD per HDD and have two different node setups, one with 6 OSDs of 3 TB each and one with 4 OSDs of 1 TB each. I have 3 of the 3 TB nodes (18 x 3 TB OSDs in total) and 5 of the 1 TB nodes (20 x 1 TB OSDs in total).

Originally the cluster ran off the 3 TB nodes, with each OSD weighted 2.7 and each whole node at 16.4. When adding the other 5 nodes, I intended to increase the new OSDs' weight in 0.1 steps to control the load distribution; I was running Firefly when this addition occurred. Surprisingly, at 0.1 weight the new OSDs' used capacity ranged from 20% to 40%, which was quite odd! I thought it was because the PGs were too big, but soon Sage introduced a fix in Firefly for the CRUSH weight bug. Sadly, I was running Giant by then, and Giant never received this bugfix.

Anyway, after upgrading to Hammer and enabling the CRUSH fix, 1/3 of the cluster started to rebalance. This process finished in about 12 hours. After that I started to increment the 1 TB OSDs' weight, and the PG distribution seemed normal. Now I am at 0.7 weight and the used capacity is between 55% and 75%. I assume the PGs are distributed more intelligently now, with the smaller OSDs running more PGs than before, and my cluster seems more responsive under high load.
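As a sanity check on the weights above: the usual Ceph convention is that an OSD's CRUSH weight equals its capacity in TiB, which is where 2.7 per 3 TB disk comes from. Plain arithmetic, no Ceph involved (the function name is just for illustration):

```python
def crush_weight(size_bytes: int) -> float:
    """Conventional CRUSH weight: device capacity in TiB, rounded to
    two decimals (the convention used when creating an OSD)."""
    return round(size_bytes / 2**40, 2)

# A marketing "3 TB" disk holds 3 * 10^12 bytes, i.e. about 2.73 TiB
w_3tb = crush_weight(3 * 10**12)   # -> 2.73 (shown as ~2.7 in the thread)
w_1tb = crush_weight(1 * 10**12)   # -> 0.91

# Node totals for the two setups described above
node_3tb = round(6 * w_3tb, 2)     # 6 x 3 TB OSDs -> 16.38 (~16.4)
node_1tb = round(4 * w_1tb, 2)     # 4 x 1 TB OSDs -> 3.64

print(w_3tb, w_1tb, node_3tb, node_1tb)
```

So a 1 TB OSD at a manually set weight of 0.7 is still slightly below its "natural" weight of 0.91, which is consistent with easing the new disks in gradually.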
 
When will the pveceph utility have the Hammer edition available for installation?
Code:
pveceph install -version hammer
400 Parameter verification failed.
version: value 'hammer' does not have a value in the enumeration 'dumpling, emperor, firefly, giant'
pveceph install  [OPTIONS]
 
It's already in git. The package will be uploaded in the next days.
 
Install Firefly with pveceph install and then upgrade immediately to Hammer (by changing the sources.list.d/ceph entry).

or just wait for the new pveceph packages (a few days).
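The manual route sketched as commands (repo URL and the ceph.list filename are my assumptions for a Proxmox 3.4 / Debian wheezy setup; adapt to whatever the pveceph-generated sources file on your node is actually called):

```shell
# 1. Install Firefly via the supported tool first
pveceph install -version firefly

# 2. Point the Ceph repository at Hammer instead of Firefly
#    (assumed path/URL; check your node's sources.list.d entry)
echo "deb http://ceph.com/debian-hammer wheezy main" \
    > /etc/apt/sources.list.d/ceph.list

# 3. Pull in the Hammer packages, then restart monitors before OSDs,
#    one node at a time, as the Ceph upgrade notes recommend
apt-get update && apt-get dist-upgrade
```
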
 
