Ceph Giant supported?

Please share your own experiences.
Is there a marked improvement compared with Firefly? (Is performance better?)
Is it ready for production?
Would you upgrade from Firefly to Giant now, or wait for the next release?

Thank you.
 

Hi,

Giant (& Firefly) are development releases, but they are considered stable enough for production (except CephFS, which is not yet recommended for production).

I haven't benchmarked it yet, but here are the changes from Firefly (there is a quick rados bench sketch after the list if you want to measure the RADOS gains yourself):

* RADOS Performance: a range of improvements have been made in the
OSD and client-side librados code that improve the throughput on
flash backends and improve parallelism and scaling on fast machines.
* CephFS: we have fixed a raft of bugs in CephFS and built some
basic journal recovery and diagnostic tools. Stability and
performance of single-MDS systems is vastly improved in Giant.
Although we do not yet recommend CephFS for production deployments,
we do encourage testing for non-critical workloads so that we can
better gauge the feature, usability, performance, and stability
gaps.
* Local Recovery Codes: the OSDs now support an erasure-coding scheme
that stores some additional data blocks to reduce the IO required to
recover from single OSD failures (see the profile sketch after this list).
* Degraded vs misplaced: the Ceph health reports from 'ceph -s' and
related commands now make a distinction between data that is
degraded (there are fewer than the desired number of copies) and
data that is misplaced (stored in the wrong location in the
cluster). The distinction is important because the latter does not
compromise data safety.
* Tiering improvements: we have made several improvements to the
cache tiering implementation that improve performance. Most
notably, objects are not promoted into the cache tier by a single
read; they must be found to be sufficiently hot before that happens
(a minimal setup sketch follows the list).
* Monitor performance: the monitors now perform writes to the local
data store asynchronously, improving overall responsiveness.
* Recovery tools: the ceph_objectstore_tool is greatly expanded to
allow manipulation of an individual OSD's data store for debugging
and repair purposes. This is most heavily used by our QA
infrastructure to exercise recovery code (an example invocation
follows the list).
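
If you want to measure the RADOS gains yourself, a quick rados bench run
against the same pool on Firefly and on Giant is the simplest comparison.
Just a sketch: the pool name testpool is an assumption, and the duration
and thread count are arbitrary.

  # write for 30 seconds with 16 concurrent ops; keep the objects for the read test
  rados bench -p testpool 30 write -t 16 --no-cleanup
  # sequential read of the objects written above
  rados bench -p testpool 30 seq -t 16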
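
For the Local Recovery Codes item, the feature is exposed through an
erasure-code profile using the lrc plugin. A minimal sketch, with a profile
name, pool name and pg count of my own choosing; check the Giant docs for
the exact parameter names before relying on it.

  # k data chunks, m coding chunks, plus a local parity chunk every l chunks
  ceph osd erasure-code-profile set lrcprofile plugin=lrc k=4 m=2 l=3 ruleset-failure-domain=host
  # create an erasure-coded pool that uses the profile (128 placement groups)
  ceph osd pool create lrcpool 128 128 erasure lrcprofile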
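
For the tiering item, a minimal writeback cache tier looks roughly like
this; coldpool and hotpool are assumed, already-existing pool names, and
the hit-set values are placeholders to tune (the hit sets are what the
tier uses to decide whether an object is hot enough to promote).

  # attach hotpool as a cache tier in front of coldpool
  ceph osd tier add coldpool hotpool
  ceph osd tier cache-mode hotpool writeback
  # route client I/O for coldpool through the cache tier
  ceph osd tier set-overlay coldpool hotpool
  # hit sets track which objects have been accessed recently
  ceph osd pool set hotpool hit_set_type bloom
  ceph osd pool set hotpool hit_set_count 4
  ceph osd pool set hotpool hit_set_period 1200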
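
For the recovery-tool item, an example invocation against a stopped OSD.
In Giant the binary is named ceph_objectstore_tool, it needs exclusive
access to the OSD's data store, and the paths, the pgid 2.5 and the
sysvinit service command below are assumptions for a default deployment.

  # stop the OSD first -- the tool cannot run against a live daemon
  service ceph stop osd.0
  # list the objects held by this OSD
  ceph_objectstore_tool --data-path /var/lib/ceph/osd/ceph-0 --journal-path /var/lib/ceph/osd/ceph-0/journal --op list
  # export a single placement group to a file for offline inspection or repair
  ceph_objectstore_tool --data-path /var/lib/ceph/osd/ceph-0 --journal-path /var/lib/ceph/osd/ceph-0/journal --op export --pgid 2.5 --file /tmp/pg2.5.export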

See http://ceph.com/uncategorized/v0-87-giant-released/

Next stable release should be 0.93.
 
Hi,

"Giant (& Firefly) are development releases"

That's wrong. Each named release (dumpling, emperor, firefly, giant, ...) is a stable release, but some of them are long-term support releases, on which Ceph Enterprise is based (dumpling, firefly, hammer, ...); with those you'll get updates for a longer time. For example, Giant will get 2 or 3 bugfix updates, and after that you need to upgrade to Hammer.

About Giant: for me, the biggest improvement is op thread sharding, which allows each OSD to use more cores. (Very useful with a full-SSD setup; a config sketch follows.)
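
A hedged sketch of that: the sharded op queue is tuned through OSD options
in ceph.conf. If I remember right the knobs are osd_op_num_shards and
osd_op_num_threads_per_shard, but treat the names and values below as
assumptions to verify against the Giant docs, and restart the OSDs after
changing them.

  # /etc/ceph/ceph.conf -- option names and values are assumptions, verify before use
  [osd]
      # number of independent op queues per OSD
      osd op num shards = 10
      # worker threads serving each shard
      osd op num threads per shard = 2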
 
