Does adding nodes with OSDs to a Ceph cluster improve response time / throughput?

On Ceph, are reads spread across OSDs to improve latency and read speed, or is throughput limited by the maximum a single OSD can deliver?

The more hosts and the more OSDs you have, the better your throughput will be. Ceph performance depends on many factors:

- CPU, memory
- Network (bandwidth, latency)
- Storage devices (device latency, device throughput)
- Pool setup (number of PGs, replica size; see the PG-sizing sketch below)
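
For the PG count, a common rule of thumb is (OSDs × 100) / replica size, rounded up to a power of two. A minimal Python sketch of that calculation; the inputs (12 OSDs, size 3) are made-up example values, not from this thread:

```python
# Rule-of-thumb PG count: (OSDs * 100) / replica_size, rounded up
# to the next power of two. 12 OSDs and size 3 are example values.

def recommended_pg_count(num_osds: int, replica_size: int,
                         target_pgs_per_osd: int = 100) -> int:
    raw = num_osds * target_pgs_per_osd / replica_size
    pg_count = 1
    while pg_count < raw:
        pg_count *= 2  # round up to a power of two
    return pg_count

print(recommended_pg_count(12, 3))  # 400 raw -> 512 PGs
```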

Ceph reads are done in parallel: your data is split into many objects, which are distributed across placement groups and therefore across many OSDs, so a large read is served by many OSDs at once. The more OSDs and the more PGs you have, the more widely the data is spread and the higher your aggregate read throughput. (By default a single object is read from its PG's primary OSD, so the parallelism comes from the data being split across many PGs rather than from the replicas themselves.)
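
A toy model of why this scales: assume a 4 GiB volume split into 4 MiB objects (the default RADOS object size for RBD), with each object served by one OSD and objects on different OSDs read in parallel. The 150 MiB/s per-OSD figure is an assumed example value:

```python
# Toy model of read throughput scaling with the number of OSDs
# holding the data, assuming perfect balancing.

OBJECT_SIZE_MIB = 4
VOLUME_SIZE_MIB = 4096
OSD_READ_MIB_S = 150  # assumed single-OSD read throughput

objects = VOLUME_SIZE_MIB // OBJECT_SIZE_MIB  # 1024 objects

for osds in (1, 4, 16):
    # Each OSD serves objects/osds objects; OSDs work in parallel.
    seconds = (objects / osds) * OBJECT_SIZE_MIB / OSD_READ_MIB_S
    print(f"{osds:2d} OSDs -> {VOLUME_SIZE_MIB / seconds:6.0f} MiB/s aggregate")
```

With 1, 4, and 16 OSDs this prints roughly 150, 600, and 2400 MiB/s, which is the point of the answer above: throughput is not capped by a single OSD.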
 
And do writes depend on only one OSD?

On Ceph writes with size 3 and min_size 2, the data is replicated before the ACK is sent to the client. From the client's point of view, writes are not parallel the way reads are; they happen in stages: the client writes the first copy to the placement group's primary OSD, and that primary OSD then forwards the write to the replica OSDs on other hosts. The client only gets its ACK once the replicas have acknowledged, so if one OSD in your cluster is slow or broken, it can slow down every write it is involved in.
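
A toy latency model of that write path, assuming the flow described above (client to primary, primary to both replicas in parallel, ACK after the slowest replica commits); all millisecond figures are made-up examples:

```python
# Toy latency model of a size-3 replicated write. The client is
# ACKed only after the slowest replica has committed, so one slow
# OSD dominates the total write latency.

def write_latency_ms(primary_ms: float, replica_ms: tuple[float, ...]) -> float:
    # Replication finishes when the slowest replica commits.
    return primary_ms + max(replica_ms)

print(write_latency_ms(1.0, (1.0, 1.0)))   # healthy cluster: 2.0 ms
print(write_latency_ms(1.0, (1.0, 25.0)))  # one slow OSD: 26.0 ms
```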


See https://www.youtube.com/watch?v=Obtcatu3bG4 (German webinar about Proxmox Ceph HCI).
 
