Does adding nodes with OSDs to a Ceph cluster improve response time / throughput?

On Ceph, are reads spread across OSDs to improve latency and read speed, or is the throughput limited by the maximum a single OSD can deliver?

The more hosts and the more OSDs you have, the better throughput you'll get. Ceph performance depends on lots of things (a rough bottleneck sketch follows the list):

- CPU, memory
- Network (bandwidth, latency)
- Storage devices (device latency, device throughput)
- Pool setup (number of PGs, replica size)
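To make the interplay of these factors concrete, here is a minimal back-of-the-envelope sketch (not Ceph code; all numbers are invented examples) estimating which factor becomes the bottleneck:

```python
# Rough bottleneck estimate for aggregate cluster throughput.
# All numbers below are hypothetical examples, not measurements.

NUM_OSDS = 12              # total OSDs across all hosts
OSD_THROUGHPUT_MBS = 500   # per-device sequential throughput (e.g. a SATA SSD)
NUM_HOSTS = 3
NET_PER_HOST_MBS = 1250    # one 10 Gbit/s link per host, in MB/s
REPLICA_SIZE = 3           # each write is stored three times

# Reads can be served from many OSDs in parallel ...
read_limit_devices = NUM_OSDS * OSD_THROUGHPUT_MBS
read_limit_network = NUM_HOSTS * NET_PER_HOST_MBS
print("read bottleneck :", min(read_limit_devices, read_limit_network), "MB/s")

# ... while every client write is amplified by the replica count,
# so the device and network budget is divided by REPLICA_SIZE.
write_limit_devices = NUM_OSDS * OSD_THROUGHPUT_MBS / REPLICA_SIZE
write_limit_network = NUM_HOSTS * NET_PER_HOST_MBS / REPLICA_SIZE
print("write bottleneck:", min(write_limit_devices, write_limit_network), "MB/s")
```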

Ceph reads are done in parallel; you can read from multiple sources at once. So the more replicas you have and the more OSDs your data is split across (the more PGs, the more the data is spread over multiple OSDs), the more throughput you'll get.
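As a toy illustration of why that scales (my own sketch with assumed numbers, not a benchmark): if the data is striped evenly over K OSDs that are read concurrently, the read time shrinks roughly with K:

```python
def read_time_seconds(total_mb: float, num_osds: int,
                      per_osd_mbs: float = 500.0) -> float:
    """Toy model: data is striped evenly over `num_osds` OSDs read in
    parallel; each OSD delivers `per_osd_mbs` MB/s (assumed value)."""
    chunk_mb = total_mb / num_osds
    return chunk_mb / per_osd_mbs

# Reading a hypothetical 4 GiB object with more and more OSDs involved:
for k in (1, 4, 8, 16):
    t = read_time_seconds(total_mb=4096, num_osds=k)
    print(f"{k:2d} OSDs -> {t:6.1f} s  (~{4096 / t:,.0f} MB/s aggregate)")
```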
 
And do writes depend on only one OSD?

For Ceph writes with SIZE 3 and MIN-SIZE 2, the data is written to multiple OSDs before the ACK is sent to the client. Ceph writes are not simultaneous; they happen one after another: the first copy goes to the primary OSD, and from there the primary OSD itself forwards the data to other OSDs on other hosts (twice, for the two further replicas). If one of the OSDs in your cluster is slow or broken and is involved in the write, it can slow down the whole process of writing data to the Ceph cluster.
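A simplified latency model of that write path (my own sketch, with invented latencies): the client writes to the primary, the primary forwards to the replica OSDs, and the ACK waits for the slowest of them, so a single slow OSD dominates the total:

```python
# Simplified model of a size=3 replicated write (assumed numbers).
# The client writes to the primary OSD; the primary forwards the data
# to two replica OSDs; the ACK is gated by the slowest replica.

def write_latency_ms(primary_ms: float, replica_ms: list[float]) -> float:
    return primary_ms + max(replica_ms)

healthy  = write_latency_ms(primary_ms=2.0, replica_ms=[2.0, 2.5])
degraded = write_latency_ms(primary_ms=2.0, replica_ms=[2.0, 80.0])  # one slow OSD

print(f"all OSDs healthy : {healthy:5.1f} ms")
print(f"one slow replica : {degraded:5.1f} ms")  # the slow OSD dominates
```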


See https://www.youtube.com/watch?v=Obtcatu3bG4 (a German webinar about Proxmox Ceph HCI).