Ceph Performance

ermanishchawla

Well-Known Member
Mar 23, 2020
I have set up a 4-node cluster with 12 OSDs (3 OSDs per server). In the performance benchmark report, the MTU size is given as 9200, but if I set the MTU to 9200, the ceph -s command hangs, and if I set it back to 1500, it works seamlessly.
Just curious whether there are any performance benefits to going to MTU 9200.
 
I have set up a 4-node cluster with 12 OSDs (3 OSDs per server). In the performance benchmark report, the MTU size is given as 9200, but if I set the MTU to 9200, the ceph -s command hangs, and if I set it back to 1500, it works seamlessly.
Which report do you mean? This one? -> https://www.proxmox.com/en/downloads/item/proxmox-ve-ceph-benchmark

Just curious whether there are any performance benefits to going to MTU 9200.
An MTU of 9000 (jumbo frames) will give Ceph a performance increase, since more payload can be transported with each packet.
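
As a side note on the hang at MTU 9200: one way to check whether jumbo frames actually make it across the network end-to-end is to send a non-fragmenting ping of the corresponding payload size between the Ceph nodes. This is only a generic Linux sketch; the interface name and address are placeholders, and the payload sizes assume the usual 28 bytes of IP/ICMP overhead on top of the MTU.

# Check the MTU currently configured on the Ceph interface (eth0 is a placeholder)
ip link show eth0

# Test a 9000-byte MTU path: 9000 - 28 bytes of IP/ICMP header = 8972-byte payload;
# -M do forbids fragmentation. Replace 10.0.0.2 with another Ceph node.
ping -M do -s 8972 -c 3 10.0.0.2

# For MTU 9200 the equivalent payload would be 9172 bytes.
ping -M do -s 9172 -c 3 10.0.0.2

If the larger ping fails while the smaller one succeeds, a switch or NIC in the path does not accept the bigger frames, which would explain ceph -s hanging at MTU 9200.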
 
I am seeing poor write performance, any pointers to the reason?

rados bench -p Test 60 write -b 4M -t 32 --no-cleanup

Total time run: 60.1912
Total writes made: 6083
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 404.245
Stddev Bandwidth: 95.555
Max bandwidth (MB/sec): 560
Min bandwidth (MB/sec): 88
Average IOPS: 101
Stddev IOPS: 23.8887
Max IOPS: 140
Min IOPS: 22
Average Latency(s): 0.316399
Stddev Latency(s): 0.237052
Max latency(s): 2.28625
Min latency(s): 0.0666188


Read performance is extremely good

rados bench -p Test 10 rand -t 32 --no-cleanup

Total time run: 10.1146
Total reads made: 3783
Read size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 1496.05
Average IOPS: 374
Stddev IOPS: 15.73
Max IOPS: 387
Min IOPS: 334
Average Latency(s): 0.0846853
Max latency(s): 0.456234
Min latency(s): 0.0131931
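
Since both runs above used --no-cleanup, the benchmark objects remain in the pool. A small follow-up, assuming the default benchmark_data object prefix that rados bench writes with:

# remove the leftover rados bench objects from the Test pool
rados -p Test cleanup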
 
Write performance will always be lower than read performance. Ceph clients write to only one OSD (the primary), and this OSD takes care of the replication. Only after all replicas have been written successfully does the primary return the ACK. In contrast to reads, where the OSDs send data in parallel.
[Diagram: write path through the primary OSD and its replicas, from the Ceph architecture documentation]
https://docs.ceph.com/docs/master/architecture/
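
Whether that replication costs a factor of two or three on the write path depends on the pool settings, which can be checked directly. A minimal sketch, assuming the Test pool from the benchmarks above:

# number of replicas kept for each object in the pool
ceph osd pool get Test size
# minimum number of replicas required for the pool to keep serving I/O
ceph osd pool get Test min_size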
 

Thanks buddy, this is what I was looking for. I was just trying to benchmark different drives to understand the pattern.
 
