[SOLVED] 5-Node x 10-OSD Ceph Cluster. SATA/SAS Speeds.

mtalundzic

New Member
Feb 5, 2025
Configuration is a 5 node cluster with each host running the same configuration:

PowerEdge R730
384GB RAM
2xIntel Xeon E5-2697 v3 @ 2.60GHz
H730 Perc (in HBA Mode)
2x Sandisk LB1606R 1.6TB SSDs (OS)

2x Sandisk LB1606R 1.6TB SSDs (Ceph) (2 OSDs / node)

Basically, the only oddball in the configuration is the first node: instead of LB1606Rs for Ceph, it has WD Enterprise SATA SSDs. What I'm trying to figure out is why the more expensive and supposedly more capable SAS disks pale in comparison to the SATA drives for Ceph. They are 15x slower!

osd.0 and osd.1 are the two SATA SSDs; the rest are the aforementioned SAS SSDs. Here's a benchmark running ceph tell osd.* bench on all OSDs.

osd.0: {
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 2.483143321,
"bytes_per_sec": 432412344.03159124,
"iops": 103.09513664998799
}
osd.1: {
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 2.4157855939999999,
"bytes_per_sec": 444469006.96270978,
"iops": 105.96966909473176
}
osd.2: {
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 34.047352431,
"bytes_per_sec": 31536720.106975533,
"iops": 7.5189399974287827
}
osd.3: {
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 33.833937822000003,
"bytes_per_sec": 31735644.536824081,
"iops": 7.566367277341862
}
osd.4: {
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 44.295657744000003,
"bytes_per_sec": 24240340.44613418,
"iops": 5.7793475261054468
}
osd.5: {
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 12.622314918000001,
"bytes_per_sec": 85066949.365111694,
"iops": 20.281541196134494
}
osd.6: {
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 38.823714494999997,
"bytes_per_sec": 27656854.527360704,
"iops": 6.5939079588319549
}
osd.7: {
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 34.999426978000002,
"bytes_per_sec": 30678840.104294691,
"iops": 7.3144054661499718
}
osd.8: {
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 26.43821982,
"bytes_per_sec": 40613242.166468985,
"iops": 9.6829514900371993
}
osd.9: {
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 33.150015410999998,
"bytes_per_sec": 32390386.872752577,
"iops": 7.7224700147515719
}
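
For anyone who wants to repeat this on their own cluster, here's a quick way to run the same bench on every OSD and pull out just the throughput. This is only a minimal sketch, assuming jq is installed and it's run from a node with the admin keyring; the default bench writes 1 GiB in 4 MiB blocks, which is what the output above shows.

# run the default 1 GiB / 4 MiB-block bench on each OSD and print MB/s
for id in $(ceph osd ls); do
    mbps=$(ceph tell osd.$id bench | jq -r '.bytes_per_sec / 1000000 | round')
    echo "osd.$id: ${mbps} MB/s"
done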
 
Answered my own question with a good old-fashioned SSD test. Basically, I removed two of the Sandisk Enterprise SAS SSDs and replaced them with two WD Enterprise SATA SSDs.

root@proxmox-05:~# ceph tell osd.8 bench
{
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 2.2974503569999998,
"bytes_per_sec": 467362361.37963265,
"iops": 111.42787012568299
}
root@proxmox-05:~# ceph tell osd.9 bench
{
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 2.2586309309999999,
"bytes_per_sec": 475394987.84983212,
"iops": 113.34299751516154
}

I am once again getting a solid ~450 MB/s on the SATA SSDs vs. ~27 MB/s with the SAS SSDs. Win-win, as they are faster AND cheaper!
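
If you want to rule Ceph out entirely and hit the raw device, an fio run that mimics the same workload (1 GiB of sequential 4 MiB direct writes) looks roughly like the line below. This is only a sketch: /dev/sdX is a placeholder for the OSD's backing disk, and writing straight to the device is destructive, so only run it against a drive that has no OSD or data on it.

fio --name=raw-seq-write --filename=/dev/sdX --ioengine=libaio --direct=1 --rw=write --bs=4M --iodepth=1 --numjobs=1 --size=1G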

Moral of the story: if you have a Dell R7x0 coupled with a PERC H730 in HBA mode, use enterprise SATA SSDs instead of enterprise SAS SSDs for your Ceph OSDs to get maximum throughput and IOPS.