The configuration is a 5-node cluster, with each host running the same hardware:
PowerEdge R730
384GB RAM
2xIntel Xeon E5-2697 v3 @ 2.60GHz
H730 Perc (in HBA Mode)
2x Sandisk LB1606R 1.6TB SSDs (OS)
2x Sandisk LB1606R 1.6TB SSDs (CEPH) (2 OSDs / node)
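In case it helps with following the OSD numbering below, here is a quick way to double-check which physical disk backs which OSD (assuming the OSDs were deployed with ceph-volume; output omitted):

# Show the raw disks as the H730 in HBA mode presents them to the OS
lsblk -d -o NAME,MODEL,TRAN,ROTA,SIZE

# On the node hosting the OSD, map each OSD to its backing device
ceph-volume lvm list

# Or query the cluster for a single OSD, e.g. osd.0
ceph osd metadata 0 | grep -E '"devices"|"osd_objectstore"'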
Basically, the only odd one out in the configuration is the first node: instead of LB1606Rs for CEPH, it has WD Enterprise SATA SSDs. What I'm trying to figure out is why the more expensive and way more capable SAS disks pale in performance compared to the SATA drives for CEPH. They are about 15x slower!
osd.0 and osd.1 are the two SATA SSDs; the rest are the aforementioned SAS SSDs. Here's a benchmark running
ceph tell osd.* bench
on all OSDs.
osd.0: {
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 2.483143321,
"bytes_per_sec": 432412344.03159124,
"iops": 103.09513664998799
}
osd.1: {
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 2.4157855939999999,
"bytes_per_sec": 444469006.96270978,
"iops": 105.96966909473176
}
osd.2: {
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 34.047352431,
"bytes_per_sec": 31536720.106975533,
"iops": 7.5189399974287827
}
osd.3: {
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 33.833937822000003,
"bytes_per_sec": 31735644.536824081,
"iops": 7.566367277341862
}
osd.4: {
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 44.295657744000003,
"bytes_per_sec": 24240340.44613418,
"iops": 5.7793475261054468
}
osd.5: {
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 12.622314918000001,
"bytes_per_sec": 85066949.365111694,
"iops": 20.281541196134494
}
osd.6: {
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 38.823714494999997,
"bytes_per_sec": 27656854.527360704,
"iops": 6.5939079588319549
}
osd.7: {
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 34.999426978000002,
"bytes_per_sec": 30678840.104294691,
"iops": 7.3144054661499718
}
osd.8: {
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 26.43821982,
"bytes_per_sec": 40613242.166468985,
"iops": 9.6829514900371993
}
osd.9: {
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 33.150015410999998,
"bytes_per_sec": 32390386.872752577,
"iops": 7.7224700147515719
}