Does a RAID card in JBOD mode make any difference in Ceph (or any other storage)? From running some benchmarks on my Ceph OSDs, it appears that it does. Take a look at the image below:
This is a 6-node Ceph cluster with 4 OSDs in each node. Nodes 17, 18, and 19 have a cacheless RAID card driving each OSD. Nodes 20, 21, and 22 have no RAID card; all OSDs are connected directly to the motherboard SATA ports. The write performance difference is noticeable. The whole cluster also got slower after adding the 5th and 6th nodes. All nodes have identical motherboards and CPUs, so the only variable is the RAID card.
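One way to narrow this down is to take Ceph out of the picture and time raw sequential writes on each node's disks, then compare against the cluster-level numbers from `rados bench` on a test pool. Below is a minimal per-node sketch using `dd`; the scratch-file path is an assumption, so point it at a file on the disk you actually want to measure (not the OS drive):

```shell
#!/bin/sh
# Rough single-disk sequential write check (a sketch, not a substitute for
# rados bench or fio). TESTFILE is a placeholder path: set it to a file on
# the OSD's data disk to compare RAID-card nodes against direct-SATA nodes.
TESTFILE="${TESTFILE:-/tmp/ceph-write-test.bin}"

# Write 64 MiB and force it to stable storage with fdatasync, so the page
# cache doesn't hide the difference between the two disk paths.
dd if=/dev/zero of="$TESTFILE" bs=4M count=16 conv=fdatasync 2>&1 | tail -n 1

rm -f "$TESTFILE"
```

Running this on one node from each group gives a raw-disk baseline; if the gap shows up here too, the RAID card's I/O path (queue depth, firmware, link negotiation) is the likely cause rather than anything in Ceph itself.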
I ordered some RAID cards to install in nodes 20, 21, and 22, and will run the same benchmark to see how the numbers change.
Any thoughts?