raid cards for ceph

RobFantini

Hello
From other threads and links,* I've read that a RAID card used with Ceph should support 'full JBOD' or 'a pass-through mode'.

We have a bunch of 3ware/LSI cards that have worked well for us in the past, and we want to use them in Ceph clusters.
So this thread is to discuss which cards are OK to use with Ceph, and to post test results. Our need is for as close to 100% uptime as possible. Throughput needs to be decent, with the card not interfering at all with data input or retrieval. We have at most 100 users, mostly doing CLI-type work.

Which test should be used to measure the sluggishness users feel? Is that called latency?


*http://ceph.com/community/ceph-performance-part-1-disk-controller-write-throughput/
*http://forum.proxmox.com/threads/18552-ceph-performance-and-latency
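
On the latency question: per-I/O latency at low queue depth is probably what corresponds to "user feel". A minimal sketch of such a test with fio, assuming a spare disk (/dev/sdX is a placeholder, and the test overwrites whatever is on it):
Code:
# 4k random writes at queue depth 1; the completion-latency (clat)
# averages and percentiles in the report are the numbers to watch.
# WARNING: destructive -- /dev/sdX must be a blank test disk.
fio --name=latency-test --filename=/dev/sdX --direct=1 \
    --ioengine=libaio --rw=randwrite --bs=4k --iodepth=1 \
    --numjobs=1 --runtime=60 --time_based --group_reporting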
 
From
Code:
lspci|grep RAID
we have 5 of these cards:
Code:
3ware Inc 9650SE SATA-II RAID PCIe (rev 01)

Per this datasheet: http://www.lsi.com/downloads/Public...s Common Files/LSI_3ware-9650SE_PB_072309.pdf

the card has 'ATA pass-through mode support'.

So we'll use these for now and test later.
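
If the pass-through mode isn't exposed directly, single-drive units may be the equivalent on these cards. A sketch with tw_cli, assuming the controller shows up as /c0 (the port and unit numbers below are placeholders; 'tw_cli /c0 show' lists the real ones):
Code:
# list controllers, then the units and ports on controller 0
tw_cli show
tw_cli /c0 show

# export the drive on port 0 as its own single-drive unit
# (repeat for each port)
tw_cli /c0 add type=single disk=0

# enable the write cache on the new unit
tw_cli /c0/u0 set cache=on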

TBD: test and post results.

What test should be used?
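
For cluster-level numbers once the OSDs are running, rados bench reports throughput plus average and max latency. A sketch, assuming a test pool named 'testpool':
Code:
# 60 seconds of writes; --no-cleanup keeps the objects so the
# read test below has something to read
rados -p testpool bench 60 write --no-cleanup

# sequential reads of the objects written above
rados -p testpool bench 60 seq

# remove the benchmark objects when done
rados -p testpool cleanup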

Or are those cards not worth using, and if so, why?

PS: I do not mind purchasing newer cards if that makes sense.
 
Re: raid cards for ceph: PERC H710 Mini

I have 2 Dell systems with PERC H710 Mini cards. Those are high-end cards, but they do not support JBOD; they support RAID 0, 1, 5, 6, 10, 50, and 60.

lspci shows them as:
Code:
LSI Logic / Symbios Logic MegaRAID SAS 2208 [Thunderbolt] (rev 05)

The firmware Dell uses on the card does not support JBOD.

My question: how can this card best be used for Ceph? Or should it not be used at all?
 
Re: raid cards for ceph: PERC H710 Mini

I'll also post these questions to the ceph-users list. That is probably where they should have gone in the first place.
 
Re: raid cards for ceph: PERC H710 Mini

I posted to ceph-users and got 3 responses in 5 minutes.

'We just ended up creating a bunch of single disk RAID-0 units, since there was no jbod option available.'

'In my test cluster in systems with similar RAID cards, I create single-disk RAID-0 volumes.

That does the trick.'


'Probably a single-disk R0 configuration with writeback cache is the best possible option.'
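
Following those answers, a sketch of creating per-disk RAID-0 units on the H710 with MegaCli (the binary may be MegaCli64 depending on the package; the adapter number and the [32:0] enclosure:slot ID below are placeholders):
Code:
# find each disk's enclosure:slot ID
megacli -PDList -a0 | egrep 'Enclosure Device|Slot Number'

# one command per disk: a single-disk RAID-0 logical drive
# with write-back cache, as suggested above
megacli -CfgLdAdd -r0 [32:0] WB RA Direct NoCachedBadBBU -a0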


So I'll post RAID card questions there.