Opinion about hardware

Fabio Araujo

New Member
Aug 9, 2016
Hi there.

I have been studying how to build a 3-node cluster with Ceph, and I saw an article on Ceph's page about using a RAID controller: https://ceph.com/community/ceph-performance-part-1-disk-controller-write-throughput/

Proxmox recommends not using a RAID controller, but it is difficult and expensive to find a motherboard with 10 SATA 6Gb/s ports. So if I use the following configuration, will the performance be good? (I have sketched some rough numbers after the list.)

  • Motherboard: Intel or Supermicro
  • Dual Xeon E5-2600 six-core or higher
  • 64 GB of DDR3 ECC RAM
  • Chenbro chassis with 8 bays
  • RAID controller (LSI 9207 or higher) in JBOD mode
  • 4x 4 TB SAS 7200 RPM to create a slower, cheaper pool
  • 4x 512 GB enterprise SSD to create a faster, more expensive pool
  • 2x 240 GB enterprise SSD connected to the mainboard, one for the system and the other for journals
  • NIC: Intel X520-DA2 10GbE for the Ceph network
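
Here is the rough sketch (a back-of-envelope estimate in Python; the replication factor, journal penalty and per-disk speeds are my own assumptions, not measurements):

```python
# Rough capacity/throughput estimate for the proposed 3-node cluster.
# All figures below are assumptions, not measurements.

NODES = 3
REPLICATION = 3           # assumed default Ceph pool size
JOURNAL_PENALTY = 1.0     # 1.0 = journal on a separate SSD, 2.0 = journal on the data disk

# Slow pool: 4x 4 TB SAS 7200 RPM per node (~150 MB/s sequential each, assumed)
sas_disks_per_node, sas_tb, sas_mbps = 4, 4, 150
# Fast pool: 4x 512 GB enterprise SSD per node (~450 MB/s sequential each, assumed)
ssd_disks_per_node, ssd_gb, ssd_mbps = 4, 512, 450

def usable_tb(raw_tb):
    """Usable space after replication (ignores filesystem and Ceph overhead)."""
    return raw_tb / REPLICATION

def client_write_mbps(disks, per_disk_mbps):
    """Aggregate client write throughput: every client write is stored
    REPLICATION times, and may hit each OSD twice (journal + data)."""
    return disks * per_disk_mbps / (REPLICATION * JOURNAL_PENALTY)

raw_sas_tb = NODES * sas_disks_per_node * sas_tb
raw_ssd_tb = NODES * ssd_disks_per_node * ssd_gb / 1000

print(f"SAS pool: {raw_sas_tb} TB raw, ~{usable_tb(raw_sas_tb):.1f} TB usable, "
      f"~{client_write_mbps(NODES * sas_disks_per_node, sas_mbps):.0f} MB/s writes")
print(f"SSD pool: {raw_ssd_tb:.1f} TB raw, ~{usable_tb(raw_ssd_tb):.1f} TB usable, "
      f"~{client_write_mbps(NODES * ssd_disks_per_node, ssd_mbps):.0f} MB/s writes")
print("10GbE ceiling per node: ~1250 MB/s")
```

One thing this sketch ignores: a single journal SSD in front of 8 OSDs may itself become the write bottleneck.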

I would appreciate your opinion on this.

Thanks..
 
Looks like the 9207 is a proper HBA, so that's perfect for the job. There appears to be one review of that card mentioning firmware version 20 and Linux software RAID, so if you're using mdadm, watch out for that.

Other than that, this looks like one sexy setup.
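
If you want to see which firmware the card is running before you rely on it, something along these lines should do it (a sketch wrapping LSI's sas2flash tool from Python; it assumes sas2flash is installed and is run as root, and the output format is an assumption on my part):

```python
# Print the firmware version of any LSI SAS2 HBA reported by sas2flash.
# Assumes LSI's sas2flash utility is installed and this runs as root.
import re
import subprocess

out = subprocess.run(["sas2flash", "-listall"],
                     capture_output=True, text=True, check=True).stdout
print(out)

# The listing contains version strings such as "20.00.07.00";
# flag firmware 20 builds, which are the ones mentioned in the mdadm reports.
for version in re.findall(r"\b(\d{2})\.\d{2}\.\d{2}\.\d{2}\b", out):
    if version == "20":
        print("Firmware 20 build detected - check the mdadm reports before trusting it.")
```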
 
OK, I'll look at this controller. I have a Fujitsu server; if I use the disks in RAID 0, is that a bad idea?

Many thanks.
 
Yes, RAID 0 is still a RAID.
Also, JBOD is a RAID.

I read on Ceph's page that with RAID 0 used one disk per volume the performance is good, so I ran a test: I have 5 SAS 76 GB 15k disks, created one VD for each drive, and the measured speed was 178 MB/s.
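
Thinking it through (my own back-of-envelope, assuming the 178 MB/s was per VD, the default 3x replication, and FileStore journals living on the same disks), that would give roughly:

```python
# Translate the measured per-disk speed into an expected Ceph client
# write rate. Replication factor and journal placement are assumptions.
measured_per_disk_mbps = 178   # what I measured on one RAID0 VD (assumed per disk)
disks = 5
replication = 3                # assumed default pool size
journal_on_same_disk = True    # FileStore writes journal + data to the same disk

journal_penalty = 2.0 if journal_on_same_disk else 1.0
aggregate_raw = disks * measured_per_disk_mbps
client_writes = aggregate_raw / (replication * journal_penalty)

print(f"Raw aggregate:           {aggregate_raw:.0f} MB/s")
print(f"Expected client writes: ~{client_writes:.0f} MB/s")
```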

What do you think about this? My server is a Fujitsu RX300 S6 with a D2616 RAID controller.

Many thanks.
 
Our practice with Ceph teaches us: do not use a RAID, even if it looks good at the beginning.
You will get undefined problems like PGs out of sync, performance drops, etc.
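
If you do test it anyway, here is a minimal sketch of how one could keep an eye on exactly those symptoms (it just wraps the standard ceph CLI from Python and assumes an admin keyring is available on the node):

```python
# Periodically report cluster health and any stuck placement groups.
# Wraps the standard "ceph" CLI; assumes an admin keyring is available.
import subprocess
import time

def ceph(*args):
    return subprocess.run(["ceph", *args],
                          capture_output=True, text=True).stdout.strip()

while True:
    print(ceph("health", "detail"))
    stuck = ceph("pg", "dump_stuck", "unclean")
    if stuck:
        print("Stuck PGs:\n" + stuck)
    time.sleep(60)
```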
 