Ceph and Disks Attached to RAID Controllers

tutaencrypter

New Member
Oct 13, 2024
Hi all,

Multiple disks attached to a RAID controller on typical enterprise / pro-grade hardware such as a Gen10 DL380 ...

Is the workaround to use the individual disks in single-disk RAID 0 configs? And is this supported, please?

If this is not supported, what is the typical configuration for SDS in scenarios with kit that has professional RAID controllers?

Thank you!
 
Hello @tutaencrypter,

The wiki at https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster mentions avoiding RAID. There is also a post from Proxmox staff explaining why to avoid any RAID: https://forum.proxmox.com/threads/proxmox-ceph-cluster-no-raid.83752/
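In practice, avoiding RAID means presenting the disks to the OS directly. As a rough sketch of what that can look like on a DL380 Gen10 (assuming an HPE Smart Array controller that supports HBA mode; the slot number is an assumption, check your own configuration first, and note that enabling HBA mode removes any existing logical drives):

```bash
# ssacli is HPE's Smart Storage Administrator CLI.
# Show the current controller configuration to find the right slot.
ssacli ctrl all show config

# Switch the controller to HBA (pass-through) mode so the OS sees raw
# disks instead of logical volumes. WARNING: this removes all logical
# drives on the controller and requires a reboot.
ssacli ctrl slot=0 modify hbamode=on
```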

Historically I have avoided RAID 0 whenever possible, as it tends to fail, and with a single disk failure in the array, data loss is imminent.

With Ceph you can add many disks (OSDs), so you increase redundancy and also gain space.
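For example, once the controller passes the disks through individually, each disk becomes its own OSD. A minimal sketch on a Proxmox node (the device names /dev/sdb and /dev/sdc are placeholders; check lsblk for yours):

```bash
# Create one Ceph OSD per physical disk.
pveceph osd create /dev/sdb
pveceph osd create /dev/sdc
```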

Theoretically, even if the solution works: a RAID 0 failure takes out one whole OSD (if we follow the one-OSD-per-disk rule), which means Ceph starts making another copy of all the data that was on that RAID 0, spread across the other OSDs / nodes (depending on how many nodes / OSDs you have). The result is extra synchronization traffic plus a loss of capacity equal to the whole RAID 0 pool. Once you bring the RAID 0 back up, the same synchronization overhead happens again. The whole solution becomes more unstable, more prone to failure, and you lose the benefits of Ceph.
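You can watch this re-replication happen with the standard Ceph status commands, for example:

```bash
# Cluster-wide view: during recovery you will see degraded/misplaced
# objects and recovery I/O competing with client traffic.
ceph -s

# Per-OSD view: a failed RAID 0 OSD shows as "down", and the whole
# capacity of that RAID 0 pool drops out of the cluster at once.
ceph osd tree
```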

If I may ask, what is your reason for using RAID 0? I would like to fully understand your situation.

Thank you

Lukas
 
Can multiple RAID 0 disks work?
Yes, and no.

The problem with this approach is that single-disk RAID 0 LUNs are still controller-bridged devices, which means the LBAs presented to the host are not the actual disk LBAs, and I/O is subject to the controller's caching logic even if you turn it off or set it to write-through. It will work, but it would be subject to various problems stemming from multiple layers of cache, write amplification, false faults, etc. I know this is common practice with VMware, but it's not fault-free in that environment either ;)
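You can see the bridging for yourself. A quick sketch with smartmontools (the device name and disk index are assumptions for an HPE Smart Array setup):

```bash
# Query the device the host sees. On a controller-bridged single-disk
# RAID 0 LUN this typically reports the controller's logical volume
# (e.g. product "LOGICAL VOLUME" on HPE Smart Array), not the disk.
smartctl -i /dev/sda

# Reaching the physical disk behind an HPE controller needs a
# driver-specific passthrough flag; the disk index 0 is an assumption.
smartctl -i -d cciss,0 /dev/sda
```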
 
