Ceph on Raid0

hoinz
Aug 14, 2019
Dear Proxmox Team,

we have 3 Dell servers running Proxmox 6.0. Unfortunately, we encountered an issue when setting up Ceph OSDs on individual drives. The main problem is that the PERC H710 Mini adapter does not allow IT mode / JBOD passthrough of individual drives, so we are stuck putting each drive into its own RAID-1 or RAID-0 configuration.
This seems to stop us from creating OSDs on the hard drives, since we assume the hardware details of the drives are masked by the RAID controller. We are aware of the potential performance problems with intermediate RAID caches and differing block sizes, but would wish for a solution, since the drives themselves are individually addressed after all.

If not generally applicable, we could also patch the RAID checks locally. In that case, it would be really helpful to know where this check happens.
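Before patching anything, it may be worth trying the CLI path, which is an assumption on my part that it is less strict than the GUI checks; the function name below is my own and the device name is just an example. A hedged sketch:

```shell
# create_osd_on_vd: sketch of creating a Ceph OSD on a single-drive
# RAID-0 virtual disk from the CLI (hypothetical helper; assumes the
# CLI accepts the virtual disk even if the GUI refuses it - verify
# on a test node first).
create_osd_on_vd() {
    dev="$1"
    # refuse anything that is not a block device
    [ -b "$dev" ] || { echo "create_osd_on_vd: $dev is not a block device" >&2; return 1; }
    # destroy leftover signatures (wipes ALL data on $dev)
    ceph-volume lvm zap "$dev" --destroy
    # pveceph wraps ceph-volume on Proxmox VE 6
    pveceph osd create "$dev"
}

# example (destructive!):
# create_osd_on_vd /dev/sdb
```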

Thank you in advance!
 
I run my home and work lab clusters like this. I set RAID-0 for each drive, then quick-init each one. If you don't initialize them after setting them up in the PERC controller, they won't be recognized for use by Ceph.
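If a drive still isn't recognized after the quick init, wiping the leftover signatures from the OS side usually helps. A minimal sketch, assuming wipefs, sgdisk, and dd are available; the function name is mine and the device is an example:

```shell
# wipe_vd: clear old filesystem/RAID/LVM signatures from one PERC
# virtual disk so Ceph will accept it as an OSD.
# DESTROYS ALL DATA on the device passed as $1.
wipe_vd() {
    dev="$1"
    # refuse anything that is not a block device
    [ -b "$dev" ] || { echo "wipe_vd: $dev is not a block device" >&2; return 1; }
    # remove filesystem and RAID signatures
    wipefs --all "$dev"
    # zap GPT and MBR partition tables
    sgdisk --zap-all "$dev"
    # zero the first 200 MiB to clear stale Ceph/LVM metadata
    dd if=/dev/zero of="$dev" bs=1M count=200 conv=fsync
}

# example (destructive!):
# wipe_vd /dev/sdc
```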
 
For the more adventurous: H710 Minis can be flashed to IT mode, and it seems it is not too hard to achieve.
 
Can you please share how to do that?
We have Dell R720xd servers with a PERC H710 Mini, but there is no option to enable pass-through for disks! :(
 
It seems this is the guide:
https://fohdeesha.com/docs/perc/

There also seems to be video info:
https://www.youtube.com/watch?v=J82s_WYv3nU

I have not done this on Dell cards, but I have done it on Supermicro cards, which use the same family of LSI chips. That said, Supermicro cards are easy to flash since there is no limitation on which cards can be used, while Dell locks out cards whose PCI IDs it does not recognize as its own. The guides work around that problem.

NB! You should have physical access to the server: it is a good idea to remove the RAID battery, since it could interfere with the flashing process. In IT mode it will not be used anyway, and of course it will wear out less lying on a shelf than sitting in a working server.
 
Worked as expected, and the servers are now using Ceph in JBOD mode like they should - thanks again! :)
May I ask: if an HDD fails, how can you figure out which slot the failed HDD is in? We tested this solution, but once you replace the firmware with the IT version, the LEDs stop working, so when we had a failed disk we weren't able to figure out which one had to be replaced.
 
Hi!
As far as I remember, the LEDs were working! I sent those servers over to the client, so I no longer have the option to take a look at them! :(
 
Weird; as far as I know, if the IT firmware is installed the LEDs do not work. We tried on tens of DL360/380 Gen8/9 servers, same result. That's why I asked. Do you remember what generation those servers were?
 
DELL PowerEdge R720xd servers were used!
Seriously, I can't remember about the LEDs :(
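For anyone hitting the LED issue: one workaround is to map the failed device to its serial number and match that against the label printed on the drive caddy. A hedged sketch, assuming lsblk and smartmontools are installed; the function name is mine and the device is an example:

```shell
# disk_identity: print model and serial for one disk so a failed
# drive can be matched to the serial on its caddy label when the
# backplane LEDs no longer work in IT mode.
disk_identity() {
    dev="$1"
    # refuse anything that is not a block device
    [ -b "$dev" ] || { echo "disk_identity: $dev is not a block device" >&2; return 1; }
    # model/serial/size as the kernel reports them
    lsblk -d -n -o NAME,MODEL,SERIAL,SIZE "$dev"
    # smartmontools usually reports the serial even behind an HBA
    smartctl -i "$dev" | grep -i serial
}

# example:
# disk_identity /dev/sdb
```

Pull the drive whose serial matches, not the one you merely suspect.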
 