[SOLVED] Suggestions: P420i RAID controller with Ceph

silvered.dragon

Renowned Member
Nov 4, 2015
Hi to all,
I'm planning a 3-node Ceph cluster using HP Gen8 servers with the P420i controller.

This particular controller doesn't allow any JBOD or HBA configuration; the only way to use a single disk for building an OSD is to set up every disk as a single-disk RAID0 array. Disks not configured as RAID0 are not shown in the GUI or even by fdisk.

For now I have adopted the following configuration:

Every server has 7x600GB disks for OSDs (each disk is its own single-disk RAID0 array) and 2x146GB disks for Proxmox (these are in a RAID1 configuration).
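
For reference, this kind of layout can be built with the hpssacli tool; a rough sketch (controller slot and drive bay addresses are just examples, yours will differ):

Code:
# list the physical drives and note their bay addresses
hpssacli ctrl slot=0 pd all show

# system disks: one RAID1 logical drive from two bays
hpssacli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1

# OSD disks: one single-disk RAID0 logical drive per remaining bay
hpssacli ctrl slot=0 create type=ld drives=1I:1:3 raid=0
hpssacli ctrl slot=0 create type=ld drives=1I:1:4 raid=0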

For now it is working, but I'm worried that in case of a failure I will get unexpected errors (I think a single-disk RAID0 configuration is very different from a real JBOD).
Has anyone tried this controller, or does anyone have suggestions?
Many thanks
 
Yes, I know about this configuration, but then I would be unable to protect the Proxmox system with a RAID1 config. Or maybe you are suggesting I use a ZFS RAID1 for Proxmox, am I right?
 
If you want to go the ZFS way, disable the P420i completely and get an HBA (e.g. HP H220/H221)...
 
For Ceph, setting up the OSD disks like this is a big performance hit, as the RAID cache is still active and the controller may lie about the actual block size, among other things. A quick Google search tells me that it might be possible to put the controller into IT mode. Never tried it, so no guarantees.

https://hardforum.com/threads/hp-dl380p-gen8-p420i-controller-hbamode.1852528/
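
A quick way to check what the kernel actually sees through the controller (this only shows what is reported, not whether it matches the physical drives; /dev/sdb is just an example device):

Code:
# logical and physical sector size as reported for one logical drive
blockdev --getss --getpbsz /dev/sdb
cat /sys/block/sdb/queue/physical_block_size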
This is a big problem, because in HBA mode it seems I cannot boot from this controller. So, before purchasing a dedicated JBOD interface, I want to ask: if "Physical Write Cache" is disabled for the Smart Array (there is an option for this) in a single-disk RAID0 configuration, will I still have the block size problem?
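
The option I mean can also be toggled from the CLI, something like this (the slot number and logical drive id are just examples):

Code:
# disable the write cache on the physical drives behind the controller
hpssacli ctrl slot=0 modify drivewritecache=disable

# disable the controller cache (array accelerator) for a given logical drive
hpssacli ctrl slot=0 logicaldrive 2 modify arrayaccelerator=disable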
 
That is an assumption on my side; in the end you need to test (for sure, RAID0 is not the same as JBOD). Maybe you have some onboard SATA ports that you could use. On the other hand, an HBA is not that expensive either.
 
Yes, you are right, but testing is difficult because there are several circumstances we cannot fully verify, and I could find myself in trouble afterwards. An LSI card is about 150 euro, so not really expensive. Anyway, you make a good point about using the SATA interface, but I'm worried about the performance of SATA disks for the Proxmox system... or do you think the performance of this cluster will not be affected, since the VMs reside on Ceph's SAS disks?
 
If you use the onboard SATA ports for a "small" SSD RAID1 setup (ZFS RAID1), the performance should be good enough, as long as you host all VMs on Ceph or on a different storage than your PVE install disks. After all, most of the I/O on the system disks will be logging (Corosync, PVE services, Ceph, ...).
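
If you want to see that for yourself on a running node, iostat shows the write load that actually lands on the system disks (device names are examples):

Code:
# extended per-device statistics in MB, refreshed every 5 seconds
iostat -xm 5 sda sdb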
 
After 20 days of testing, in which I have tried everything to break the Ceph cluster, this is the configuration:
2x146GB 15k in hardware RAID1 for the system
6x600GB 15k in single-disk hardware RAID0 for Ceph's BlueStore OSDs

Everything works very well, no problems at all, and considering that I'm using non-SSD disks, the performance is good, as you can see in the attached pics. This is just an update for other P420i users.
[Attachments: IMG_20171012_180316.jpg, pvetest.JPG, pveseq.JPG]
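
For anyone who wants to run a comparable test on their own pool, a simple sketch with rados bench (the pool name and runtime are just examples):

Code:
# 60 second write benchmark, keep the objects for the read test
rados bench -p testpool 60 write --no-cleanup

# sequential read benchmark on the same objects, then clean up
rados bench -p testpool 60 seq
rados -p testpool cleanup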

A nice tip:
if you want to view the SMART status of the disks (temperature etc.) directly from the web GUI, you have to tell smartmontools the disk type (in this case cciss,0). To fix this, simply

Code:
nano /usr/share/perl5/PVE/Diskmanage.pm

locate

Code:
my $cmd = [$SMARTCTL, '-H'];
    push @$cmd, '-A', '-f', 'brief' if !$healthonly;
    push @$cmd, $disk;

and modify like this

Code:
my $cmd = [$SMARTCTL, '-H'];
    push @$cmd, '-A', '-f', 'brief' if !$healthonly;
    push @$cmd, '-d', 'cciss,0', $disk;
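
You can check from the shell that smartctl really talks to the disks with this device type before editing the file (device path and index are examples):

Code:
# health summary of the first physical disk behind the Smart Array
smartctl -H -d cciss,0 /dev/sda

# full attribute table, including temperature
smartctl -A -d cciss,0 /dev/sda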

Of course you can use other controller types here, like megaraid etc.
Here is a pic of the working web GUI:
[Attachment: smart.JPG]
 

Hello!

In the current Proxmox version 7.3 (with Ceph 17.2.5), does this solution still work?

I'm implementing an environment with the same scenario as yours; however, when creating the OSDs, Ceph says it's not possible to create them because the disks are "behind" a controller.
 
Sorry for the late answer. Anyway, it is still working after 7 years: I have never lost any data and it has been running 24/7 in a production environment. Proxmox is always updated to the latest possible version, but my Ceph is now 16.2.11. I don't remember this kind of error, so it's probably something that changed in Ceph 17. Regards
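
If someone hits that error, ceph-volume can usually tell why it refuses a device; a quick check (the device name is just an example):

Code:
# shows whether ceph-volume considers the device available and any reject reasons
ceph-volume inventory /dev/sdb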
 
