[SOLVED] The installer could not find a supported hard disk (Dell R630 / H730 PERC Mini)

tuathan

Member
May 23, 2020
I have tested out Proxmox (most recent release) using nested virtualization and now want to test it on a physical node: a Dell R630 server which has one SATA and two SAS drives installed. I created a bootable ISO on a USB drive (DD mode), booted from it and selected the install option, but I get the error that there are no supported hard disks. I'm not sure whether I need to configure the hardware (such as RAID) first, or how else to resolve this. I have seen a few posts which mention installing Proxmox on the R630. Thanks
 
Have you tried any other ISO installer (plain Debian for example) already?
 

No, but first I had a look at the H730 PERC Mini RAID controller settings and changed the controller from RAID to HBA mode. After rebooting the server, my SATA and two SAS disks were detected by the Proxmox installer! I then successfully installed Proxmox onto the SATA drive. Thanks.
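For what it's worth, once the controller is in HBA mode the drives show up as plain block devices, so from a live/rescue shell (or the installed system) something like the following is a quick way to confirm what the installer will see (the column list is just one possible choice):

lsblk -o NAME,SIZE,MODEL,TRAN,TYPE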
 
Hi,
We ran into more or less the same problem, now running a RAID 5 on an HP SAS controller in a non-HP machine... which means configuring it from the console, with no boot-time setup utility.
Once the RAID configuration was in place, Proxmox picked it right up.
Due to the initial problems, Proxmox is now sitting on a SATA drive on the onboard SATA controller (running on a standard motherboard).
 

We are planning a multi-node Ceph cluster (a storage pool with many SAS drives), so I didn't think the HW RAID would be necessary? (It might even be a bad idea, from what I've read.) What storage options are you using in Proxmox?
 
I'm not really into the ZFS or Btrfs thing, and prefer hardware RAID.
Agreed, hardware RAID makes things harder, but it offloads your processor(s), and from what I understand ZFS would also require you to switch off the write cache.
In our case we are working with a number of 'NAS' shelves, all SAS, all connected over fibre, while a second system will run with all disks attached to the server over a SAS fabric.
If you go 12G, most controllers cost money anyway, so the 9365-28i or 9361-16i pops up in my mind.
Now, I come from the SCSI times, and I keep thinking tight control over your disks is still preferable.
Can you scrub data on ZFS, or isn't that necessary?
Anyway, with a reasonably decent RAID controller you can still control a multitude of disks (the HP P420 happily takes at least 256 disks) with expanders.
Soon all our stuff will run on RAID 6 or RAID 60, with spares, unless I can be convinced of HBA (ZFS, etc.).
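(On the scrub question: ZFS does support scrubbing and it is recommended to run it periodically; assuming a pool named rpool, as on a default Proxmox ZFS install, zpool scrub rpool starts a scrub and zpool status rpool reports progress and any repaired errors.)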
 
Actually, the RAID controller can be of great help for Ceph, especially for BlueStore, which doesn't use the SSD for data, but for metadata only.
The H710/H730 Mini have 512MB of DDR3 cache, with a battery backup to protect that RAM from power failures.
1) In the RAID controller, make sure you are using the "Writeback" write-caching policy, and not "Writethrough". With Writeback, it behaves like this:
OS sends a write command -> RAID controller writes to DDR3 RAM -> RAID controller reports "complete" -> RAID controller writes from DDR3 RAM to the disk at its own pace.
2) In the RAID controller, make sure to disable "Read-Ahead"; it hurts random read I/O performance, because there is already the kernel's read-ahead. If you're using FileStore it's even worse: you will have three layers of read-ahead instead of two: kernel, OSD XFS (another kernel read-ahead), and RAID controller.
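For reference, on PERC controllers these policies can usually be inspected and changed from the running OS with Dell's perccli (or the equivalent Broadcom storcli). The controller and virtual-drive selectors below are placeholders, and the exact syntax may differ between tool/firmware versions:

perccli /c0/vall show all             # show current write-cache / read-ahead / I/O policies
perccli /c0/vall set wrcache=wb       # write-back caching (only sensible with a healthy BBU/NV cache)
perccli /c0/vall set rdcache=nora     # disable controller read-ahead
perccli /c0/vall set iopolicy=direct  # direct I/O instead of cached I/O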
 
I'm a little confused then, as I disabled RAID on the controller to install Proxmox (to get it to recognise my disks). Can I re-enable it? Or reinstall Proxmox with the RAID controller configured as you mentioned above? The way I was thinking about it, Ceph is effectively doing its own software version of this data redundancy across the disks/cluster. Maybe I jumped the gun on "resolving" this one.
 
I can recommend this place for help with hardware RAID controllers: https://kb.open-e.com/19/
There are many articles about various problems and recommended settings for optimal performance in different use cases.
 
We are planning a multi-node Ceph cluster (a storage pool with many SAS drives), so I didn't think the HW RAID would be necessary?

Specific needs may vary, but generally your idea is correct and hardware RAID is not necessary for Ceph.

Proxmox VE Administration Guide:
RAID controllers are not designed for the Ceph use case and may complicate things and sometimes even reduce performance (...)

Red Hat Ceph Storage Hardware Guide:
(...) RAID is an unnecessary expense. Additionally, a degraded RAID will have a negative impact on performance.
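In practice that just means handing each OSD disk to Proxmox as a plain block device (HBA/pass-through mode, or a single-disk RAID-0 if the controller can't do that) and creating the OSD on it directly, for example (device names are placeholders, options as in recent PVE releases):

pveceph osd create /dev/sdb
pveceph osd create /dev/sdc --db_dev /dev/nvme0n1   # optionally place the BlueStore DB on a faster device

The same can be done from the web UI under the node's Ceph -> OSD panel.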
 
This is interesting. But RAID in general is a big subject, with many types of hardware and settings.
Also, I believe the links are more relevant to FileStore, which already does double writes when used with a journal, so there is no need for an additional write-caching layer.
This is not the situation with BlueStore. As far as I know, with BlueStore only the metadata goes to the SSD. The data is written directly to the block device, with no journaling (unlike with FileStore).

I personally use the H710 with HDDs (and an SSD for the DB).
With the H710 in RAID mode, the only way to make the disks appear to the system is to configure each of them as a standalone RAID-0. However, that doesn't mean it is running in degraded mode.
Also, I use these settings to interfere less with Ceph and to take advantage of the write buffering of the RAID controller (see the commands sketched below):
- I/O Mode: Direct I/O (don't use Cached I/O here)
- Read-Ahead: Disabled (it hurts performance; stay transparent)
- Write caching policy: Writeback (buffers writes in the DDR3 RAM)

I haven't had any problems so far with this setup, and it works well, but that's my experience and others' may vary.
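As a rough sketch of that layout with perccli/storcli (controller, enclosure and slot IDs are placeholders, and syntax varies a bit between versions), each HDD becomes its own single-drive RAID-0 virtual disk:

perccli /c0 add vd type=raid0 drives=32:0   # repeat per physical disk (enclosure:slot)

and then the wrcache=wb, rdcache=nora and iopolicy=direct settings mentioned earlier in the thread are applied to each resulting virtual disk (or to /vall).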

 
I'm a little confused then, as I disabled RAID on the controller to install Proxmox (to get it to recognise my disks). Can I re-enable it? Or reinstall Proxmox with the RAID controller configured as you mentioned above? The way I was thinking about it, Ceph is effectively doing its own software version of this data redundancy across the disks/cluster. Maybe I jumped the gun on "resolving" this one.
Ceph indeed does its thing, yet, as I understand it, Ceph uses ZFS, and ZFS eats CPU power.
Looking into ZFS, I ended up in a discussion between hardware RAID guys and a ZFS guy.
The RAID guys threw on the table that you need one CPU core per disk, and the ZFS guy did not refute that.
Then, if that is true, what machine do you need to be able to run a ZFS node with 240 disks? I guess you might think dual big EPYCs?
I do see the plus points of ZFS anyway, yet I also see a lot of problems that may arise with the ZFS approach, as you create a whole chain of hardware that can fail.
Hardware RAID can have a failing controller? That's hell, true, yet how hard is it to get one more RAID card, so you clearly have the same availability?

That's my 2 cents...
 
