Install Proxmox VE 8 on HP ProLiant DL380e G8 with HP P420 Smart Array RAID Controller

gonzalo.garcia.ncp

Sep 5, 2023
Hello!

I'm trying to install Proxmox VE 8 on my HP ProLiant DL380e G8 14LFF with an HP P420 Smart Array RAID Controller, but the installer doesn't recognize the RAID controller.

I'm getting the error "No hard disk found!". Can you help me? Any idea what I might be doing wrong?

Thank you,
Regards.
 
Hi,
You need to create arrays on the P420 RAID controller and enable boot from them. But if you want to use ZFS, it's not recommended to use hardware RAID. You can configure the P420 as an HBA to pass the disks through directly. I'm using it that way on the exact same server :)
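
Roughly like this from a Linux shell with HPE's ssacli tool installed (just a sketch - slot 0 is an assumption, check your own output, and the P420 needs reasonably recent firmware for HBA mode):

# list controllers and their slot numbers (slot 0 below is an assumption)
ssacli ctrl all show
# switch the controller into HBA mode so the OS sees the raw disks; reboot afterwards
ssacli ctrl slot=0 modify hbamode=on forced

If I remember right you can also toggle HBA mode from the offline Smart Storage Administrator if you prefer a GUI.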

Regards,
Daniel
 
Hi,
I have configured the RAID controller as an HBA and set up RAIDZ-2 in the Proxmox installer. The installation completed successfully, but upon restarting the server, the following error appears:

[screenshot attachment showing the boot error]

Can you help me?

Thank you,
Regards.
 
Hey-o!

Wanted to chime in on this. I have had several DL380p G8 systems (mostly the 6-bay SFF) with the P420i controllers. The easiest way to use these is to remove the DVD/CD drive and use one of the laptop CD-to-HDD caddy adapters (I use these) with a cheap, small SATA SSD. Then you can use the disks on the P420 however you want (pass them through, set up a datastore, etc.).

Since you're using the LFF model, it doesn't have a CD-ROM slot, but check whether there's a SATA header on the board - you should be able to get an adapter (something like this) to connect and power a SATA SSD internally, separate from the P420.

Hope this helps!

EDIT: I pulled up the board diagram (https://content.etilize.com/User-Manual/1022760398.pdf) and it looks like there's a SATA port on the board - you should be able to do what I posted above! You'll have to futz around in the BIOS to configure boot order, but it will work! Oh, and you're right, those systems are BIOS only, no UEFI! Crappy HP! :p
 
But if you want to use ZFS, it's not recommended to use hardware RAID.
This is something that makes no sense. If it is a true HW RAID, the underlying OS, and thus the filesystem, does not know that the block device is actually a RAID array. So why would there be an issue with ZFS? Unless you're talking about possible performance issues, which should also not occur, since the HW RAID controller takes care of the RAID operations, so there are no extra CPU cycles or IO waits in the OS.

However, I have noticed that pretty much all consumer HW RAID controllers are in fact not HW RAID controllers: as soon as the controller requires a driver in the OS, it is not a true HW RAID.
Additionally, it's sometimes also hard to find a proper HW RAID controller for true server HW.

I think the market gave up on HW RAID controllers. Same with ECC RAM. It is extremely complicated to buy consumer HW that supports ECC RAM, because the combination of motherboard, CPU, and of course RAM must match and work together.

IMO non-ECC RAM should be forbidden, and all manufacturers and HW should support and even require ECC RAM. Unfortunately, that will never happen.
 
This is something that makes no sense. If it is a true HW RAID, the underlying OS, and thus the filesystem, does not know that the block device is actually a RAID array. So why would there be an issue with ZFS? Unless you're talking about possible performance issues, which should also not occur, since the HW RAID controller takes care of the RAID operations, so there are no extra CPU cycles or IO waits in the OS.
One of the many features of ZFS is to detect and heal corrupt blocks (disk error, bad cable, etc.) and to prevent silent data corruption - if ZFS builds the RAID. You will not have this on hardware RAID unless you set copies to something other than 1. Therefore it makes sense NOT to use hardware RAID below ZFS - and the inventors of ZFS said so as well.
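
For reference, copies is a per-dataset ZFS property; a minimal sketch (the dataset name rpool/data is just an example, and the setting only affects data written after the change):

# keep two copies of every block in this dataset
zfs set copies=2 rpool/data
# verify
zfs get copies rpool/data

Note that copies=2 only helps against bad blocks; it does not protect you if the whole device fails.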
 
You will not have this on hardware RAID unless you set copies to something other than 1.
Please explain why. ZFS does not know it is on a HW RAID. For ZFS it is the same as a single disk. ZFS allocates a page on the storage; that page is the same for ZFS whether or not a HW RAID is used. ZFS sees the page, not the storage.
If you built a RAID with ZFS on top of a HW RAID, you would basically be using two levels of RAID. Yet again, with true HW RAID the block storage is transparent, and so is the performance. I wouldn't suggest doing this, but there is no reason why you could not.

And if you use HW RAID, you can just add the disks (the block devices reported by the HW RAID, which are in fact RAID arrays) to a striped pool.
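
Something like this, just to illustrate (the device names /dev/sdb and /dev/sdc are assumptions for two logical drives exported by the HW RAID controller, and tank is an example pool name):

# stripe a pool across the block devices the HW RAID controller presents
zpool create tank /dev/sdb /dev/sdc
zpool status tank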
 
For ZFS it is the same as a single disk.
Yes, and a single bit error that ZFS reads back leads to corruption. That is the kind of system I was describing. Who would want such a system in production instead of just using ZFS on the disks directly?
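
To illustrate what you give up: when ZFS itself has the redundancy (mirror/raidz), a scrub can detect and repair exactly those errors (the pool name tank is just an example):

# read and checksum every block; corrupt blocks are rewritten from a good copy
zpool scrub tank
# the CKSUM column and the "scan: ... repaired" line show what was found and fixed
zpool status -v tank

On a single device (or a single HW RAID LUN) ZFS can still detect the corruption, but it cannot repair it unless copies > 1.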


If you built a RAID with ZFS on top of a HW RAID, you would basically be using two levels of RAID.
Yes, but why (except 'because I can')? I cannot come up with a valid reason to do so. Do you have one?


I think the market gave up on HW RAID controllers.
Now with NVMe it's different, yet there are still a lot of SATA/SAS controllers available. Maybe not in the consumer market ... I don't care about that.


Same with ECC RAM. It is extremely complicated to buy consumer HW that supports ECC RAM, because the combination of motherboard, CPU, and of course RAM must match and work together.
I was under the impression that AMD has ECC support in their prosumer Ryzen chips, and HCLs have always been a thing, so I don't get why this should be a problem.
 
Yes, and a single bit error that ZFS reads back leads to corruption.
Hmm, but that's why there is ECC RAM, e.g. I run all my ZFS on systems with ECC RAM.
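
By the way, a quick way to check whether a box actually runs with ECC (just a sketch, needs root):

# "Error Correction Type" should report e.g. "Single-bit ECC" or "Multi-bit ECC", not "None"
dmidecode -t memory | grep -i 'error correction'
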
Do you have one?
Nope, which is why I said that I would not recommend it.

I was under the impression that AMD has ECC support in their prosumer Ryzen chips, and HCLs have always been a thing, so I don't get why this should be a problem.
Yes, this is true. But why do I need a PRO CPU that is so much more expensive than a regular CPU? Truth be told, according to HW designers these are artificially inflated prices. My point is that the market should ban non-ECC RAM. This would also have the advantage that people would not have to research for hours or days to find out which HW supports ECC.
In almost all situations, the things that can happen with non-ECC RAM are bad - even for a consumer. So why does it even exist? To make money, of course, at which point we come back to the inflated prices.

I could tell you stories about HDMI and how unstable a chain of HDMI devices is. But the manufacturers and the market went ahead with these specs and flawed design anyway. Ok, I am way off-topic and will shut up now.

But yes, if we were talking about a DC, I wouldn't care either. But I do not have a DC at home to build my Proxmox server in, nor do I want to spend $10,000 on proper HW.
 
Hmm, but that's why there is ECC RAM, e.g. I run all my ZFS on systems with ECC RAM.
RAM is only one source of errors. There are also the disks themselves (bit rot, energy spikes, firmware), the cables, the controller, the mainboard, and also the CPU. All of them can have errors, or you're just unlucky and get hit by high-energy particles. There is a nice paper about this.

But why do I need a PRO CPU that is so much more expensive than a regular CPU?
Yes, very sad indeed.
 
There are also the disks themselves (bit rot, energy spikes, firmware), the cables, the controller, the mainboard, and also the CPU. All of them can have errors, or you're just unlucky and get hit by high-energy particles.
OK, agreed, this makes sense. In that case a SW RAID makes more sense. Although some of those things could not be mitigated by ZFS either - e.g. let's say there is an issue with the controller that messes with all connected disks; what can ZFS do then? But I see your point. Thanks.
 
OK, agreed, this makes sense. In that case a SW RAID makes more sense. Although some of those things could not be mitigated by ZFS either - e.g. let's say there is an issue with the controller that messes with all connected disks; what can ZFS do then? But I see your point. Thanks.
Report it, but yeah. ZFS does not solve everything, but it solves more than any other filesystem out there at the moment.
 
Since you're using the LFF model, it doesn't have a CD-ROM slot, but check whether there's a SATA header on the board - you should be able to get an adapter (something like this) to connect and power a SATA SSD internally, separate from the P420.
Hi,

I have installed Proxmox on an SD card, but when I try to configure the RAID controller in HBA mode so I can create a ZFS pool, it is not recognized.

[screenshot attachment]

Can you give me a hand?
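
If it helps with diagnosis, I can also run checks like these from the Proxmox shell (just a sketch, output not included here):

# is the Smart Array controller visible on the PCI bus?
lspci | grep -i 'smart array'
# did the hpsa driver bind to it and detect drives?
dmesg | grep -i hpsa
# which block devices does the kernel see?
lsblk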
 
