Can't install Proxmox on HP Proliant DL380 Gen9

Delex

New Member
Jan 12, 2015
Hello,
we just bought 3 HP ProLiant DL380 Gen9 servers on which we would like to install and purchase Proxmox, but we are not able to do it. During setup, Proxmox can't detect our 3 SATA disks, which are plugged into the HP Smart Array P840 RAID controller. The only drive it detects is the iLO one, which is ~512 MB instead of 3x3 TB. We have already switched the BIOS from UEFI to Legacy mode.

We tried Proxmox 3.1 and 3.3.

Is there anything we could do?

Thank you in advance!

BR

Alex
 
Have you tried to disable the Smart Array controller? I installed Proxmox 3.3 on an HP DL180 Gen9 server (with an unsupported RAID controller) in Legacy mode using "SATA AHCI mode".

I also installed a DL 360 Gen9 with a supported controller. I posted some info about that in the proxmox installation forum last month (http://forum.proxmox.com/threads/19...ng-On-and-Secure-Boot-Off?p=104765#post104765)

Good luck,

Manuel Martínez

PS: I've found this info related to HP Gen9 servers and Ubuntu (http://h20564.www2.hp.com/hpsc/doc/public/display?docId=emr_na-c04533283&sp4ts.oid=7271241). In case your controller is not supported, you should disable it using SATA AHCI mode...
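If it's unclear whether the installer can see the Smart Array controller at all, a quick check from the installer's debug shell (or any live Linux environment) is to look for the controller on the PCI bus and confirm the hpsa driver loaded. A rough sketch, assuming a standard Linux shell is available on the boot medium:

```shell
# List PCI devices and look for the Smart Array controller
lspci -nn | grep -i 'smart array'

# Check whether the hpsa kernel module (HP Smart Array driver) is loaded
lsmod | grep hpsa

# Scan kernel messages for controller or disk detection errors
dmesg | grep -iE 'hpsa|scsi|sd[a-z]'
```

If lspci shows the controller but no hpsa module is loaded, the installer's kernel is likely too old to support that controller.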
 
Just checking in on the status of this one. Has anyone successfully installed Proxmox onto an HP Gen9 running either the P440 or the P840?

Looks like the latest is

version: 3.4.0-1-RH1

I think I need 3.4.4_1 or higher.
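If the concern is which hpsa driver version a given installer or kernel ships, it can be checked from a shell on the booted medium. A quick sketch, assuming a Linux shell with the usual module tools:

```shell
# Show the version of the hpsa driver bundled with the running kernel
modinfo hpsa | grep -i '^version'

# The kernel version itself also matters for Gen9 controller support
uname -r
```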
 
Try pulling out all drives except the disk you want to install on. I just installed on a G6, and it wouldn't install on my RAIDed install disk until I pulled the other drives out.

Hope that helps.
 
Sorry for the old thread resurrection, but we're trying to install PVE 4.3 onto an HP DL380 Gen9 (via Debian packages due to some specific network & software RAID config).

The server is in legacy BIOS mode, and the P440ar Smart Array controller is in HBA mode as we want to use ZFS if possible.

The system installs fine, but then won't boot from the hard disk. The option to enable SATA AHCI support is greyed out in BIOS.

The option to enable SATA AHCI is available if we switch to UEFI but the system still wasn't booting - it's possible we were doing something wrong with the UEFI setup as no-one on the team has used it before.

The system will boot from a rescue disk and everything's present on the filesystem when it does. So this feels like a boot loader/boot disk problem.

So, question: has anyone managed to get a DL380 (or other Gen9 server) successfully booting without using the hardware RAID?
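Since the system boots from a rescue disk and the filesystem is intact, reinstalling the boot loader from a chroot is a common next step when the symptom is "installs fine but won't boot". A rough sketch, assuming a legacy-BIOS install with an ext4 root on /dev/sda1 (device names are illustrative; a ZFS root would instead need the pool imported and its datasets mounted before the chroot):

```shell
# From the rescue environment: mount the installed system and chroot in
mount /dev/sda1 /mnt                # root filesystem (assumed device)
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt

# Reinstall GRUB to the boot disk's MBR and regenerate its config
grub-install /dev/sda
update-grub
```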
 
I've installed PVE 4.3 with ZFS on a DL160 Gen9 server without problems. But I think I needed to tweak some BIOS settings regarding the legacy BIOS and the B140i controller. It looks like you've done this already, though.
 
We run 4.3 on a DL360 Gen9 booting in UEFI mode, but with hardware RAID, no problem. Why use HBA mode?
Why not benefit from your Smart Array controller's write cache and read-ahead?
If you don't want to use HW RAID, then just make a 1-to-1 mapping of logical to physical drives.
 
We want to use HBA mode because we want to use ZFS. And the recommended way to do that is to have ZFS manage the disks directly.

I couldn't work out a way to do that and have the server actually boot, at least not easily. So for now I'm using the P440ar Smart Array controller with all the data disks passed to ZFS as single-disk RAID 0 volumes. The OS/boot volume is RAID 1. Maybe not ideal, but it's worked OK so far.
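The single-disk RAID 0 workaround can be set up from the OS with HP's CLI tool. A sketch, assuming the hpssacli utility is installed, the controller sits in slot 0, and the drive bay addresses and device names below match your chassis (all of these are illustrative and should first be checked with `hpssacli ctrl slot=0 physicaldrive all show`):

```shell
# Create one RAID 0 logical drive per physical data disk (bay addresses are examples)
hpssacli ctrl slot=0 create type=ld drives=1I:1:2 raid=0
hpssacli ctrl slot=0 create type=ld drives=1I:1:3 raid=0
hpssacli ctrl slot=0 create type=ld drives=1I:1:4 raid=0

# Verify the resulting logical drives
hpssacli ctrl slot=0 logicaldrive all show

# Build the pool from the exposed block devices (device names are examples)
zpool create -o ashift=12 tank raidz1 /dev/sdb /dev/sdc /dev/sdd
```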
 
I think this merely refers to "do not put a volume manager between the device and ZFS", not that you couldn't use a smart controller and its write cache.

Yes, that's what I meant: one single-disk RAID 0 'volume' per drive. I think that is ideal, IMHO.
 
From what I know, you should use either HBA or JBOD for ZFS. Otherwise, all bets are off.
 
Consider your controller part of your drive(s); then it's like an HBA/JBOD, IMHO. I don't think all bets are off: ZFS can also be run across, or on top of, HW-RAIDed devices if desired, like SAN LUNs etc., no problem. I've done that for large Oracle RDBMSes.
 
You might have been doing this and luckily avoided any problems, but that does not change the fact that what you are trying to make others do is plain wrong, and against everything recommended by the ZFS developers and the ZFS community.
 
:) As always, know what you are doing and what you are dealing with. See more at
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes. Having an NVRAM-backed controller doesn't break ZFS, but it might be inefficient in some cases:

'Contact you storage vendor for instructions on how to tell the storage devices to ignore the cache flushes sent by ZFS.

If you are not able to configure the storage device in an appropriate way, the preferred mechanism is to tune sd.conf specifically for your storage. See the instructions below.

As a last resort, when all LUNs exposed to ZFS come from NVRAM-protected storage array and procedures ensure that no unprotected LUNs will be added in the future, ZFS can be tuned to not issue the flush requests by setting zfs_nocacheflush.'
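That guide is written for Solaris; on ZFS-on-Linux (as used by Proxmox), the equivalent last-resort knob is the zfs_nocacheflush module parameter. A sketch of how it could be set, assuming all storage behind the pool is NVRAM-protected and the warnings above are understood:

```shell
# Check the current setting (0 = ZFS issues cache flushes, the safe default)
cat /sys/module/zfs/parameters/zfs_nocacheflush

# Disable cache flushes at runtime (only safe with NVRAM-protected storage)
echo 1 > /sys/module/zfs/parameters/zfs_nocacheflush

# Make the setting persistent across reboots
echo 'options zfs zfs_nocacheflush=1' > /etc/modprobe.d/zfs.conf
```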
 
