Brand new HP ProLiant DL360 Gen9: issue booting into Proxmox

threetoedsloth6

New Member
Feb 7, 2024
Installed Proxmox fine, but now when the server boots it just goes to PXE and does nothing. I tried to go to the boot menu and press Enter on the Proxmox entry, and it just sends me back to the BIOS area. Am I doing something wrong?
 
You need to provide more details on the BIOS and hardware setup, where you (think you) installed PVE, and how the drives are set up on the machine.
 
The same exact thing is happening to me. During installation I chose RAID10 with all four drives. I fear this has something to do with the hardware RAID controller on the server, but I'm looking to explore other avenues first.
 
You have to destroy the HW RAID, if it exists, and use separate HDDs, not RAIDed.
 
You have to destroy the HW RAID, if it exists, and use separate HDDs, not RAIDed.
The server definitely came with a RAID controller (HP Smart Array B140i). I did not do anything with it before, during, or after installation. I only chose ZFS RAID10 during installation. Proxmox installed correctly; however, I am unable to boot into it and do not see any of the drives or anything resembling Proxmox in the boot menu.

[attached screenshot of the boot menu: chrome_t5Yaf0dkjD.png]


I am almost certain this has something to do with RAID / the RAID setup. I would like to have a RAID10 setup, but if there is a better way, I am open to that. If it is a RAID issue, I think I'm going to need a little help working my way through it. By "destroy", is there a way I can just "ignore" it?
 
Proxmox (maybe only with ZFS) hates HW RAIDs. So you have to destroy any RAID volumes in the RAID BIOS of that HP server and leave all disks alone, then reinstall Proxmox and choose all disks and ZFS RAID10.
 
The server definitely came with a RAID controller (HP Smart Array B140i). I did not do anything with it before, during, or after installation. I only chose ZFS RAID10 during installation. Proxmox installed correctly; however, I am unable to boot into it and do not see any of the drives or anything resembling Proxmox in the boot menu.

ZFS is never a good idea for this:
https://openzfs.github.io/openzfs-d...uning/Hardware.html#hardware-raid-controllers

Why not install a normal filesystem on root and leave it on the HW RAID?
 
ZFS is never a good idea to install on HW RAID.
But there are plenty of users satisfied with ZFS.
@esi_y is obviously not one of them.
But if you don't need some of ZFS's features, you can stay with HW RAID.
 
ZFS is never a good idea to install on HW RAID.
But there are plenty of users satisfied with ZFS.
@esi_y is obviously not one of them.

:D There's a performance cost to ZFS for a hypervisor. And there are troubleshooting (footgun) issues with ZFS on the root. If the OP needs some ZFS feature for the guest pool, by all means, go for it and follow the ZFS recommendations.

These are not Proxmox VE limits; they are purely filesystem related.

Yes, I have (some) issues with ZFS; I see we have already met in another thread on this:
https://forum.proxmox.com/threads/lost-all-data-on-zfs-raid10.154843/page-3#post-705651

EDIT: I just realised this was your very own thread on losing a mirrored stripe pool. :oops: Odd...

But if you don't need some of ZFS's features, you can stay with HW RAID.

I just noticed lots of people install PVE on perfectly good systems with mature RAID cards and then try to undo all the hardware layer they already have in place (i.e. they did NOT save costs) in order to satisfy ZFS. I believe the filesystem should satisfy the user's needs, not the other way around.

Finally, there's an additional learning curve - if the OP uses the wrong terms, like RAID10 for ZFS, it tells me they are not familiar with the filesystem and it might be a very wrong choice for them.
 
Technically, not OP... but same issue as OP. ;)

But yes, very new to all of this. This is just a homelab setup to improve on that learning curve!

Advanced degrees and certs teach lots of theory, but putting it into practice is another thing. And YouTubers only go so deep in what they explain. So, I really appreciate all of the insight!

I am not bound to ZFS; it just seems like a good filesystem that plays nice with Proxmox and I wanted to give it a try. I was saying ZFS RAID10 because that is how Proxmox labels it during installation.

My IT track was helpdesk, jr. cyber sec admin, cyber sec... so my server admin skills and networking skills are barely passable. Definitely looking to improve, which is the whole purpose of this server and eventual homelab. Unfortunately this server has sat for 3 years while I was working on some other projects. I'm ready to dive in now.

Back to the OP: even if the RAIDs are competing with each other, wouldn't I still be able to see something that resembles Proxmox in the boot menu?

I am in the server system options now, trying to figure out how to "disable" the RAID controller, and then deciding if that's what I want to do.

I am sure this goes beyond a Proxmox-specific message board, but any advice on how to configure/set up the server for a clean install of Proxmox is welcome. Just remember to take it easy - I'm kind of smart, but also kind of dumb. :D

I have no real end goal except to learn as much as possible.

Thanks!
 
But yes, very new to all of this. This is just a homelab setup to improve on that learning curve!

Then why the RAID? :) I mean on the root. I would install PVE as ordinarily as possible and then create e.g. just an extra ZFS pool (for guests) to experiment on.
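
For illustration, something like this after a plain install would give you a striped pair of mirrors for the guests (just a sketch; the pool name and device names are placeholders for your four data disks, check lsblk first):

# create a pool of two mirrored vdevs (what the installer calls "RAID10") for guest storage only
zpool create -o ashift=12 guests mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
# register it in PVE as a storage for VM/CT disks
pvesm add zfspool guests --pool guests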

I was saying ZFS RAID10 because that is how Proxmox labels it during installation.
The developers must be secretly using something else themselves, so they never bothered to pick up the proper ZFS designations. ;)

if the RAIDs are competing with each other, wouldn't I still be able to see something that resembles Proxmox in the boot menu?

Yes, you should; one of the "Linux Boot Manager" entries was probably it ... but ZFS on a Linux root is a bit of a hack and PVE needs its very own tool to work around it: https://pve.proxmox.com/wiki/Host_Bootloader
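
If you can get a shell on the installed system (e.g. via the installer's debug/rescue mode), a quick sanity check would be something along these lines (a sketch, assuming a UEFI install managed by proxmox-boot-tool):

# show which ESPs proxmox-boot-tool knows about and which kernels are synced to them
proxmox-boot-tool status
# list the UEFI boot entries the firmware actually sees
efibootmgr -v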

Something I can't really troubleshoot, but maybe someone else will?

I am in the server system options now, trying to figure out how to "disable" the RAID controller, and then deciding if that's what I want to do.

Page 6 at the bottom:
https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c04441385

I am sure this goes beyond a Proxmox-specific message board, but any advice on how to configure/set up the server for a clean install of Proxmox is welcome. Just remember to take it easy - I'm kind of smart, but also kind of dumb. :D

No worries, people ask lots of not-strictly-PVE questions here. BTW, putting e.g. the hardware / controller name into the title could help get more (valid) responses.

On second thought, do NOT edit your posts. With a new account on this forum, once you start editing, it marks you as spam and you will sometimes be waiting until the next day for your posts to be manually approved.

I have no real end goal except to learn as much as possible.

Time to do everything wrong in the first round, then! :D
 
@esi_y I can't call myself a friend of ZFS, but I am not an enemy either. As I wrote before, I have dozens of PVE and PBS installations running with ZFS. Some of them use ZFS only (for root too). I have one installation on HP too (with the HW RAID functionality disabled, because I was curious what would happen). I have many installations using 2 or 4 HDDs (Proxmox calls it ZFS RAID1 or RAID10). I have a couple of installations with a single NVMe for the OS and 4x SSD for a ZFS data pool.
ZFS has failed me only in that one case so far (knocking on wood).
I can imagine I don't need ZFS on a server with HW RAID. But I'm not sure what to use for a non-HW-RAID scenario, because Proxmox has some issues with SW RAIDs. I saw some guy on YT with benchmarks, but according to his measurements the worst is BTRFS (and the slowest too).
Anyway, in this scenario I would prefer using HW RAID too (without ZFS) if it is already part of the HP server.
But will snapshots of VMs work with LVM, or does LVM-thin need to be used?
 
@esi_y I can't call myself a friend of ZFS, but I am not an enemy either.

No worries, I do believe I am alright with ZFS because I have used it; I just do not like it for a PVE-like scenario, especially not on root. I simply comment on these threads so that people know it's not just one side of the coin - there's enough pro-ZFS bias here. That's all.

Proxmox has some issues with SW RAIDs

Do you mean mdadm or LVM? What issues exactly? (I always ask, I just want to know - I am NOT implying it is incorrect.)

I saw some guy on YT with benchmarks, but according to his measurements the worst is BTRFS (and the slowest too).

Can you share a link? Just a note, I am not pro-BTRFS or anything like that; I believe I only said in other places that if one needs copy-on-write, one might as well do BTRFS in 2024.

Anyway, in this scenario I would prefer using HW RAID too (without ZFS) if it is already part of the HP server.
But will snapshots of VMs work with LVM, or does LVM-thin need to be used?

When you have a regular HW RAID, the OS does not even really know about it; the array appears to it as a single drive. Even ZFS will work on that; it is just counterproductive (for the reasons I linked above).
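
For example, from the installer shell, something like this (purely illustrative) shows whether the controller is exposing one big logical drive or the four individual disks:

# on a HW RAID volume you would expect one logical drive here,
# in AHCI/HBA mode you would expect the four individual disks
lsblk -o NAME,SIZE,MODEL,TYPE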
 
When you have a regular HW RAID, the OS does not even really know about it; the array appears to it as a single drive. Even ZFS will work on that; it is just counterproductive (for the reasons I linked above).

I forgot to add, for @moreramneeded: I somehow do not think the fact that you cannot boot is RAID or ZFS related, but I just do not really want to find myself troubleshooting ZFS booting of PVE. :D If you are learning, you might as well do ZFS; you will learn more. But I would still make a non-ZFS install and then create a ZFS pool for guests.

Your boot entry is likely one of those Linux Boot Manager ones.
 
I forgot to add, for @moreramneeded: I somehow do not think the fact that you cannot boot is RAID or ZFS related, but I just do not really want to find myself troubleshooting ZFS booting of PVE. :D If you are learning, you might as well do ZFS; you will learn more. But I would still make a non-ZFS install and then create a ZFS pool for guests.

Your boot entry is likely one of those Linux Boot Manager ones.
Well, I am not sure. I assume that he has some kind of logical volume on the HW RAID, but it seems that the PVE installer saw all the disks instead of the LV, because he has too many boot options.
 

I referenced this very thread from this BZ [1] regarding needed documentation. For me, it's not an issue; I can't make them put something else in their installer than what they want (or call ZFS vdev configurations correctly), but it does not really worry me, as there are other things which are equally important and not part of the default installer (e.g. LUKS). So if you can live with it not being "out-of-the-box", and with cache=none, MDRAID works just fine.
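
As a rough sketch of what that looks like on top of a plain Debian/PVE install (device names and mount point are placeholders, not a recipe for the OP's box):

# assemble four disks into an MD RAID10 array for guest storage
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# put an ordinary filesystem on top and mount it
mkfs.xfs /dev/md0
mkdir -p /mnt/guests
mount /dev/md0 /mnt/guests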


Alright, unfortunately he does NOT mention what exact benchmark (other than "SQLite") he was using, but just to explain it from my perspective: BTRFS is e.g. the default on a Fedora install; if you install an OS and need a copy-on-write filesystem, it works really nicely. I could literally run some benchmark to compare (the comparable). I know there were issues with e.g. RAID5/6 on BTRFS, but that is not what I would run (same for RAIDZs); I think of it more as a filesystem for when one needs copy-on-write that performs and is straightforward to use on Linux.

Now, should the PVE root be on BTRFS? I do not see much point; it is not something I need snapshots of, but the same is true for ZFS (on root).

When it comes to BTRFS as a choice in PVE for guests, I actually prefer QCOW2 for VMs (this is not a PVE-specific preference; anything QEMU/KVM, really). If you have QCOW2, it is counterproductive to put it on anything copy-on-write (BTRFS or ZFS); you can use its snapshots (which are superior). BTRFS does not have an equivalent of ZVOLs. But for me, ZVOLs on ZFS are buggy, so I am back to QCOW2 (or RAW) on an ordinary dataset.
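
To illustrate the QCOW2 point with plain QEMU tooling (file name, snapshot name and size are made up):

# create a thin-provisioned QCOW2 image for a VM disk
qemu-img create -f qcow2 vm-disk.qcow2 32G
# take and list internal snapshots - no copy-on-write filesystem underneath required
qemu-img snapshot -c before-upgrade vm-disk.qcow2
qemu-img snapshot -l vm-disk.qcow2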

When it comes to BTRFS as a choice for LXCs, it works really well, but I do not have a benchmark (this would be comparing an ordinary ZFS dataset with an ordinary subvolume, something that actually makes sense).



Regarding benchmarks, however: did you listen to the last minute of the said video? It is important because, besides explaining why ZFS wastes so much RAM, it also covers why the benchmark could be skewed.

EDIT: Just as I posted this reply I got this in my inbox [2] ... talking of ARC ... what a coincidence. See, I am not an enemy after all... ;)

EDIT2: I just want to reference BTRFS subvolumes as explained here [3], in case it got confusing, because unfortunately every setup uses some clashing terms; unlike with LVM, they are not block devices (and PVE, on top of that, calls what it stores on ZFS ZVOLs subvolumes).


[1] https://bugzilla.proxmox.com/show_bug.cgi?id=5235
[2] https://zfsonlinux.topicbox.com/groups/zfs-discuss/T46ac45cacf647028-Md75f9c2bc40d7b3ae540e932
[3] https://btrfs.readthedocs.io/en/latest/Subvolumes.html
 
I referenced this very thread from this BZ [1] regarding needed documentation. For me, it's not an issue; I can't make them put something else in their installer than what they want (or call ZFS vdev configurations correctly), but it does not really worry me, as there are other things which are equally important and not part of the default installer (e.g. LUKS). So if you can live with it not being "out-of-the-box", and with cache=none, MDRAID works just fine.
https://pve.proxmox.com/wiki/Software_RAID

It seems that ZFS is the only official option for RAID according to Proxmox staff.
 
https://pve.proxmox.com/wiki/Software_RAID

It seems that ZFS is the only official option for RAID according to Proxmox staff.

EDIT: References

I have to chuckle when I see the edits though:
https://pve.proxmox.com/mediawiki/index.php?title=Software_RAID&diff=next&oldid=2830

Thu May 8 11:37:11 CEST 2008
We initially had software raid, but removed support because it is to
difficult to recover after craches.

- Dietmar
Source: https://lists.proxmox.com/pipermail/pve-user/2008-May/000015.html

And: https://forum.proxmox.com/threads/new-features.398/#post-1610

EDIT2:
So, in summary: in 2008, it was thought the users were too inept to read instructions, and with all the rest (LVM, etc.) it would just be more complicated for them to support. Then ZFS came along, so now we are stuck with that. I would go as far as saying that, since the next stepping stone was to support Ceph, the non-converged setups will be stuck with ZFS-only forever, at least as far as the PVE installer is concerned.

It looks like someone does not like mdadm, but these statements (on the wiki) are quite often unsubstantiated at best or self-serving at worst. E.g. any filesystem can be used with dm-integrity [1].
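
For example, with the standalone dm-integrity tooling from the cryptsetup package (a sketch; the device name is a placeholder and formatting it destroys existing data - in practice you would more likely pair it with LUKS or MD):

# add per-sector checksums to a block device, then open it as a mapped device
integritysetup format /dev/sdb
integritysetup open /dev/sdb sdb_int
# any filesystem can sit on top of the checksummed device
mkfs.ext4 /dev/mapper/sdb_int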

I don't know what you expect me to say; it's like this with everything here. E.g. first I read several times on the forum that installing on top of Debian was not supported, then later, once everyone with NVIDIA came along saying they cannot even install, it became a supported installation method.

As you can see, these wikis are as stable as a wiki can be.

[1] https://docs.kernel.org/admin-guide/device-mapper/dm-integrity.html
 
@jhr On a separate note (not trying to change the topic, this is just a logical chain) ... why do you need RAID for a hypervisor? High availability? That's what clusters are for. Checksumming? That's not RAID dependent. Disaster recovery? That's what backups are for.

My point being, the most straightforward install, I believe, is on e.g. a single XFS drive (on each node in the cluster), and then use some shared storage for guests & HA.

EDIT: You also overlooked LVM mirroring, if you insist on something simple and already "on the box".
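
A minimal sketch of that, assuming a volume group named pve with at least two physical disks in it (names and size are placeholders):

# create a mirrored logical volume (RAID1 via device-mapper) for guest images
lvcreate --type raid1 -m 1 -L 200G -n guests pve
mkfs.xfs /dev/pve/guests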

But I suspect that when you read through the wiki, also on ZFS, you realise that 10 years ago someone at Proxmox believed it would be the one tool for it all ... RAIDs, replication, snapshotting (LXCs), ZVOLs ... it would have worked out beautifully if the marketing reflected reality. Then think again: when will e.g. Red Hat support ZFS? :)
 
