Got brand new HP ProLiant DL360 Gen9, issue booting into Proxmox

When it comes to BTRFS as a choice in PVE for guests, I actually prefer QCOW2 for VMs (this is not a PVE-specific preference, it applies to anything QEMU/KVM, really). If you use QCOW2, it is counterproductive to put it on anything copy-on-write (BTRFS or ZFS), since you can use its own snapshots (which are superior). BTRFS does not have an equivalent of ZVOLs, and for me ZVOLs on ZFS are buggy anyway, so I am back to QCOW2 (or RAW) on an ordinary dataset.

So yes, confirmed: on a plain install, upon selecting BTRFS, you get BTRFS subvolumes for the VMs and the disks are stored there as raw files, e.g. /var/lib/pve/local-btrfs/images/100/vm-100-disk-0/disk.raw ... I believe it is this setup that the YT video maker was benchmarking.
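Anyone can verify this on such an install - a quick check, assuming the default storage path from the installer (the VM ID is just an example):

    btrfs subvolume list /var/lib/pve/local-btrfs                 # one subvolume per guest disk
    ls -lh /var/lib/pve/local-btrfs/images/100/vm-100-disk-0/     # the raw image lives inside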

I actually believe the bad results; it is, after all, a copy-on-write filesystem. What I do not believe is the very good benchmark for ZFS - the two should be roughly on par (both much worse than a regular filesystem).
 
I do not need RAID for the Proxmox OS, because it's easy to reinstall (although less downtime is better). But I would like to have all customer data on some kind of RAID1 or RAID10. I did some tests a couple of years ago with mdadm RAIDs on some HW with different filesystems and I realized RAID10 is very nice for redundancy and for read/write speed. So I want RAID for data because I fear HDD failure; I have seen many wrecked HDDs in my life. Of course I am doing backups, but that is not enough. Of course RAID is a much cheaper solution than building an HA cluster, don't you think? LVM RAID is slower than mdadm RAID, and I want some speed; I don't want to have data on some kind of slow shared storage.
 
I do not need RAID for the Proxmox OS, because it's easy to reinstall (although less downtime is better).

I do not want to detract entirely from the original topics, but if there is any shared storage on the network that is already HA, I would encourage anyone happy to manage the system like a normal Debian (which it is) to PXE boot, i.e. have the host entirely diskless. There's only one location that should be mounted locally (I would keep it a regular GPT partition - EDIT: or even better, a ramdisk!) and that's /var/lib/pve-cluster . That way, you will never have downtime from a failed OS disk. And for the OS it does not matter that it is "slow".
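(Purely as an illustration of the ramdisk variant - a single fstab line, with the caveat that /var/lib/pve-cluster holds the config database, so this assumes you restore it from a backup on every boot; the size is arbitrary:)

    tmpfs  /var/lib/pve-cluster  tmpfs  size=128M,mode=0700  0  0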

But I would like to have all customer data on some kind of RAID1 or RAID10. I did some tests a couple of years ago with mdadm RAIDs on some HW with different filesystems and I realized RAID10 is very nice for redundancy and for read/write speed. So I want RAID for data because I fear HDD failure; I have seen many wrecked HDDs in my life. Of course I am doing backups, but that is not enough.

That's a legitimate concern. What I do not understand is: if you can actually run your own mdadm, how did you get convinced that ZFS was somehow superior (or mdadm unusable)? I can only guess it's the wiki page's fault. Then when you look at the BZ, it is clear that they just do not want to provide support for it. I am saying this now that you have experienced a ZFS pool equivalent to RAID10 gone on reboot. How does that satisfy the requirement of either redundancy or availability?
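Just for the record, since it keeps coming up - a RAID10 array with mdadm is essentially a one-liner (device names below are purely illustrative):

    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.ext4 /dev/md0    # then use it like any other block device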

LVM RAID is slower than mdadm RAID, and I want some speed; I don't want to have data on some kind of slow shared storage.

I agree, but was ZFS (or BTRFS) any faster? By definition that is impossible.

Of course RAID is a much cheaper solution than building an HA cluster, don't you think?

I believe the reason for all these choices is actually cost - not just financial cost, but the cost of supporting something. E.g. Proxmox do not want to support too many configurations (it's easier to train a new employee, and for them to feel comfortable, when they only have to push and support ZFS or Ceph). It likely saves support load, which saves valuable resources and allows them to provide the solution at low cost (there is even a tier without support).

This cost aspect probably carries further still, e.g. why even choose PVE if you do not need a cluster? You could have just libvirt with Cockpit, or Incus (which has supported QEMU guests for a while now) with the rudimentary but neat GUI from Canonical. I will not mention the others, which are more complex to set up (that's added cost on your side).

PVE has its place where clusters are needed; it fits that niche. There must also be features users go for in non-cluster setups, but I do not know what they are (the GUI?). I would find it very comical if that feature were ZFS, i.e. if ZFS was such a marketing success that people now go for PVE (even non-clustered) because it supports ZFS, because they believe it is somehow superior, because they were told so by Proxmox.

Or it might just be the popularity, and then e.g. the ability to ask on a forum like this.

EDIT: When you look at other posts of this kind [1] (GlusterFS support), it's a similarly recurrent topic. Proxmox chose one option over another to integrate neatly (e.g. Ceph). It is a business decision, not a technical one. That is alright with me, I just wish they would stop trying to discredit the other valid choices on the wiki.

EDIT2: And talking of cost, you would not need to follow the silly 2024 recommendations about PLP SSDs if you were NOT using ZFS, and you could use commodity SSDs without shredding through their TBW like nothing. It makes no sense to first throw away RAID controllers with battery backups and all that, and then push SSDs at double the cost and put them into mirrors for potentially quadruple the cost. For that money I might as well have another node built from commodity hardware, with higher availability and redundancy overall. Also, without a copy-on-write filesystem eating your RAM, you can squeeze more customers onto the same hardware with the same or better performance.

[1] https://forum.proxmox.com/threads/c...sterfs-reachability.154087/page-4#post-707679
 
Why am I not using mdadm with Proxmox and going for ZFS instead? I think it's easy: because the Proxmox installer does not support mdadm and warns users not to use mdadm and offers ZFS during the installation process. And until now I haven't had any big troubles with ZFS. When one of my four HDDs got wrecked, I replaced it without any problems and without any data loss (just like mdadm).
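(For anyone curious, the replacement is essentially just the usual zpool dance - something like this, with pool and device names being examples only:)

    zpool status                              # identify the failed disk
    zpool replace rpool /dev/sdb /dev/sdf     # swap in the new drive
    zpool status                              # watch the resilver progress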

PVE has its place where clusters are needed; it fits that niche. There must also be features users go for in non-cluster setups, but I do not know what they are (the GUI?).
My primary use of PVE is not for clustering (I really doubt any user picks PVE for clustering in the first place), but for virtualization, of course :) I moved from physical servers to Xen and then from Xen to PVE.

So maybe I am an old-school guy, but I would like to have at least two HDDs in a mirror (for sure) and all data mirrored of course (to protect against HW failure).

And I don't want to use some kind of slow network storage, Ceph, etc., because I need high read/write performance.

Anyway @threetoedsloth6 is not responding at all.
 
Why am I not using mdadm with Proxmox and going for ZFS instead? I think it's easy: because the Proxmox installer does not support mdadm and

That's correct, but then it also does not support LUKS encryption, so are you e.g. keeping your customer data in plaintext on the drives just because the installer from Proxmox does not care about it?

warns users not to use mdadm

From the prior posts, this is not the case; they just do not want to provide support for it. If you don't have support anyway, it is really your decision.

and offers ZFS during the installation process. And until now I haven't had any big troubles with ZFS. When one of my four HDDs got wrecked, I replaced it without any problems and without any data loss (just like mdadm).

Well, it's 1:0 so far in favour of mdadm from your last experience.

My primary use of PVE is not for clustering (I really doubt any user picks PVE for clustering in the first place), but for virtualization, of course :) I moved from physical servers to Xen and then from Xen to PVE.

So what is wrong with libvirt for a single machine? Or is it just not popular enough? But that would become off-topic on this forum. :) My point is mostly that if you e.g. install Fedora, you get BTRFS by default and you get Cockpit; you can then log in on port 9090 (I think) and do some administration of the VMs there. That is also out of the box, and unlike Proxmox it is not a rolling release, so it is more stable. And you will not be artificially running services that are useless for such a deployment, like pve-cluster. Don't get me wrong, I am not selling anything - just explore and compare?
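If memory serves, getting to that point is just a couple of commands (package names from memory, so double-check them):

    # Fedora: web console plus the virtual machines plugin
    dnf install cockpit cockpit-machines
    systemctl enable --now cockpit.socket
    # then log in at https://<host>:9090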

So maybe I am an old-school guy, but I would like to have at least two HDDs in a mirror (for sure) and all data mirrored of course (to protect against HW failure).

That's fine too; my only point was that we were all doing this just fine long before ZFS.

And I don't want to use some kind of slow network storage, Ceph, etc., because I need high read/write performance.

Then I would +1 for a non-CoW filesystem. ;)

Anyway @threetoedsloth6 is not responding at all.

Actually he is long gone; the thread was hijacked by @moreramneeded (now that I look at the user ID, I wonder if ZFS is the reason ;)) and then by us / me?

I think he might even have another issue there: he might have got PVE installed as BIOS boot by accident, and now the EFI only offers him the previous Linux bootloaders from the EFI NVRAM?
 
you two have been awesome! yesterday got away from me, so i have not been able to test. i was going to go through and reply to specific topics you each made, but that seems fruitless with everything that has been discussed. i will just try to bullet a few of them:
  1. i thought that one of the "linux boot manager" entries would work as well. all they did was take me to the system utilities menu. i also tried the "embedded cd/dvd rom : dynamic smart array b140i - sata optical drive 1" option. i forget exactly where that took me, but it did not work either.
  2. i was actually reading that hpe doc yesterday right before you sent it. i was thinking like @jhr that there might be logical drives already configured. i was also going to try and disable the hw raid controller. when i was looking for logical drives and when trying to disable the hw raid controller the server would hang up on this screen (slightly different for each operation, but basically the same), so i am worried this might be a server issue: (screenshot attached)
  3. i think i am going to try just installing without the zfs option and see how that goes - i will definitely report back (for your feedback, to help op, and help any future individuals in this situation).
  4. you two really started going down the filesystem/raid rabbit hole. i'd like to explore that more, especially btrfs.
  5. i was choosing raid 10 as well for redundancy and speed. i bought the server 3 years ago, refurbished, so i wanted to make sure to have some failure protection in case it crapped out and destroyed a drive in the process. unfortunately it sat for 3 years, so now the warranty is long gone. hoping this doesn't come back to bite me.
  6. being in the field i am in, i wanted to do full-disk encryption. in my research it looks like it is possible through LUKS, by running proxmox on top of a debian install. in the proxmox install documentation, they advise against this because it is easy to jack up (so they say). i wanted to keep things simple, get things up and running for proof of concept, and get the easy W to start this whole homelab adventure. with the intention of going back and rebuilding properly to have full-disk encryption.
that's it for now. i'll keep you posted on how it goes, and i look forward to your advice on the above in the meantime!
 
so i am worried this might be a server issue: (screenshot attached)

Don't worry, that means the boot loader can't find/see/mount the init. It could be because of ZFS. Feel free to try without ZFS. If you go without ZFS, you can safely create one or more logical volumes in the HW RAID BIOS and then select them inside the Proxmox installer.
 
you two have been awesome! yesterday got away from me, so i have not been able to test.

Glad you did not run away because of me. :D

when i was looking for logical drives and when trying to disable the hw raid controller the server would hang up on this screen (slightly different for each operation, but basically the same), so i am worried this might be a server issue

This is an excellent problem to have: you are booting, your bootloader is there, but because you were reconfiguring the BIOS/UEFI, it simply might not be able to find where to load the system from.

i'd like to explore that more, especially btrfs

Just for the record / clarity, I am not here to push BTRFS. In my view you have two things to consider: KVM/QEMU machines and LXC containers. The former need a disk image, and I personally prefer the QCOW2 format for those. Putting that on any CoW filesystem (ZFS or BTRFS) is, in my view, nuts. The RAW format - according to the YT benchmark - carries a performance penalty on BTRFS, and I suspect ZFS has the same penalty. By definition, RAW should perform fine on LVM. For LXCs, if you want snapshots, you will want BTRFS, ZFS or LVM.
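(And if someone insists on keeping QCOW2 on BTRFS anyway, the usual mitigation is to mark the images directory NOCOW before creating any files in it - the path and disk name below are just examples:)

    mkdir -p /mnt/data/images
    chattr +C /mnt/data/images     # NOCOW only takes effect for files created afterwards
    qemu-img create -f qcow2 /mnt/data/images/vm-100-disk-0.qcow2 32G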

being in the field i am in, i wanted to do full-disk encryption

The easiest indeed is to do LUKS within the Debian installer, so basically follow:
https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm

But then keep in mind you are installing Debian, not Proxmox, so there is no ZFS-on-root option, but nicer options for partitioning (including the aforementioned LUKS).
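Roughly, once the Debian (+ LUKS) install is done, the rest boils down to the steps on that page - reproduced here from memory, so verify against the wiki before copy-pasting:

    # add the Proxmox VE repository and its key (Debian 12 "Bookworm")
    echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
    wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
    apt update && apt full-upgrade
    apt install proxmox-ve postfix open-iscsi chrony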

in the proxmox install documentation, they advise against this because it is easy to jack up (so they say).

Then again, they manage to mess up their own supported technologies; for instance, the bootloaders (something you are getting intimately familiar with now) are already a topic of their own:
https://forum.proxmox.com/threads/legacy-bios-instead-of-uefi.154099/#post-701472

i wanted to keep things simple, get things up and running for proof of concept, and get the easy W to start this whole homelab adventure.

The Debian installer is pretty straightforward. At the end of the day, Proxmox VE is nothing other than Debian with lots of scripting plus a so-called custom kernel (read: derived from Ubuntu's) so that it supports ZFS and LXC well (Ubuntu supports them, so they build good kernels for that).

rebuilding properly to have full-disk encryption

How do you plan to unlock the boot drive? Passphrase? Key on USB? SSH?
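(If the answer turns out to be SSH, the usual Debian approach is dropbear in the initramfs - a rough sketch from memory, and the exact paths may differ between Debian releases:)

    apt install dropbear-initramfs
    # add your public key for the initramfs environment
    echo "ssh-ed25519 AAAA... admin@laptop" >> /etc/dropbear/initramfs/authorized_keys
    update-initramfs -u
    # at boot: ssh root@<host> and run cryptroot-unlock to enter the passphrase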

I see this now:
Don't worry, that means the boot loader can't find/see/mount the init. It could be because of ZFS.

I did not want to say this. ;)

you can safely create one or more logical volumes in the HW RAID BIOS and then select them inside the Proxmox installer

That, and also a hack in the PVE installer is to (if I remember correctly) select the advanced options on the filesystem choice list and give the OS something like <10GB only. This prevents it from creating any extra partitions (no LVM-thin, nothing) for the guests. It is then completely up to you how you partition the rest. I think it's a good starting point for benchmarking without reinstalling everything all the time.
 
You can leave the Proxmox FS unencrypted and use LUKS inside all the VMs.

Yeah, but then the "full" is just full virtual-disk encryption, which I believe makes the encryption itself rather virtual as well. I mean, do not get me wrong, he could still keep the keys off the drive (e.g. on USB), but generally it is not a habit I would keep in production.
 
You can leave the Proxmox FS unencrypted and use LUKS inside all the VMs.
the plan is for full-disk and also guest os level encryption. probably overkill, but it makes me feel better. :)

Don't worry, that means the boot loader can't find/see/mount the init. It could be because of ZFS. Feel free to try without ZFS. If you go without ZFS, you can safely create one or more logical volumes in the HW RAID BIOS and then select them inside the Proxmox installer.
that makes me feel a little more confident. this is probably a stupid question, but can i create the logical volumes after the proxmox install, or do i have to do this before the install?

Glad you did not run away because of me. :D
not at all. silly as it sounds, i feel like i'm getting advice from two friends - extra helpful coworkers at the very least.

This is an excellent problem to have: you are booting, your bootloader is there, but because you were reconfiguring the BIOS/UEFI, it simply might not be able to find where to load the system from.
great! so like i said above, this makes me feel much more confident. it sounds like i can just do an install of proxmox on a single drive, then head back to the bios and create logical drives afterwards using the hw raid controller?


Just for the record / clarity, I am not here to push BTRFS. In my view you have two things to consider: KVM/QEMU machines and LXC containers. The former need a disk image, and I personally prefer the QCOW2 format for those. Putting that on any CoW filesystem (ZFS or BTRFS) is, in my view, nuts. The RAW format - according to the YT benchmark - carries a performance penalty on BTRFS, and I suspect ZFS has the same penalty. By definition, RAW should perform fine on LVM. For LXCs, if you want snapshots, you will want BTRFS, ZFS or LVM.
this one is more dynamic. i think the best way to describe it is there will be 3 separate functions i am looking to use the server for.
  1. home use / projects. self-hosting. smart home. different networking functions. plex. etc. this will probably be a mix of containers and kvms (mix of windows and linux).
  2. security/pen testing. creating isolated vm network to sharpen skills. i see this being mostly kvms.
  3. eventually i would like to build a business and host those functions on here. assuming this will be a mix of containers and kvms.
as my knowledge and these projects grow i will be looking to add more nodes (or is it clusters - another server hosting proxmox) to my setup to test HA and other corporate situations. and as that grows, i am looking to build a few setups geographically at friends/family members' houses (and business locations, if the idea takes off) for redundancy and availability.

The easiest indeed is to do LUKS within the Debian installer, so basically follow:
https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm

But then keep in mind you are installing Debian, not Proxmox, so there is no ZFS-on-root option, but nicer options for partitioning (including the aforementioned LUKS).
this next statement fits just about everything else, but is especially true here: my problem is i am a perfectionist and i like to have everything planned out before getting started. and with this many moving parts, i just need to start putting one foot in front of the other. i want to do really complex things with my setup, but since this is a learning device, i need to take baby steps - get the satisfaction from doing the easy things right and then build from there. it's going to be more work in the end, but i'll probably learn more (and faster) this way. don't worry, i'm sure we'll see each other in some future thread about getting proxmox running on top of debian! ha.

Then again, they manage to mess up their own supported technologies; for instance, the bootloaders (something you are getting intimately familiar with now) are already a topic of their own:
https://forum.proxmox.com/threads/legacy-bios-instead-of-uefi.154099/#post-701472
may loop back to this. reading some of the posts on that thread, i noticed the mention of grub. i'm certain that's what my server was using because it said "welcome to grub," or something to that effect. i wonder if that will happen if i try a normal install? i am very curious to see.

on that last note, sorry to the op (and all future people with similar issues). i promise i will try the advice given in this thread and confirm/deny the results eventually!

The Debian installer is pretty straightforward. At the end of the day, Proxmox VE is nothing other than Debian with lots of scripting plus a so-called custom kernel (read: derived from Ubuntu's) so that it supports ZFS and LXC well (Ubuntu supports them, so they build good kernels for that).
agreed. i will circle back to this (hopefully) in the near future.

How do you plan to unlock the boot drive? Passphrase? Key on USB? SSH?
i currently have easy access to the server, although i plan to do most of the management through ilo (server level) and browser (os level). see above for future plans. i am open to advice on the means you think is best. i would also like to couple it with a phishing resistant MFA.

I did not want to say this. ;)
please, please, please let it be this easy!

That, and also a hack in the PVE installer is to (if I remember correctly) select the advanced options on the filesystem choice list and give the OS something like <10GB only. This prevents it from creating any extra partitions (no LVM-thin, nothing) for the guests. It is then completely up to you how you partition the rest. I think it's a good starting point for benchmarking without reinstalling everything all the time.
say this again, like i'm five years old please.

Yeah, but then the "full" is just full virtual-disk encryption, which I believe makes the encryption itself rather virtual as well. I mean, do not get me wrong, he could still keep the keys off the drive (e.g. on USB), but generally it is not a habit I would keep in production.
production best practices can be thrown out the window for now. looking for proof of concept that i can build on.


looking forward to the next round of replies! my hope is to take another crack at proxmox sometime tonight. thanks again!
 
that makes me feel a little more confident. this is probably a stupid question, but can i create the logical volumes after the proxmox install, or do i have to do this before the install?

Two possibilities: you can install Proxmox on a single drive (not part of any RAID) and afterwards create volumes in the BIOS for data, or you can have one or more volumes prepared in the BIOS before the installation begins and install the Proxmox OS onto a logical volume.
 
the plan is for full-disk and also guest os level encryption. probably overkill, but it makes me feel better. :)

If you have already made up your mind about LUKS, you should basically be installing Debian now, not using the Proxmox VE installer. But realistically, you will probably end up reinstalling the whole thing all over again anyway (this is possible without sacrificing the guests, if it is partitioned well), so you can skip this for now.

that makes me feel a little more confident. this is probably a stupid question, but can i create the logical volumes after the proxmox install, or do i have to do this before the install?

My take on this is that if I have all the drives installed and know how I want to use them, I set up the HW RAID first - at the very least it avoids confusion later on; the OS will then see everything the same way from the first boot onwards.

great! so like i said above, this makes me feel much more confident. it sounds like i can just do an install of proxmox on a single drive, then head back to the bios and create logical drives afterwards using the hw raid controller?

I am a bit unsure what you mean here, because originally you started with 4 drives in a 2+2 mirror. If you want to keep that setup in the HW RAID, then obviously you want to have it set up first. If you want to install the OS on a single drive and then create some RAID with the remaining 3, that will work as well, obviously.

as my knowledge and these projects grow i will be looking to add more nodes (or is it clusters - another server hosting proxmox) to my setup to test HA and other corporate situations. and as that grows, i am looking to build a few setups geographically at friends/family members' houses (and business locations, if the idea takes off) for redundancy and availability.

So that the thread covers one thing at a time, I will just make a few remarks here: with a single server (node) you really do not make use of HA all that much. If you want to test Proxmox VE clustering with a single machine like this, for learning, you would have to virtualise the cluster. I would virtualise it on something other than PVE, just to save myself confusion. If you meant HA for the drives, that's fine. The other thing is, you cannot really use a Proxmox VE cluster for redundancy where the nodes are not all in one place:
https://forum.proxmox.com/threads/high-latency-clusters.141098/

my problem is i am a perfectionist and i like to have everything planned out before getting started.

I am pruning the quote for brevity, but I am quite sure you WILL change your plans once you start testing, so this approach is not as productive as you think. Keep backups from the very beginning, and factor them into your capacity planning as well.

i noticed the mention of grub. i'm certain that's what my server was using because it said "welcome to grub," or something to that effect. i wonder if that will happen if i try a normal install? i am very curious to see.

The PVE installer uses GRUB in all cases except a UEFI install with ZFS and Secure Boot disabled (where it uses systemd-boot) - that's the simplest way I can put it. :) Debian uses GRUB no matter what: BTRFS/LVM & LUKS or plain XFS/ext4 on GPT, the bootloader from the Debian installer is GRUB.
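(A quick, read-only way to check what you actually ended up with:)

    [ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "legacy BIOS boot"
    efibootmgr -v              # boot entries, if UEFI
    proxmox-boot-tool status   # on PVE, shows whether the ESPs are set up for grub or systemd-boot (where the tool is in use)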

i plan to do most of the management through ilo (server level) and browser (os level). see above for future plans. i am open to advice on the means you think is best. i would also like to couple it with a phishing resistant MFA.

For the future, you may want to look up Tang & Clevis in relation to LUKS unlocking.
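The short version, assuming a reachable Tang server (the URL and device below are placeholders):

    apt install clevis clevis-luks clevis-initramfs
    clevis luks bind -d /dev/sda3 tang '{"url": "http://tang.example.lan"}'
    update-initramfs -u        # so the slot can be unlocked automatically at boot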

say this again, like i'm five years old please.

In the PVE installer, there are very limited options for partitioning; basically you just choose a filesystem. But there is an advanced tick there to specify extra parameters. It does not allow you to set all that much, but see "Advanced LVM configuration options" here:
https://pve.proxmox.com/wiki/Installation#installation_installer

You can set hdsize to a low value to keep most of the disk unpartitioned.

And you will basically not get anything beyond the basic partitions for the system. You can then do whatever you want with the extra space after the installation - even make a ZFS pool on it.

There's a note: "In case of LVM thin, the data pool will only be created if datasize is bigger than 4GB."

So alternatively, if you set the minfree parameter to something large, that achieves the same effect, because:

datasize = hdsize - rootsize - swapsize - minfree
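To make that concrete with made-up numbers: hdsize=480, rootsize=10, swapsize=8 and minfree=458 gives datasize = 480 - 10 - 8 - 458 = 4 GB, which is not bigger than 4GB, so no LVM-thin data pool gets created and the remaining space stays untouched for you.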

production best practices can be thrown out the window for now. looking forward to the next round of replies! my hope is to take another crack at proxmox sometime tonight. thanks again!

If I skipped anything important, let me know. Otherwise separate topics might be even better for separate threads (e.g. LUKS).

Good luck! :)
 
so this is anticlimactic, but i am in, and all it was is a setting i was too afraid to change on the server: i had to make sure the B140i controller was set to AHCI mode rather than RAID mode. which i was certain all along had to do with hw vs sw raid and what you two have been saying this whole time as well. i think i am going to toy around a bit and then in a few days blow everything up and reinstall using debian - i really want to do full-disk encryption. thank you both for all of the tips and ideas, there is so much to go explore now. best to you both! cheers!
 
When it comes to BTRFS as a choice in PVE for guests, I actually prefer QCOW2 for VMs (this is not a PVE-specific preference, it applies to anything QEMU/KVM, really). If you use QCOW2, it is counterproductive to put it on anything copy-on-write (BTRFS or ZFS), since you can use its own snapshots (which are superior). BTRFS does not have an equivalent of ZVOLs, and for me ZVOLs on ZFS are buggy anyway, so I am back to QCOW2 (or RAW) on an ordinary dataset.

When it comes to BTRFS as a choice for LXCs, it works really well, but I do not have a benchmark (that would mean comparing an ordinary ZFS dataset with an ordinary BTRFS subvolume - a comparison that actually makes sense).

I just wanted to drop in an additional piece on QCOW2 snapshots here:
https://kashyapc.fedorapeople.org/virt/lc-2012/snapshots-handout.html

Something you will not have with any regular copy-on-write filesystem.
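For completeness, the simplest flavour of those (internal QCOW2 snapshots, with the guest shut down) can be driven with plain qemu-img - the file and snapshot names below are made up:

    qemu-img snapshot -c clean-install vm-100-disk-0.qcow2   # create
    qemu-img snapshot -l vm-100-disk-0.qcow2                 # list
    qemu-img snapshot -a clean-install vm-100-disk-0.qcow2   # revert ("apply")
    qemu-img snapshot -d clean-install vm-100-disk-0.qcow2   # delete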
 