[SOLVED] Bootloader installation fails when not using the whole disk for rpool

paul477

Hi.

I reproduced the following problem with the installer ISOs 7.2-1, 8.1-1 and 8.1-2: when I install on a Dell R520 with two 120GB SATA SSDs connected to the onboard connectors, configure the installer to use these two disks as ZFS RAID-1, and then choose not to use the whole disk for the rpool but less (say 80GB), the bootloader installation fails with
Code:
bootloader setup errors: -unable to init ESP and install proxmox-boot loader on /dev/sdb2
When I leave the size of the rpool at its default, PVE installs with no problem.

Anybody else who saw this?

Best regards

Gerold
 
Hi,

and then choose not to use the whole disk for the rpool, but less (say 80GB), then the bootloader installation fails with
Just to clarify: You've done this by setting the hdsize parameter under Advanced options to (in your case) 80?
Using the GUI or TUI installer? Or does it happen with both?

I have just tried to reproduce this using the latest ISO and could not, at least this way.
 
Hi,

Did I mention UEFI boot mode?
do you have Secure Boot enabled by any chance?

Either way, I did not manage to reproduce it even with this.
So I guess it's something "funny" about the disk controller maybe? Are the disks really directly attached? Or is there still some sort of RAID controller/HBA inbetween that you can configure in the firmware?

Would you maybe mind trying the following:
* Boot the latest 8.1-2 ISO, selecting Advanced Options > Install Proxmox VE (Graphical, Debug mode) in the bootloader
* Press Ctrl + D when you are dropped into the first interactive shell
* On the second shell, run lsblk to identify both disks and then wipefs -a /dev/<disk> on both to clear them.
* Press Ctrl + D again to start the installation with a smaller ZFS hdsize

After the installation, run less +G /tmp/install.log and scroll up a bit and see if there are any errors (or even share the whole file by copying it to another USB drive, for example).
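For anyone wanting to rehearse the wipe step without risking data: wipefs accepts regular files as well as block devices, so the behavior can be tried on a scratch image first. A minimal sketch (the image path is just an example; on the real machine the targets would be the actual disks, e.g. /dev/sda and /dev/sdb):

```shell
# Create a 4 MiB scratch image standing in for a disk (no real disk is touched).
IMG=/tmp/fakedisk.img
truncate -s 4M "$IMG"
# Plant an ext filesystem magic (0xEF53 at offset 1080) so wipefs has a signature to find.
printf '\x53\xef' | dd of="$IMG" bs=1 seek=1080 conv=notrunc status=none
# Without options, wipefs only LISTS detected signatures - nothing is erased yet.
wipefs "$IMG"
# With -a, all detected signatures are actually wiped.
wipefs -a "$IMG"
```

Running wipefs on the image once more afterwards should report no signatures at all.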
 
Are the disks really directly attached? Or is there still some sort of RAID controller/HBA inbetween that you can configure in the firmware?
Yes, the boot disks are directly connected to mainboard connectors and are accessed using the ahci driver.

I can do further tests once the machine has finished its current task.
Perhaps tomorrow in the afternoon.
 
do you have Secure Boot enabled by any chance?
As TPM is disabled in the BIOS, I think there is no Secure Boot.
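As a side note, TPM and Secure Boot are separate firmware settings, so disabling TPM does not by itself disable Secure Boot. One way to check from a running Linux system is sketched below (assuming mokutil, which ships with the shim tooling and may not be installed):

```shell
# Check whether the system booted via UEFI at all, then query the Secure Boot state.
if [ -d /sys/firmware/efi ]; then
    # mokutil prints e.g. "SecureBoot enabled" or "SecureBoot disabled".
    state=$(mokutil --sb-state 2>/dev/null || echo "unknown (mokutil not installed)")
else
    state="not applicable (legacy BIOS boot, no EFI firmware interface)"
fi
echo "Secure Boot: $state"
```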

I followed your very good instructions and got a non-maximum rpool.

I'm pretty sure I had wiped the whole disk before the installation, i.e. cleared all partitions with the wipefs command as you instructed, but anyway.

Thanks for the prompt reaction, excellent support.

Best regards

Gerold
 
I followed your very good instructions and got a non-maximum rpool.
Great to hear that did it!
Please just mark the thread as SOLVED by editing the initial post; there should be a dropdown near the title. This helps others find this thread more easily in the future. :)

I'm pretty sure I had wiped the whole disk before the installation, i.e. cleared all partions as you instructed with the wipefs command, but anyway.
Maybe you just ran wipefs without its -a flag? In that case it only lists the signatures it finds and does not erase anything.
Anyway, it is actually a good hint at what went wrong and what solves such problems!
 
Anyway, it is actually a good hint at what went wrong and what solves such problems!
Wouldn't it be an option to have this wipefs -a included in the setup process?
You create a new partition table in either case, so why not wipe all possibly existing former partitions and their traces, just in case?
Or would wipefs generate an error/warning on empty disks?
 
Wouldn't it be an option to have this wipefs -a included in the setup process?
You create a new partition table in either case, so why not wipe all possibly existing former partitions and their traces, just in case?
Or would wipefs generate an error/warning on empty disks?
We actually do that in the installer already, but it sometimes does not remove all traces of previous ZFS pools; there have been some reports of similar cases already, and I also managed to reproduce this.
I'm already working on improving the ZFS installation flow a bit, as there are also some other, rather rough edges. :)
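For readers hitting the same issue: ZFS writes four copies of its vdev label, two in the first 512 KiB of the device and two in the last 512 KiB, which is presumably how stale pool metadata can survive a front-of-disk wipe or a reinstall with a smaller hdsize. A belt-and-braces clear would also zero the device tail; this is sketched here against a scratch image rather than a real disk (on hardware, the target would be the actual /dev/sdX and the commands would be destructive):

```shell
# Scratch image standing in for a disk; no real disk is touched.
IMG=/tmp/zfs-tail-demo.img
truncate -s 8M "$IMG"
# Plant marker bytes at the very end, standing in for stale ZFS backup labels.
printf 'STALE-ZFS-LABEL' | dd of="$IMG" bs=1 seek=$((8*1024*1024 - 15)) conv=notrunc status=none
# Zero the last 512 KiB, where ZFS keeps its two backup labels (L2 and L3).
SIZE=$(stat -c %s "$IMG")
dd if=/dev/zero of="$IMG" bs=512K seek=$(( SIZE / 524288 - 1 )) count=1 conv=notrunc status=none
# If the ZFS tools are installed, `zpool labelclear -f <device>` clears the labels properly.
```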
 