Proxmox VE 5.0 beta1 fails to boot from ZFS-RAID1 with UEFI-only

Hi,

I have an Intel NUC6i7KYK with the following components:
- Intel Core i7-6770HQ
- 2x 16GB Crucial DDR4-2133 RAM
- 2x Samsung SSD 960 PRO 1TB M.2 NVMe SSD
- 2x Lexar Jumpdrive S45 USB3.0 64GB USB sticks

I want to try to use the USB sticks with RAID1 for the OS (and reduce writes as much as I can, using tmpfs or similar) and the SSDs with RAID1 for the VMs.
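
For the write reduction I have roughly this in mind for /etc/fstab (just a sketch; I still need to check which directories are actually safe to move to tmpfs on PVE):

    tmpfs  /tmp      tmpfs  defaults,noatime,mode=1777,size=512m  0  0
    tmpfs  /var/tmp  tmpfs  defaults,noatime,mode=1777,size=256m  0  0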

I configured the BIOS to UEFI-only boot mode (no legacy boot) and installed Proxmox VE 5.0 beta1 successfully with ZFS-RAID1 (mdraid not supported by the PVE installer) on the two USB sticks.

Unfortunately, the system does not boot afterwards because the PVE installer does not create any UEFI boot entries. Looking at both EFI system partitions /dev/sda1 and /dev/sdb1 from SysRescCd, I could not mount them either; it seems there was no filesystem on them.
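
What I did from SysRescCd was roughly the following (reconstructed from memory, so the exact commands and output may differ):

    blkid /dev/sda1 /dev/sdb1      # no TYPE= reported for either partition
    mount /dev/sda1 /mnt           # fails: wrong fs type / no filesystem found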

When I enable legacy BIOS boot mode in addition to UEFI before the installation, the system can be booted afterwards (with rootdelay=10 to wait for the USB sticks).
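
In case it is useful for anyone else: in legacy mode I just added the delay to the kernel command line, roughly like this (10 is simply the value that worked for my sticks):

    # /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"

    # afterwards:
    update-grub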

Is it a bug in the PVE installer that booting from ZFS-RAID1 in UEFI mode does not work?

Cheers,
Wolfram
 
ZFS + UEFI is not supported by the PVE installer. there is no sane way to sync the ESPs in a mirrored/raidz setup (grub knows how to update multiple legacy / BIOS boot partitions, but not multiple ESPs, and ESPs require vfat as the file system).
 
@fabian thanks for your quick response!

Regarding your point to sync ESPs:

Looking at proxinstall I wonder whether it wouldn't just be possible to run grub-install once for each ESP.
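
Roughly what I have in mind, as a sketch only (device names, ESP partition numbers and the mount point are just examples):

    for esp in /dev/sda2 /dev/sdb2; do
        mount "$esp" /boot/efi
        grub-install --target=x86_64-efi --efi-directory=/boot/efi
        umount /boot/efi
    done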

See also:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1229738
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1229738/comments/12
https://launchpadlibrarian.net/151342031/grub-install.diff

Do you think that would work?

I'm still wondering why the installer doesn't create even a single ESP filesystem (it just creates the partitions), or at least error out...
 
> Looking at proxinstall I wonder whether it wouldn't just be possible to run grub-install once for each ESP.

that would not be a problem at all, but it is beside the point.


no, that is an entirely different problem. booting from a ZFS root using EFI is possible without problems in recent grub. the problem is that grub has no notion of multiple ESPs, and ESPs need to be vfat-formatted (so they cannot be on ZFS or similar). because installing on a mirrored/raidz ZFS pool implies a certain level of replication and robustness, we don't want to install onto only a single ESP (which would then become a single point of failure).

> I'm still wondering why the installer doesn't create even a single ESP filesystem (it just creates the partitions), or at least error out...

if you install with ZFS, the installer does not create an ESP partition. it does create a BIOS boot partition (which is something else entirely) and installs the non-EFI version of Grub there. this works fine, even for multiple devices (Grub will just install itself on all the devices it finds via grub-probe, and that works correctly for ZFS, and AFAIK also for LVM and MD raids).
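
in other words, for the BIOS case everything boils down to the equivalent of the following for each disk, with nothing to mount and nothing to keep in sync (device names are just an example):

    grub-install --target=i386-pc /dev/sda
    grub-install --target=i386-pc /dev/sdb

the core image ends up in the BIOS boot partition of each disk, so there is no file system involved at all.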

using the same mechanism for ESPs would not be a good idea IMHO, even if it were already implemented, because automatically finding, mounting, writing to, and unmounting vfat partitions is not very robust. also, the ESP is not only used by Grub - even if Grub correctly re-installed itself on all the ESPs upon upgrades, what about other software?

I think I will add some detection logic and error out early to prevent this confusion.
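
the check itself would be trivial - in shell terms roughly the following (the installer is written in Perl, so this is only the idea; $target_fs is just a placeholder for whatever holds the selected filesystem):

    # refuse early when booted via UEFI and ZFS is the selected target
    if [ -d /sys/firmware/efi ] && echo "$target_fs" | grep -qi '^zfs'; then
        echo "ZFS is currently not supported when booting in UEFI mode" >&2
        exit 1
    fi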
 
Small footnote on the thread, with one possible approach: (a) don't use USB storage here, and (b) maybe just do a minimal stock Debian install on Linux software RAID with ext4 for your 'proxmox' mirror slice (maybe only ~100 GB or less in total). Then, once Proxmox is alive and booting from that nice, plain, reliable software-RAID/ext4 setup, you can add the ZFS config and use the remaining ~900 GB of unallocated blocks on the SSD drives, if you really, truly want a ZFS-backed storage pool for the VMs.
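
Roughly, once the base system is up, creating the ZFS mirror on the leftover SSD space is a one-liner ('tank' and the partition numbers are just examples for whatever you leave unallocated on the two NVMe drives):

    zpool create -o ashift=12 tank mirror /dev/nvme0n1p4 /dev/nvme1n1p4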

T
 
