VE 4.x + ZFS + ProLiant DL380 G9 = help!

NSomers

New Member
Mar 30, 2017
Hey guys!

First post here, so go easy on me :D I'm new to Proxmox and Linux administration in general so maybe this is simple, but from my research, I don't seem to be the only one struggling with this. Here's the gist of my situation:

I have been tasked with setting up a new Proxmox VE 4.x server. The goal is to install it to a ZFS RAID10 pool (8x 1.2TB disks) behind a P440ar controller. The controller is set to HBA mode and the BIOS is set to UEFI mode (which Proxmox claims is supported in the latest release).

The behavior is as follows:

With UEFI, I can boot the VE ISO and install it to a newly created ZFS RAID10 volume across the 8 disks mentioned above. After installation, however, I am presented with a number of what I assume are GRUB errors: the server thinks there is no OS to boot and ends up in a boot loop, as if no disks were installed.

With Legacy Boot enabled, I get the same behavior as above. However, if I remove all of the disks except one and install VE to that, it boots straight into the VE console without issue. I figured at that point I could simply shut down the machine, add the remaining disks, and expand the ZFS pool, but adding the disks back to the machine returns us to the boot loop problem.
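
For reference, this is roughly the pool expansion I had in mind afterwards (purely a sketch - I'm assuming the installer-created pool is called rpool, and the device names are placeholders):
Code:
# check the current layout and the exact name of the existing vdev
zpool status rpool

# turn the existing single-disk vdev into a mirror
zpool attach rpool <existing-vdev> /dev/sdb

# add the remaining disks as further mirror pairs
zpool add rpool mirror /dev/sdc /dev/sdd
zpool add rpool mirror /dev/sde /dev/sdf
zpool add rpool mirror /dev/sdg /dev/sdh

I realise this would only grow the pool - Grub would presumably still only be installed on the original disk, which may well be part of my problem.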

Is this some kind of limitation in the Proxmox installer? If so, I assume a potential solution would be to install Proxmox on top of a Debian install? Would that involve using a Debian live CD to create the ZFS pool, installing Debian onto it, and then installing Proxmox over that? Or am I dealing with some kind of hardware incompatibility / misconfiguration? My lack of experience doesn't help on that front, but I'm enjoying the learning opportunity so far!
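
In case the "Debian first" route is the way to go, the steps I found (going by the "Install Proxmox VE on Debian Jessie" wiki article - please correct me if the repo line or packages are off) look roughly like this once Debian itself is running from the ZFS root:
Code:
# add the Proxmox VE 4.x no-subscription repository for Jessie
# (repo line and key URL taken from the wiki article - double-check there)
echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list

# add the repository key
wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -

# update and install Proxmox VE on top of Debian
apt-get update && apt-get dist-upgrade
apt-get install proxmox-ve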

So, kinda tired of scratching my head on this one, figured I'd reach out to the local gurus :D

Thanks all
 
ZFS as root and UEFI is a combination which the PVE installer (currently) does not support, because Grub cannot handle redundant EFI partitions.

ZFS as root with "legacy" BIOS boot should work - you only need to take care that your BIOS hands Grub at least the subset of disks needed to read your root pool. Usually this means marking those disks as boot devices, but with some mainboards / RAID controllers that is not possible for all disks.

For example, if you have an 8-disk RAID10, your pool looks like this:
Code:
pool
- mirror
-- disk 1
-- disk 2
- mirror
-- disk 3
-- disk 4
- mirror
-- disk 5
-- disk 6
- mirror
-- disk 7
-- disk 8

On disks 1 and 2, a "BIOS boot partition" is created in addition to the ZFS partition, and Grub is installed there. For boot to work, your BIOS must start Grub from either disk 1 or disk 2. But you also need to mark at least one half of each of the other mirror pairs as bootable, and Grub needs to "see" them, so that Grub can load the rest of its own files, the kernel and the initrd from the root pool.

To troubleshoot this situation further, you would need to post the error messages you get, and if you end up in a Grub rescue shell, the output of "ls" and "set" would also be helpful.
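
If it turns out that Grub only ended up on the first two disks, you can also put it on the other bootable halves by hand. A rough sketch (device names are just examples, adjust to your system; this assumes those disks already carry the small BIOS boot partition the installer creates):
Code:
# check which disks have a BIOS boot partition (GPT type EF02) next to the ZFS partition
sgdisk -p /dev/sda
sgdisk -p /dev/sdc

# install Grub onto the additional disks your BIOS can boot from
grub-install /dev/sdc
grub-install /dev/sde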
 
To troubleshoot this situation further, you would need to post the error messages you get, and if you end up in a Grub rescue shell, the output of "ls" and "set" would also be helpful.
Very helpful, thanks. I have some time to play with it more tomorrow and will return with more information. Thanks again ;)