Manual setup for zfs RAIDZ-1 root filesystem on different size disks (proxmox-boot-tool not grub)

Replicant · Member · May 31, 2021
I wanted to have a redundant ZFS rpool so that if either drive fails the system will still boot. However, my boot volumes are not identically sized, and apparently the installer does not allow this. There exists an older thread where a user describes installing Proxmox's ZFS rpool on differently sized disks in an identical setup.

In short: 1) install on the smaller disk, 2) replicate the partition table to the larger disk and regenerate the UUIDs, 3) install grub on the larger disk, 4) attach the larger disk's ZFS partition to the rpool.
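For reference, step 2 is typically done with sgdisk. A minimal sketch, assuming the small disk is /dev/sdX and the large disk is /dev/sdY (both placeholder names — verify your actual devices with lsblk first, since this overwrites the target's partition table):

```shell
# Replicate the GPT from the healthy (small) disk to the new (large) disk.
# Note the argument order: source disk first, target disk after -R.
sgdisk /dev/sdX -R /dev/sdY

# Give the copied partitions new random GUIDs so the two disks don't clash.
sgdisk -G /dev/sdY

# Inspect the resulting partition table on the target.
sgdisk -p /dev/sdY
```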

I dutifully followed these steps until I hit an error when installing grub to the large disk, since Proxmox doesn't use grub anymore:

"grub-install is disabled because this system is booted via proxmox-boot-tool"

My understanding is that the currently active ESP is the one on the small disk, and I need to sync it to the large disk's ESP with proxmox-boot-tool, in a manner analogous to grub-install.

According to this page, the PVE installer creates 3 partitions:

- a 1 MB BIOS Boot Partition
- a 512 MB EFI System Partition (ESP)
- a third partition (remaining space)

I verified with "proxmox-boot-tool status" that there was indeed only one ESP.

Following the instructions in "Setting up a new partition for use as synced ESP", I used the following commands:

proxmox-boot-tool format /dev/[large disk part2]
proxmox-boot-tool init /dev/[large disk part2]
proxmox-boot-tool refresh

I did not record the output of the commands, but they were all successful. "proxmox-boot-tool status" then showed two ESPs. With the 2nd partition done, I attached the 3rd ZFS partition:
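A quick way to double-check that the right partition was formatted and registered (device name is a placeholder):

```shell
# The 512 MB vfat partition is the ESP; confirm the layout before and
# after formatting (PARTLABEL/FSTYPE columns require a recent util-linux).
lsblk -o NAME,SIZE,FSTYPE,PARTLABEL /dev/sdY

# proxmox-boot-tool status lists the UUIDs of every ESP it keeps in sync;
# after the init, both partitions should appear here.
proxmox-boot-tool status
```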

zpool attach rpool /dev/[small disk part3] /dev/[large disk part3]

This was successful: zpool status -v showed a newly created mirror-0 vdev. I booted with either drive removed; each time the system booted and showed the rpool in a degraded state. Reinserting both drives and booting restored the rpool to a healthy state.
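The health checks described above can be sketched as follows (pool name matches the default rpool; no changes are made):

```shell
# Show the vdev layout; after the attach, part3 of both disks should sit
# under a mirror-0 vdev, and any resilver progress is reported here.
zpool status -v rpool

# Print just the overall pool health; expect ONLINE when both disks are
# present and DEGRADED when booted with one disk removed.
zpool list -H -o health rpool
```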

I am not really well versed with linux or proxmox. I just wanted to create a redundant boot volume and it seems I succeeded. The evidence to show this is that booting with either drive removed to simulate hardware failure indeed had no effect on the bootup process. The only alternative was that the system would fail to boot with one drive removed, and that was not the case.

Question 1: is this the correct way to do this?

Question 2: do I need to mirror the 1st BIOS Boot partition in any way?

Here's my thought process: in the steps I followed, I replicated the partition table from the smaller disk to the larger one. My understanding is that this creates a partition table matching the source disk, but there are no actual formatted filesystems in the space. proxmox-boot-tool formatted the 2nd partition as an ESP, and ZFS mirrored the 3rd partition. I never did anything to mirror the 1st partition. I tried mounting both of the 1 MB partitions to see what they contained, but the mount command wouldn't work on them.
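For what it's worth, the 1 MB BIOS boot partition carries no filesystem at all (on legacy-boot setups it holds raw boot loader data), which is why mount refuses it. Its type can still be inspected (device name is a placeholder):

```shell
# Show partition types; the 1 MB partition reports "BIOS boot" and an
# empty FSTYPE column, confirming there is nothing mountable on it.
lsblk -o NAME,SIZE,FSTYPE,PARTTYPENAME /dev/sdX
```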

In my UEFI/BIOS there are 4 boot entries:

Linux Boot Manager (small disk)
Linux Boot Manager (large disk)
UEFI OS (small disk)
UEFI OS (large disk)

I assume that the "UEFI OS" entries are the UEFI ESPs, and the "Linux Boot Manager" entries are the "BIOS boot partitions".

Since all four entries can boot into proxmox, this indicates that the "BIOS boot partition" is working on both disks and I don't need to do anything, right?
 
Question 1: is this the correct way to do this?
Yeah, I would not have done it differently, and you tested booting with only either one of the disks available, and it worked.

Question 2: do I need to mirror the 1st BIOS Boot partition in any way?
You can ignore it. Only the EFI/ESP partition is needed.

And yes, your thought process is correct. proxmox-boot-tool formats and initializes the EFI partition, meaning that it contains the boot loader and the current kernel images.
I assume that the "UEFI OS" entries are the UEFI ESPs, and the "Linux Boot Manager" entries are the "BIOS boot partitions"

Since all four entries can boot into proxmox, this indicates that the "BIOS boot partition" is working on both disks and I don't need to do anything, right?
This is more likely due to the fact that the bootloader exists in two places on each ESP: first in the default location where the motherboard will look if nothing is explicitly configured (EFI/BOOT/BOOTX64.EFI), and second the same systemd-boot loader located at its own path (EFI/SYSTEMD/SYSTEMD-BOOTX64.EFI).

If you run efibootmgr -v you can see these entries, for example on my machine:
Code:
$ efibootmgr -v
BootCurrent: 0000
Timeout: 1 seconds
BootOrder: 0000,0001,0003,0002
Boot0000* Linux Boot Manager    HD(2,GPT,749e3139-ceb5-4205-bc59-1bf128a81db5,0x800,0x100000)/File(\EFI\SYSTEMD\SYSTEMD-BOOTX64.EFI)
Boot0001* UEFI OS    HD(2,GPT,749e3139-ceb5-4205-bc59-1bf128a81db5,0x800,0x100000)/File(\EFI\BOOT\BOOTX64.EFI)..BO
[..]

If you mount an EFI partition, you can see these files. Just make sure to unmount it again, as otherwise proxmox-boot-tool will have trouble the next time it runs.
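Inspecting an ESP by hand might look like this (the partition path and mount point are placeholders):

```shell
# Mount the ESP read-only to look around.
mkdir -p /mnt/esp
mount -o ro /dev/sdX2 /mnt/esp

# List the boot loader files, e.g. EFI/BOOT/BOOTX64.EFI and
# EFI/SYSTEMD/SYSTEMD-BOOTX64.EFI mentioned above.
ls -R /mnt/esp/EFI

# Unmount promptly so proxmox-boot-tool can mount it itself later.
umount /mnt/esp
```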
Hello Replicant,
I would like to say thank you for this guide. Worked like a charm. Also tested rebooting without one of the disks attached and the system booted correctly.

However, I made one small change: since Proxmox now defines its pools with the by-id method, I did the same. My zpool attach command therefore looked like this:
zpool attach rpool /dev/disk/by-id/[small disk part3] /dev/disk/by-id/[large disk part3]

One can look up the by-id names of the disks using the command
ls -l /dev/disk/by-id/ | grep sdX
where you substitute your device letter for X. One can use lsblk to list the disks, or check in the GUI under Node/Disks.
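The lookup above can also be done in the other direction by resolving the symlinks; a sketch (sdb is a placeholder device name):

```shell
# Print the by-id symlinks that point at a given kernel device name.
DISK=sdb   # placeholder; pick yours from lsblk
ls -l /dev/disk/by-id/ | grep "${DISK}"

# Alternatively, resolve each by-id symlink and keep the matches.
for link in /dev/disk/by-id/*; do
  [ "$(readlink -f "$link")" = "/dev/${DISK}" ] && echo "$link"
done
```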
 
As of Proxmox 8.1.4, the installer seems to allow installation of Proxmox on a ZFS pool consisting of two unequally-sized disks. During installation, I simply selected the lower-sized disk _first_ and proceeded to install ZFS in RAID1 configuration. Here is how things looked immediately post-installation.

[screenshot: disk layout immediately after installation]
 
