[SOLVED] PVE ZFS mirror installation without 512MByte Partition - how to convert to UEFI boot?

Rainerle

Renowned Member
Jan 29, 2019
Hi,
I have an older installation of a 3-node Proxmox Ceph cluster - probably 5.3 or older. The OS disks look like this:
Bash:
root@proxmox01:~# zpool status
  pool: rpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 00:03:08 with 0 errors on Sun Jun 13 00:27:09 2021
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0

errors: No known data errors
root@proxmox01:~# fdisk /dev/sda

Welcome to fdisk (util-linux 2.33.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): p
Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors
Disk model: SSDSC2KB240G7L
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: F72122C0-075C-4CC9-99FC-614D74CF3CB4

Device         Start       End   Sectors   Size Type
/dev/sda1         34      2047      2014  1007K BIOS boot
/dev/sda2       2048 468845709 468843662 223.6G Solaris /usr & Apple ZFS
/dev/sda9  468845710 468862094     16385     8M Solaris reserved 1

Partition 1 does not start on physical sector boundary.
Partition 9 does not start on physical sector boundary.

Command (m for help):
root@proxmox01:~# fdisk /dev/sdb

Welcome to fdisk (util-linux 2.33.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): p
Disk /dev/sdb: 223.6 GiB, 240057409536 bytes, 468862128 sectors
Disk model: SSDSC2KB240G7L
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 3723FDAA-70D3-4879-9FC9-E6388E8B538B

Device         Start       End   Sectors   Size Type
/dev/sdb1         34      2047      2014  1007K BIOS boot
/dev/sdb2       2048 468845709 468843662 223.6G Solaris /usr & Apple ZFS
/dev/sdb9  468845710 468862094     16385     8M Solaris reserved 1

Partition 1 does not start on physical sector boundary.
Partition 9 does not start on physical sector boundary.

Command (m for help):

So there is no 512M partition I could use for the EFI boot partition.

I would like to switch to a UEFI boot, but how do I cut out that 512M UEFI partition?
 
In order from cleanest and simplest to messier and more involved:
* simply reinstall the machine from a new PVE ISO and restore your VMs from backup
* add 2 fresh small disks and use those with proxmox-boot-tool (they are rarely written to - only during a kernel upgrade - so even a USB drive might do, but make sure to have a working backup of it)
* detach one disk from the pool, wipe its partitioning, create a new layout with 512 MB of space for the ESP, zfs send the pool contents over, install the boot loader on it, reboot into the new disk, then attach the second disk to the pool to get redundancy back (this one I would really only recommend if you have enough backups, experience, and patience, and don't mind losing time and maybe data)

I hope this helps!
 
The VMs all reside on Ceph RBDs shared between three nodes. I need to change all three nodes. So from my point of view I think splitting the ZFS mirror, repartition, zfs send the contents and reboot should be the easiest.
 
So I was able to change the disk layout online by doing this:
Bash:
zpool status
# !!! Be careful with device names and partition numbers!!!!
zpool detach rpool sdb2
cfdisk /dev/sdb # Keep only partition 1 (BIOS), create partition 2 with EFI and partition 3 with ZFS
fdisk -l /dev/sdb
# Should look like this:
# Device       Start       End   Sectors   Size Type
# /dev/sdb1       34      2047      2014  1007K BIOS boot
# /dev/sdb2     2048   1050623   1048576   512M EFI System
# /dev/sdb3  1050624 468862094 467811471 223.1G Solaris /usr & Apple ZFS
zpool attach rpool sda2 sdb3 # Reattach and wait until resilvering completes
proxmox-boot-tool format /dev/sdb2 --force
proxmox-boot-tool init /dev/sdb2
# And the first disk
zpool detach rpool sda2
cfdisk /dev/sda # Same as above
fdisk -l /dev/sda
zpool attach rpool sdb3 sda2 # Reattach and wait until resilvering completes
proxmox-boot-tool format /dev/sda2 --force
proxmox-boot-tool init /dev/sda2
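The new layout can be sanity-checked with sector arithmetic (512-byte sectors, values taken from the fdisk output above). Note that the new ZFS partition also absorbs the old 8M Solaris-reserved partition 9, which is why it ends at sector 468862094:

```shell
# Sanity-check the new layout (512-byte sectors, values from fdisk above)
ESP_SECTORS=$((512 * 1024 * 1024 / 512))   # 512 MiB ESP -> 1048576 sectors
NEW_ZFS_START=$((2048 + ESP_SECTORS))      # ESP starts at 2048, ZFS follows
NEW_ZFS_END=468862094                      # old end of the Solaris reserved area
NEW_ZFS_SECTORS=$((NEW_ZFS_END - NEW_ZFS_START + 1))
echo "ESP:  start=2048 sectors=$ESP_SECTORS"              # 1048576
echo "ZFS:  start=$NEW_ZFS_START sectors=$NEW_ZFS_SECTORS" # 1050624 / 467811471
```

The new ZFS partition is about 512 MB smaller than the old one; ZFS tolerated that here, but it is worth keeping in mind before detaching a mirror half.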

So this worked out fine - but it still only boots with the Legacy BIOS enabled...

Code:
root@proxmox01:~# dpkg -l | grep grub
ii  grub-common                          2.02+dfsg1-18-pve1                      amd64        GRand Unified Bootloader (common files)
ii  grub-efi-amd64-bin                   2.02+dfsg1-18-pve1                      amd64        GRand Unified Bootloader, version 2 (EFI-AMD64 modules)
ii  grub-efi-ia32-bin                    2.02+dfsg1-18-pve1                      amd64        GRand Unified Bootloader, version 2 (EFI-IA32 modules)
ii  grub-pc                              2.02+dfsg1-18-pve1                      amd64        GRand Unified Bootloader, version 2 (PC/BIOS version)
ii  grub-pc-bin                          2.02+dfsg1-18-pve1                      amd64        GRand Unified Bootloader, version 2 (PC/BIOS modules)
ii  grub2-common                         2.02+dfsg1-18-pve1                      amd64        GRand Unified Bootloader (common files for version 2)
root@proxmox01:~#

How do I switch to systemd-boot?
If I disable Legacy BIOS it won't find anything to boot from, and there is nothing bootable in partition 2 yet...
 
Ok, took some time to find out...
proxmox-boot-tool does not prepare the systemd-boot configuration if /sys/firmware/efi does not exist - so to prepare the sda2/sdb2 filesystems for systemd-boot before booting via UEFI, I had to remove those checks from /usr/sbin/proxmox-boot-tool.
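For reference, the kind of edit involved, demonstrated on a stand-in script rather than the real /usr/sbin/proxmox-boot-tool - the actual check may be worded differently in your version, so treat this purely as a sketch and keep a backup of the real script before touching it:

```shell
# Stand-in for the EFI check in proxmox-boot-tool (hypothetical wording;
# inspect the real script before editing it, and keep a backup copy).
cat > /tmp/pbt-demo.sh <<'EOF'
if [ ! -d /sys/firmware/efi ]; then
    echo "E: this tool requires booting in UEFI mode"
    exit 1
fi
echo "configuring systemd-boot"
EOF

# Comment out the check (the 'if' line plus the three lines of its body)
# so the tool keeps going while the node is still booted via legacy BIOS:
sed -i '/\/sys\/firmware\/efi/,+3 s/^/# /' /tmp/pbt-demo.sh
sh /tmp/pbt-demo.sh   # now reaches the systemd-boot setup step
```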
 
Ok, took some time to find out...
proxmox-boot-tool does not prepare the systemd-boot configuration if /sys/firmware/efi does not exist - so to prepare the sda2/sdb2 filesystems for systemd-boot before booting via UEFI, I had to remove those checks from /usr/sbin/proxmox-boot-tool.
What lines did you remove? It is pretty bad that this isn't implemented directly in proxmox-boot-tool.
 
The VMs all reside on Ceph RBDs shared between three nodes. I need to change all three nodes. So from my point of view I think splitting the ZFS mirror, repartition, zfs send the contents and reboot should be the easiest.
I agree, I think his order was actually reversed ;)