Upgrade to PVE7 with legacy boot

Jon Massey

I've got a couple of hosts that were originally installed with PVE5.0 that have now been upgraded throughout the years to 6.4. The time has come to upgrade to 7.0, but as my hosts are configured with legacy boot from ZFS I am concerned that doing so will leave them in an unbootable state.

I've been through the Legacy-boot-to-proxmox-boot wiki article and confirmed that ls /sys/firmware/efi does indeed output "No such file or directory", indicating I am booting using the legacy GRUB setup. At the "Finding potential ESPs" step I hit a problem: on my mirrored ZFS boot disks there isn't a sufficiently sized empty partition:

Code:
...
sdi           8:128  0  74.5G  0 disk
├─sdi1        8:129  0  1007K  0 part
├─sdi2        8:130  0  74.5G  0 part            zfs_member
└─sdi9        8:137  0     8M  0 part
sdj           8:144  0  74.5G  0 disk
├─sdj1        8:145  0  1007K  0 part
├─sdj2        8:146  0  74.5G  0 part            zfs_member
└─sdj9        8:153  0     8M  0 part
...

As I understand it, I am unable to shrink sd[ji]2 in order to grow sd[ji]1 to a size usable as an ESP. Can anyone suggest how to resolve this, or tell me whether I simply cannot proceed with the upgrade without breaking my bootloader?
 
Putting an ESP (512M vFAT) on a USB flash drive (or two) would probably work fine, as it is not written often.
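A minimal sketch of that approach, assuming /dev/sdx is the USB stick (a placeholder; verify with lsblk first) and using proxmox-boot-tool as the wiki article does:

```shell
# Sketch only -- /dev/sdx is a placeholder for the USB stick, check lsblk first!
# Create a single 512M partition of type EF00 (EFI System Partition)
sgdisk -n 1:0:+512M -t 1:EF00 /dev/sdx

# Format the partition and register it (this wipes the partition)
proxmox-boot-tool format /dev/sdx1
proxmox-boot-tool init /dev/sdx1

# Verify the kernels were synced to the new ESP
proxmox-boot-tool status
```

With two sticks, repeat format/init for the second one so either can boot the host.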
Resizing the rpool involves booting from a live CD that supports (a recent version of) ZFS, detaching half of your mirror, creating a new smaller pool (with an ESP), copying everything over, then detaching the other half, using it to mirror the new pool (and ESP), and renaming the new pool to rpool.
 
One way would be to do a fresh installation of the servers if you can.

For the following, a big disclaimer! You'd better make a full disk backup and first try the procedure in a VM to test it for any issues!

Is the rpool mirrored over 2 disks? Then, if you want the challenge, you could attempt to remove one of the disks from the mirror (man zpool-detach), recreate the partition table with one more partition to use as the EFI partition, then create a new single-disk zpool on the now slightly smaller partition. Then send/recv the original rpool datasets over to the new pool and make it bootable again. Rename the pools so that the pool on the recreated disk with the EFI partition is the rpool, and see if the system comes up. If it does, roughly follow the steps of the chapter "Changing a failed boot device", but instead of replacing the disk, attach it (man zpool-attach) to the existing pool and device to make it a mirror again.
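The steps above could look roughly like this. This is a sketch under assumptions, not a recipe: device and snapshot names are examples, and it must be run from a live environment as described:

```shell
# Rough sketch only -- device and snapshot names are examples, test in a VM first!
# 1. Detach one half of the mirror (man zpool-detach)
zpool detach rpool /dev/sdj2

# 2. Repartition /dev/sdj with an extra 512M ESP, then create a new
#    single-disk pool on the slightly smaller ZFS partition
zpool create -o ashift=12 rpool2 /dev/sdj3

# 3. Replicate all datasets and snapshots to the new pool
zfs snapshot -r rpool@migrate
zfs send -R rpool@migrate | zfs recv -F rpool2

# 4. Once the old pool is destroyed, rename the new pool to rpool
zpool export rpool2
zpool import rpool2 rpool

# 5. After verifying the system boots, wipe the old disk and rebuild
#    the mirror (man zpool-attach)
zpool attach rpool /dev/sdj3 /dev/sdi3
```

Making the new pool bootable again (installing GRUB or setting up proxmox-boot-tool on the new ESP) is a separate step covered by the wiki article.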

This is just a rough outline of the involved steps, and you'd better create a VM with a similar setup, snapshot it, and test out the procedure there!

I personally had to do something similar a few months ago when I had to move the whole system to smaller disks. This might help you, even though not all steps need to be done exactly like this in your situation: https://aaronlauterer.com/blog/2021/proxmox-ve-migrate-to-smaller-root-disks/
 
