Additional Questions Re: "ZFS: Switch Legacy-Boot to Proxmox Boot Tool"

mattlach

Hi Everyone,

I have a few questions regarding the need to switch to Proxmox Boot Tool for booting from ZFS.

I found out about the need to do this while reading the release notes for PVE 7.x, in preparation for my upgrade from 6.4.9.

Question 1.)

The notes say booting will break if I run zpool upgrade on the rpool to upgrade the pool version. Does this mean that if I keep the pool at its existing version, it will continue to boot as usual until I can figure out my next steps?
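For what it's worth, these look like safe, read-only ways to check where the pool stands before I touch anything (based on my reading of the zpool man page, so please correct me if I'm wrong):

# zpool status rpool                    # look for a "Some supported features are not enabled" note
# zpool upgrade                         # with no arguments this only lists upgradable pools; it changes nothing
# zpool get all rpool | grep feature@   # shows each feature as disabled/enabled/active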


Question 2.)

The guide (located here) assumes you installed with a recent enough version of PVE that a 512MB partition already exists on the boot devices, on which you can install the Proxmox Boot Tool. I can't remember which version of Proxmox I originally ran when I did my install years ago, but mine must predate that, as my boot devices (a ZFS mirror) lack the 512MB partition and look like this:

# lsblk -o +FSTYPE /dev/sdn /dev/sdm
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT FSTYPE
sdm      8:192    0 465.8G  0 disk
├─sdm1   8:193    0  1007K  0 part
├─sdm2   8:194    0 465.8G  0 part            zfs_member
└─sdm9   8:201    0     8M  0 part
sdn      8:208    0 465.8G  0 disk
├─sdn1   8:209    0  1007K  0 part
├─sdn2   8:210    0 465.8G  0 part            zfs_member
└─sdn9   8:217    0     8M  0 part

Is there a way to migrate to the Proxmox Boot Tool if you do not have the 512MB partitions? Otherwise I fear I am stuck, because I don't believe ZFS allows shrinking pools, and I would need to shrink the data partitions (465.8G in my example) in order to fit a 512MB partition for the Proxmox Boot Tool.

The only way I can think of to achieve this is to purchase a larger set of SSDs and manually create a partition scheme that looks like the one in the guide:

├─sda1   8:1      0  1007K  0 part            zfs_member
├─sda2   8:2      0   512M  0 part
└─sda3   8:3      0  11.5G  0 part            zfs_member

and then zpool replace each of the rpool partitions onto the new disks, followed by installing the Proxmox Boot Tool to the new disks in the 512MB partitions.
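Roughly, I imagine the per-disk steps would look something like this (device names are placeholders and I have not tested this, so please correct me if I'm off):

# sgdisk -n1:0:+1007K -t1:EF02 /dev/sdX   # BIOS boot partition for GRUB
# sgdisk -n2:0:+512M  -t2:EF00 /dev/sdX   # 512M ESP for the Proxmox Boot Tool
# sgdisk -n3:0:0      -t3:BF01 /dev/sdX   # ZFS partition, rest of the disk
# zpool replace rpool sdm2 /dev/sdX3      # old device name as shown in zpool status; wait for the resilver before the next disk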

I appreciate any comments and/or suggestions!

Thank you,
Matt
 
Question 1: Yes, as long as you do not run 'zpool upgrade' you are safe. Long story short: if your /boot is located in the same pool (usually rpool) as the rest of the OS, then upgrading the pool would render the system unbootable, because GRUB does not support all ZFS features; specifically, it only supports a limited, read-only subset of them. The solution is to separate /boot from the rest of the OS by creating a separate ZFS pool for it (called bpool, for example) and making sure to never upgrade that pool, only rpool. More info can be found in the OpenZFS "Debian Root on ZFS" guide. I think Proxmox uses the same logic for their boot tool.

Question 2: Unfortunately, you will somehow have to find space for this bpool partition, and in your case it seems the only way to achieve that is to...

1. zfs send the rpool to a different pool.
2. Delete all partitions on the original drive(s).
3. Recreate the partitions, rpool, and bpool on the original drive(s), or follow Proxmox's suggestions as per their wiki (see the sketch below).
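For reference, on a recent OpenZFS (2.1+) that guide creates the boot pool with only GRUB-readable features enabled. A minimal sketch, assuming a mirror and placeholder device names:

# zpool create -o compatibility=grub2 -o ashift=12 bpool mirror /dev/sdX3 /dev/sdY3   # never 'zpool upgrade' this pool
# zfs create -o mountpoint=/boot bpool/BOOT   # /boot lives here; rpool is then free to use any features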

Hope these answer your questions :)

Y.
 

Thank you. That is a good suggestion.

I gather I'd probably have to do this from a live disk, or risk problems, but that is doable.

Appreciate the suggestion!

Right now I am torn between this method and the one described in the Debian how-to on the OpenZFS page, which walks you through creating a separate bpool for booting, on which you can maintain boot-compatible settings, in addition to the rpool, freeing up the rpool to use whatever performance-oriented settings are deemed necessary.

It's probably safer to stick with the official Proxmox way of doing things for now, just so any future changes don't break things, but the bpool methodology is tempting...
 

If you decide to re-create the partitions on the original drives, then I would go for the Proxmox method instead of the Debian one, since your system and GRUB are being handled by PVE rather than vanilla Debian. This would save you from future issues like the one you are facing now and keep your system in line with Proxmox procedures... If possible, I would also go one step further and switch to UEFI boot, again for future compliance ;)
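If you go that route, the tool side of it should boil down to something like this once the 512M partitions exist (/dev/sdX2 is a placeholder; check the wiki for the exact invocation on your PVE version):

# proxmox-boot-tool format /dev/sdX2   # formats the partition as a vfat ESP
# proxmox-boot-tool init /dev/sdX2     # registers it and installs the bootloader
# proxmox-boot-tool status             # verify which ESPs are configured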
 

Ugh. UEFI.

I hate EFI booting.

I don't understand why they couldn't leave well enough alone. They took something that had been working for decades and replaced it with UEFI, some marginally functioning trash. The old way was so simple. It just worked.

I have moved some systems of mine to UEFI booting, but only because it is mandatory in order to boot off of NVMe drives. Everything that doesn't need NVMe has stayed on traditional booting for the sake of simplicity.

This particular system is an older Intel server (SuperMicro X9DRI-F with dual Xeon E5-2650v2's).

I know it does not support NVMe booting (at least not without a modded BIOS, which seems like a bad idea), but I do not know its level of support for UEFI booting off of SATA. It probably supports it, as it is not THAT old, but I still cringe every time I have to deal with UEFI.

I did toy with the idea of getting some first-gen Intel NVMe drives (new old stock) and using those for booting, as those are the only NVMe drives I am aware of that came with a traditional boot ROM, but it seems like it might be silly to invest in such old hardware at this point.

I'll probably just put off switching the boot drives to NVMe until I upgrade the motherboard and CPUs, something I would have done years ago but have been putting off, as I'm not looking forward to re-buying all my RAM. (There is a reason I stopped at the last generation that supported DDR3...)

256GB of RAM may not be as much as it used to be, but it still isn't cheap...
 
Just for clarification: creating a UEFI partition does not necessarily mean that you must enable UEFI boot on your current system, whether it supports it or not. You could simply create UEFI partition(s) of ~512M in size and keep them for future use. Your current system already seems to have 2 partitions (sdm1 and sdn1) of about 1M in size, which correspond to the GPT BIOS (legacy) boot partitions that GRUB is using to boot your current system. Creating 2 more 512M partitions for UEFI boot would not hurt; that way your system would be ready for both UEFI and BIOS boot options. I strongly suggest having a good read on how GPT BIOS and UEFI boot work before taking any action. I'd personally experiment in VMs first, and only once you have a good understanding proceed to the physical server migration.
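If it helps while experimenting, the spare ESP is a one-liner to create, and you can always check how the running system was booted. A small sketch (partition number and device are placeholders):

# sgdisk -n2:0:+512M -t2:EF00 /dev/sdX   # new 512M partition with the EFI System type code
# ls /sys/firmware/efi                   # this directory exists only when the system was booted via UEFI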

Yannis
 

Good suggestions. Thank you.
 
