Upgraded from 7.4 to 8.4 - zpool upgrade safe, what do I need to check?

I have three Proxmox servers upgraded from 7.4 to 8.4.

Two have Legacy BIOS boot mode and one EFI boot mode.

I think proxmox-boot-tool has already initialized the boot partitions, because proxmox-boot-tool status gives me:

Legacy Server 1:
Code:
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
47E2-781C is configured with: grub (versions: 5.15.158-2-pve, 6.8.12-16-pve)
47E2-D41C is configured with: grub (versions: 5.15.158-2-pve, 6.8.12-16-pve)
47E3-2456 is configured with: grub (versions: 5.15.158-2-pve, 6.8.12-16-pve)
47E3-74E8 is configured with: grub (versions: 5.15.158-2-pve, 6.8.12-16-pve)
47E3-C722 is configured with: grub (versions: 5.15.158-2-pve, 6.8.12-16-pve)
47E4-15A8 is configured with: grub (versions: 5.15.158-2-pve, 6.8.12-16-pve)
Code:
 pool: rpool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:14:27 with 0 errors on Sun Jan 11 00:38:28 2026
config:

        NAME                                                  STATE     READ WRITE CKSUM
        rpool                                                 ONLINE       0     0     0
          mirror-0                                            ONLINE       0     0     0
            ata-INTEL_SSDSC2KG240G7_PHYM82110235240AGN-part3  ONLINE       0     0     0
            ata-INTEL_SSDSC2KG240G7_PHYM8211001Q240AGN-part3  ONLINE       0     0     0
          mirror-1                                            ONLINE       0     0     0
            ata-INTEL_SSDSC2KG240G7_PHYM82110245240AGN-part3  ONLINE       0     0     0
            ata-INTEL_SSDSC2KG240G7_PHYM82110232240AGN-part3  ONLINE       0     0     0
          mirror-2                                            ONLINE       0     0     0
            ata-INTEL_SSDSC2KG240G7_PHYM8211013H240AGN-part3  ONLINE       0     0     0
            ata-INTEL_SSDSC2KG240G7_PHYM8211001E240AGN-part3  ONLINE       0     0     0

errors: No known data errors

Legacy Server 2:
Code:
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
A31D-66ED is configured with: grub (versions: 5.15.158-2-pve, 6.8.12-16-pve)
A31D-C23A is configured with: grub (versions: 5.15.158-2-pve, 6.8.12-16-pve)
A31E-0718 is configured with: grub (versions: 5.15.158-2-pve, 6.8.12-16-pve)
A31E-51DA is configured with: grub (versions: 5.15.158-2-pve, 6.8.12-16-pve)
A31E-96BD is configured with: grub (versions: 5.15.158-2-pve, 6.8.12-16-pve)
A31E-E362 is configured with: grub (versions: 5.15.158-2-pve, 6.8.12-16-pve)
Code:
pool: rpool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:07:14 with 0 errors on Sun Jan 11 00:31:15 2026
config:

        NAME                                                  STATE     READ WRITE CKSUM
        rpool                                                 ONLINE       0     0     0
          mirror-0                                            ONLINE       0     0     0
            ata-INTEL_SSDSC2KG240G7_PHYM8211023V240AGN-part3  ONLINE       0     0     0
            ata-INTEL_SSDSC2KG240G7_PHYM8211025Z240AGN-part3  ONLINE       0     0     0
          mirror-1                                            ONLINE       0     0     0
            ata-INTEL_SSDSC2KG240G7_PHYM821101KQ240AGN-part3  ONLINE       0     0     0
            ata-INTEL_SSDSC2KG240G7_PHYM8211003K240AGN-part3  ONLINE       0     0     0
          mirror-2                                            ONLINE       0     0     0
            ata-INTEL_SSDSC2KG240G7_PHYM821100BF240AGN-part3  ONLINE       0     0     0
            ata-INTEL_SSDSC2KG240G7_PHYM821100D1240AGN-part3  ONLINE       0     0     0

errors: No known data errors

EFI Server:
Code:
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
091E-7E9A is configured with: uefi (versions: 5.15.143-1-pve, 5.15.158-2-pve, 6.8.12-16-pve)
091E-DA20 is configured with: uefi (versions: 5.15.143-1-pve, 5.15.158-2-pve, 6.8.12-16-pve)
091F-3223 is configured with: uefi (versions: 5.15.143-1-pve, 5.15.158-2-pve, 6.8.12-16-pve)
0920-3F84 is configured with: uefi (versions: 5.15.143-1-pve, 5.15.158-2-pve, 6.8.12-16-pve)
Code:
pool: rpool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:12:31 with 0 errors on Sun Jan 11 00:36:32 2026
config:

        NAME                                                 STATE     READ WRITE CKSUM
        rpool                                                ONLINE       0     0     0
          mirror-0                                           ONLINE       0     0     0
            nvme-eui.01000000000000000014ee81000038df-part3  ONLINE       0     0     0
            nvme-eui.01000000000000000014ee8100003856-part3  ONLINE       0     0     0
          mirror-1                                           ONLINE       0     0     0
            nvme-eui.01000000000000000014ee81000038ed-part3  ONLINE       0     0     0
            nvme-eui.01000000000000000014ee8100003896-part3  ONLINE       0     0     0

errors: No known data errors

Are there other things to check? Or am I good to go to run:
Bash:
zpool upgrade rpool

Let me know, because I really don't want to break my boot. :eek:

The next step would be to update 8.4 to 9.1.
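
For reference, these are the checks I plan to run on each node before touching anything. A minimal sketch, assuming the root pool is named rpool everywhere:

Bash:
# which ESPs proxmox-boot-tool manages and which kernels are synced onto them
proxmox-boot-tool status
proxmox-boot-tool kernel list

# userland and kernel-module ZFS versions currently running
zfs version

# pool health; 'zpool upgrade' without arguments only lists pools with disabled features
zpool status rpool
zpool upgrade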
 
Yes, as a backup plan that's a good idea. But for now the question remains unanswered:

Am I good to go in this situation?

As far as I can see, proxmox-boot-tool is active, so in a normal situation zpool upgrade should work without breaking my boot. Correct?
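
To double-check that booting really goes through the ESPs managed by proxmox-boot-tool (so GRUB never has to read the ZFS pool itself), I would look at something like this; a sketch only:

Bash:
# the ESPs listed here are small vfat partitions; kernel and initrd are copied onto them
proxmox-boot-tool status

# re-sync kernels and bootloader config to all ESPs (harmless to run, normally done by package hooks)
proxmox-boot-tool refresh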
 
I have run Proxmox VE and PBS with ZFS for some years and have had no problems with in-place ZFS upgrades.
But not all of my desktop systems run a ZFS version new enough to access the upgraded disks.
Since your plan is to go to Proxmox VE 9.x anyway, this is only one more step.
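
If other machines need to import those disks, it may help to compare feature flags first; a sketch, assuming the pool is named rpool:

Bash:
# state of every feature flag on the pool: disabled / enabled / active
zpool get all rpool | grep feature@

# on the other machine: which OpenZFS version (and therefore which features) it supports
zfs version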
 
Upgrading a ZFS pool (enabling the new feature flags) is irreversible, so you should not do it until you have confirmed that all machines have been working properly for a sufficient period.

A newer kernel may cause problems with your virtual machines or hardware.

In that case, you may need to try downgrading the kernel.

After the pool upgrade you can only boot kernels whose ZFS support can handle the upgraded pool; rolling back to an older kernel is no longer possible.

There are people whose environments were broken because they enabled the new features too eagerly.


If your virtual machines are important to you, you should probably keep running them without upgrading the pool for several months.

Eventually, you'll even forget about upgrading altogether.
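
Related to the kernel rollback point above: one way to see which ZFS module each installed kernel ships, before enabling new pool features, is something like this; the kernel versions are only examples taken from the proxmox-boot-tool output earlier in the thread:

Bash:
# ZFS kernel module version bundled with a specific installed kernel
modinfo -F version -k 6.8.12-16-pve zfs
modinfo -F version -k 5.15.158-2-pve zfs

# running userland version for comparison
zfs version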
 
Yes, I understand this situation. I'm now on Proxmox 8, but I could still revert to 7 and keep using my current ZFS pools, because they have not been upgraded yet. But eventually I will need to go to 9, because security updates for 8 will only last until this summer.

However, I do understand the advice not to upgrade to the new ZFS features immediately and to test stability first. After some weeks I can consider the setup stable, and then I want to upgrade the pools.

So, again, the question of this thread is: is the proxmox-boot-tool setup good enough to do the zpool upgrade without getting boot issues afterwards on my two legacy BIOS hypervisors?
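
When I do go ahead, the plan per node would be roughly this; a sketch, assuming nothing else changes in the meantime:

Bash:
# enable all supported feature flags on the root pool (irreversible)
zpool upgrade rpool

# re-sync the ESPs afterwards; not strictly required, but cheap reassurance
proxmox-boot-tool refresh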