zpool upgrade

dodi71

Is it safe to use 'zpool upgrade' on the rpool and the nas pool? The proxmox-boot-tool status is:

04AB-4804 is configured with: uefi (versions: 5.3.18-3-pve, 5.4.106-1-pve, 5.4.98-1-pve), grub (versions: 5.13.19-6-pve, 5.15.35-1-pve)
04AB-8A6C is configured with: uefi (versions: 5.3.18-3-pve, 5.4.106-1-pve, 5.4.98-1-pve), grub (versions: 5.13.19-6-pve, 5.15.35-1-pve)

regards,
 
Is it safe to use 'zpool upgrade' on the rpool and the nas pool? The proxmox-boot-tool status is:
please post the complete output of proxmox-boot-tool status - the line indicating how that system is currently booted is missing
(it seems as if the system used to be booted with UEFI but switched to legacy with 7.1?! This sounds rather odd)

else, to verify which bootloader is used - please check the reference documentation:
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_determine_bootloader_used
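As a quick check outside the docs: on a system started via UEFI the directory /sys/firmware/efi exists, otherwise it was booted in legacy BIOS mode (just a sketch of that check):

Code:
# quick check which firmware mode the system was booted in
if [ -d /sys/firmware/efi ]; then echo "booted with UEFI"; else echo "booted with legacy BIOS"; fi
# proxmox-boot-tool status also prints this as "System currently booted with ..."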

Pools which are not the rpool should be safe to upgrade in any case.
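For a non-root data pool the upgrade itself is a one-liner (a minimal sketch, assuming your data pool is called 'nas' as mentioned above):

Code:
# with no argument, zpool upgrade only lists pools that have features which are not enabled yet
zpool upgrade
# enable all supported features on the data pool only - do not run this on rpool
zpool upgrade nas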

I hope this helps!
 
Thanks @Stoiko Ivanov for your reply. I verified the bootloader used by the system and it is confirmed to be Grub. I have noticed that I have two kernels, i.e.:

Automatically selected kernels:
5.13.19-6-pve
5.15.35-1-pve

and the pinned kernel is:

Pinned kernel:
5.15.35-1-pve

I was thinking of removing the old version (5.13.19-6-pve) but wanted to confirm whether it is safe.

BTW, I have upgraded the NAS ZFS pool without any problems. Now what is left is the rpool.

Regards,
 
and the pinned kernel is:

Pinned kernel:
5.15.35-1-pve
there is no need to pin the latest kernel (it's the one that gets booted automatically anyway)

I was thinking of removing the old version (5.13.19-6-pve) but wanted to confirm whether it is safe.
I usually would recommend keeping at least one older version around (in case some upgrade of 5.15.35-1-pve breaks with your hardware you can still boot into 5.13.19-6-pve) - and PVE's tooling takes care of removing older versions, so they don't waste too much disk space.
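To see which kernels are kept and pinned, and to drop the pin, proxmox-boot-tool has kernel subcommands (a sketch; the apt package name is taken from the kernel listing above):

Code:
# list automatically selected, manually added and pinned kernels
proxmox-boot-tool kernel list
# remove the pin so the newest kernel is booted by default again
proxmox-boot-tool kernel unpin
# only if you really want to get rid of the old kernel, remove its package
apt remove pve-kernel-5.13.19-6-pve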

I hope this helps!
 
Hi,

because "zpool status" advices me to use "zpool upgrade"...

Code:
# zpool status
  pool: rpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 1 days 01:17:00 with 0 errors on Mon Aug 15 01:41:02 2022
config:

        NAME                              STATE     READ WRITE CKSUM
        rpool                             ONLINE       0     0     0
          raidz2-0                        ONLINE       0     0     0
            wwn-0x50014ee0aeffd3a2-part2  ONLINE       0     0     0
            wwn-0x50014ee20a994153-part2  ONLINE       0     0     0
            wwn-0x50014ee2b8e68c2d-part2  ONLINE       0     0     0
            wwn-0x50014ee2ba1acc3e-part2  ONLINE       0     0     0

errors: No known data errors
#
...my question is the same as the original poster's, because the given answers are not completely satisfying to me: is it safe to run "zpool upgrade"?

I'm not aware of using any special ZFS features and am overall using a Proxmox "default installation", currently Proxmox Virtual Environment 6.4-14, which I want to upgrade to the current version in the next few days.

Because it was requested before, my output of "proxmox-boot-tool status" is:

Code:
# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
E: /etc/kernel/proxmox-boot-uuids does not exist.
 
I'm not aware of using any special ZFS features and am overall using a Proxmox "default installation", currently Proxmox Virtual Environment 6.4-14, which I want to upgrade to the current version in the next few days.

Because it was requested before, my output of "proxmox-boot-tool status" is:
from the output I guess you're still using grub to directly boot from ZFS - such setups are in general _not_ safe to run zpool upgrade on!
the procedure to adapt this to use proxmox-boot-tool is described in the pve-wiki:
https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool
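The core of that procedure boils down to initialising a spare 512M partition for proxmox-boot-tool (a very rough sketch only - /dev/sdX2 is a placeholder for your actual ESP partition, and it has to be repeated for every disk of the rpool; please follow the wiki for the complete steps):

Code:
# format the spare partition as ESP and register it with proxmox-boot-tool
proxmox-boot-tool format /dev/sdX2
proxmox-boot-tool init /dev/sdX2
# copy the kernels, initrds and bootloader configuration onto the new ESP(s)
proxmox-boot-tool refresh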

I hope this helps!
 
from the output I guess you're still using grub to directly boot from ZFS - such setups are in general _not_ safe to run zpool upgrade on!
the procedure to adapt this to use proxmox-boot-tool is described in the pve-wiki:
https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool

I hope this helps!
Thank you @Stoiko Ivanov, I'm really happy that I asked before trying it. Yes, I installed Proxmox when Proxmox 4 or 5 was the current version, so it is using grub.
Given that I want to upgrade from Proxmox 6 to 7: is it mandatory that I do a zpool upgrade (and, for this, switch from legacy boot to proxmox-boot-tool) before I upgrade to Proxmox 7, or can I just keep the current zpool version (and grub) without running into trouble with Proxmox 7?
 
Yes, I installed Proxmox when Proxmox 4 or 5
Hmm - that's been quite a while - and in the meantime the partitioning scheme of the installer changed (a few times). In order to comfortably use proxmox-boot-tool you need to have a free partition with a size of 512M (smaller might work as well, but then you need to manually clear out some kernels every now and then), since this partition stores the kernel images and initrds (which can become quite large).
(the wiki-page I linked explains how to find these)
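To get a first overview of the existing partitions and their sizes, something like this should do (just a sketch):

Code:
lsblk -o NAME,SIZE,FSTYPE,PARTTYPENAME,MOUNTPOINT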

Given that I want to upgrade from Proxmox 6 to 7: Is it mandatory that I do a zpool upgrade (and for this to switch from legacy boot to proxmox boot tool)
No - it's not strictly mandatory - however in my experience booting with grub directly from ZFS is quite fragile (we had a few reports where systems stopped working all of a sudden, and the reasons were quite hard to track down - e.g. the grub implementation of an old HP raid-card driver was not able to read past 2G, ...). So I personally would prefer to use proxmox-boot-tool - additionally you don't have to worry that you render your system unbootable by accidentally upgrading your rpool.

If possible you could consider installing PVE freshly and then restoring your guests from a backup.
else - make sure you have a backup - and I assume if your system ran fine till now it will run after the upgrade as well
 
No - it's not strictly mandatory - however in my experience booting with grub directly from ZFS is quite fragile
Just survived the upgrade to VE 7, thank you! :)

If possible you could consider installing PVE freshly and then restoring your guests from a backup.
else - make sure you have a backup - and I assume if your system ran fine till now it will run after the upgrade as well
That's the point again and again: nobody told me when I planned to run my own (home) server that the budget should be enough to actually buy two servers (one for production, one for backup). On the other hand: if someone had told me that I actually need two servers for a serious setup, I would never have started running my own server. ;)
But yes, I have to consider a backup system. Maybe Proxmox Backup Server as a VM on my current server is a cheap solution with just one additional external hard drive.

However, thanks again for your support. Need some sleep now...
 
That's the point again and again: nobody told me when I planned to run my own (home) server that the budget should be enough to actually buy two servers (one for production, one for backup). On the other hand: if someone had told me that I actually need two servers for a serious setup, I would never have started running my own server. ;)
But yes, I have to consider a backup system.

Sorry, but this should be common sense.
If you care about your data and the time and work/effort you have invested in installing and setting up your systems, it should be obvious to have reasonable backups of all of this. (BTW: Raid is no backup!)
 
Sorry, but this should be common sense.
If you care about your data and the time and work/effort you have invested in installing and setting up your systems, it should be obvious to have reasonable backups of all of this. (BTW: Raid is no backup!)
Sure it is, but it is also a question of resources to back up a server according to "common sense". The question is: can I operate a server to avoid throwing my data into common "free" clouds like Google Drive and using "free" messengers like WhatsApp? Yes, in my case I can. But can I operate a server *and* a full backup system in terms of time, energy, license and hardware costs? No, in my case I cannot, because the costs are simply too high for me. So shall I use Google Drive and WhatsApp then, because I cannot do everything according to "common sense"? No, because even if you cannot do everything the right way, you shouldn't do everything the wrong way. And in my opinion the more wrong way would be using "free" GAFAM services just because I cannot operate a "common sense" backup system.

Anyway, thank you for your reply. It prompted me to ask about concepts for a low-cost backup system: https://forum.proxmox.com/posts/499133/
 
In order to comfortably use proxmox-boot-tool you need to have a free partition with a size of 512M (smaller might work as well, but then you need to manually clear out some kernels every now and then), since this partition stores the kernel images and initrds (which can become quite large).
Will a 286MB partition work if you just keep the few kernels PVE prevents from being removed when using apt autoremove after each update?
I'm also still using grub here. I created an unused 286MB partition back then to be able to maybe switch to an ESP, but then never did it because it wasn't the 512MB that I thought PVE would require.
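For a rough feel whether 286MB would be enough, one could simply look at how large the kept kernels and initrds currently are (a sketch, assuming the standard layout with /boot and a mounted ESP):

Code:
# size of the kernel images and initrds that would have to fit on the ESP
ls -lh /boot/vmlinuz-* /boot/initrd.img-*
# current usage of the existing EFI system partition
df -h /boot/efi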

Not that easy to free up space to extend the potential ESP:
Code:
NAME                              MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                 8:0    0  93.2G  0 disk
├─sda1                              8:1    0     1M  0 part
├─sda2                              8:2    0   286M  0 part
├─sda3                              8:3    0   488M  0 part
│ └─md0                             9:0    0   487M  0 raid1 /boot
└─sda4                              8:4    0  92.4G  0 part
  └─md1                             9:1    0  92.3G  0 raid1
    └─md1_crypt                   253:0    0  92.3G  0 crypt
      ├─vgpmx-lvroot              253:1    0    21G  0 lvm   /
      └─vgpmx-lvswap              253:2    0    61G  0 lvm   [SWAP]
sdb                                 8:16   0  93.2G  0 disk
├─sdb1                              8:17   0     1M  0 part
├─sdb2                              8:18   0   286M  0 part  /boot/efi
├─sdb3                              8:19   0   488M  0 part
│ └─md0                             9:0    0   487M  0 raid1 /boot
└─sdb4                              8:20   0  92.4G  0 part
  └─md1                             9:1    0  92.3G  0 raid1
    └─md1_crypt                   253:0    0  92.3G  0 crypt
      ├─vgpmx-lvroot              253:1    0    21G  0 lvm   /
      └─vgpmx-lvswap              253:2    0    61G  0 lvm   [SWAP]

Anyway, thank you for your reply. It prompted me to ask about concepts for a low-cost backup system
A cheap backup system would be two USB HDDs that you rotate, storing one of them somewhere other than at home. Vzdump should be fine for guest backups, you could use something like rsync to back up the host's config files, and making these USB HDDs bootable with Clonezilla would even allow you to back up the whole PVE server on block level. No need for a second server as long as you don't need any redundancy and you don't forget to do your weekly manual backups.
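As a sketch of what that could look like on the CLI (the storage name 'usbbackup', the VMID and the mount point are only placeholders - normally you would define the storage in the GUI and use scheduled backup jobs):

Code:
# manual backup of one guest to an external storage called 'usbbackup'
vzdump 100 --storage usbbackup --mode snapshot --compress zstd
# copy the host configuration to the mounted USB disk with rsync
rsync -a /etc/ /mnt/usb-backup/etc/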
 
Sure it is, but it is also a question of resources to back up a server according to "common sense". The question is: can I operate a server to avoid throwing my data into common "free" clouds like Google Drive and using "free" messengers like WhatsApp? Yes, in my case I can. But can I operate a server *and* a full backup system in terms of time, energy, license and hardware costs? No, in my case I cannot, because the costs are simply too high for me. So shall I use Google Drive and WhatsApp then, because I cannot do everything according to "common sense"? No, because even if you cannot do everything the right way, you shouldn't do everything the wrong way. And in my opinion the more wrong way would be using "free" GAFAM services just because I cannot operate a "common sense" backup system.
Right!
 
Sure it is, but it is also a question of resources to back up a server according to "common sense". The question is: can I operate a server to avoid throwing my data into common "free" clouds like Google Drive and using "free" messengers like WhatsApp? Yes, in my case I can. But can I operate a server *and* a full backup system in terms of time, energy, license and hardware costs? No, in my case I cannot, because the costs are simply too high for me. So shall I use Google Drive and WhatsApp then, because I cannot do everything according to "common sense"? No, because even if you cannot do everything the right way, you shouldn't do everything the wrong way. And in my opinion the more wrong way would be using "free" GAFAM services just because I cannot operate a "common sense" backup system.

Anyway, thank you for your reply. It prompted me to ask about concepts for a low-cost backup system: https://forum.proxmox.com/posts/499133/

Like Dunuin already said, you do not need an additional backup server at all. Using any kind of external/separate storage with vzdump would be enough. If you do those backups only manually, this storage does not even need to run 24/7.
 
from the output I guess you're still using grub to directly boot from ZFS - such setups are in general _not_ safe to run zpool upgrade on!
the procedure to adapt this to use proxmox-boot-tool is described in the pve-wiki:
https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool

I hope this helps!


Essentially the same situation here, but with 8.1. I noticed zpool status saying:

Code:
...
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
...

# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
7A7F-1E73 is configured with: uefi (versions: 6.2.16-19-pve, 6.2.16-3-pve, 6.5.11-6-pve)
7A81-D268 is configured with: uefi (versions: 6.2.16-19-pve, 6.2.16-3-pve, 6.5.11-6-pve)
7A83-7C66 is configured with: uefi (versions: 6.2.16-19-pve, 6.2.16-3-pve, 6.5.11-6-pve)

Would this be safe to zpool upgrade?

I have good backups, but would prefer not to waste several days on a restore...
 
Same question as rcd... My proxmox-boot-tool status is similar, indicating UEFI, and I am on 8.1.3.

So is it safe to run zpool upgrade on the drive containing the boot partition? This is a single drive, no mirror or stripe.

/dev/sda
/dev/sda1 - BIOS boot
/dev/sda2 - EFI
/dev/sda3 - ZFS (rpool)

I would think the upgrade would only touch Partition 3 but would also like to confirm before I press the button.

Thanks in advance.
 
OK, same question here, and the previous answers are not relevant to my situation. I've installed Proxmox using proxmox-ve_8.0-2.iso and then ran an update/upgrade. Now zpool status suggests that I upgrade.
Code:
pve ~ # zpool status
  pool: rpool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:00:09 with 0 errors on Sun Apr 14 00:24:10 2024
config:

        NAME                                             STATE     READ WRITE CKSUM
        rpool                                            ONLINE       0     0     0
          mirror-0                                       ONLINE       0     0     0
            ata-VBOX_HARDDISK_VBefb01e67-286c0a22-part3  ONLINE       0     0     0
            ata-VBOX_HARDDISK_VBe078aa93-4c53d22b-part3  ONLINE       0     0     0

errors: No known data errors

Here is what zpool upgrade suggests:

Bash:
pve ~ # zpool upgrade
This system supports ZFS pool feature flags.

All pools are formatted using feature flags.


Some supported features are not enabled on the following pools. Once a
feature is enabled the pool may become incompatible with software
that does not support the feature. See zpool-features(7) for details.

Note that the pool 'compatibility' feature can be used to inhibit
feature upgrades.

POOL  FEATURE
---------------
rpool
      zilsaxattr
      head_errlog
      blake3
      block_cloning
      vdev_zaps_v2

I've checked /usr/share/zfs/compatibility.d/grub2 and found that the following features are not compatible with grub2:

* head_errlog
* blake3
* vdev_zaps_v2

Does this mean that it's not safe to run zpool upgrade on my system, since I see compatibility set to off for rpool?

Code:
pve ~ # proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
E231-DD93 is configured with: grub (versions: 6.2.16-20-pve, 6.2.16-3-pve, 6.5.13-5-pve)
E232-5756 is configured with: grub (versions: 6.2.16-20-pve, 6.2.16-3-pve, 6.5.13-5-pve)
pve ~ # zpool get all
NAME PROPERTY VALUE SOURCE
rpool size 19G -
rpool capacity 15% -
rpool altroot - default
rpool health ONLINE -
rpool guid 1860586259139404026 -
rpool version - default
rpool bootfs rpool/ROOT/pve-1 local
rpool delegation on default
rpool autoreplace off default
rpool cachefile - default
rpool failmode wait default
rpool listsnapshots off default
rpool autoexpand off default
rpool dedupratio 1.00x -
rpool free 16.1G -
rpool allocated 2.91G -
rpool readonly off -
rpool ashift 12 local
rpool comment - default
rpool expandsize - -
rpool freeing 0 -
rpool fragmentation 3% -
rpool leaked 0 -
rpool multihost off default
rpool checkpoint - -
rpool load_guid 442334605387297328 -
rpool autotrim off default
rpool compatibility off default
rpool bcloneused 0 -
rpool bclonesaved 0 -
rpool bcloneratio 1.00x -
rpool feature@async_destroy enabled local
rpool feature@empty_bpobj active local
rpool feature@lz4_compress active local
rpool feature@multi_vdev_crash_dump enabled local
rpool feature@spacemap_histogram active local
rpool feature@enabled_txg active local
rpool feature@hole_birth active local
rpool feature@extensible_dataset active local
rpool feature@embedded_data active local
rpool feature@bookmarks enabled local
rpool feature@filesystem_limits enabled local
rpool feature@large_blocks enabled local
rpool feature@large_dnode enabled local
rpool feature@sha512 enabled local
rpool feature@skein enabled local
rpool feature@edonr enabled local
rpool feature@userobj_accounting active local
rpool feature@encryption enabled local
rpool feature@project_quota active local
rpool feature@device_removal enabled local
rpool feature@obsolete_counts enabled local
rpool feature@zpool_checkpoint enabled local
rpool feature@spacemap_v2 active local
rpool feature@allocation_classes enabled local
rpool feature@resilver_defer enabled local
rpool feature@bookmark_v2 enabled local
rpool feature@redaction_bookmarks enabled local
rpool feature@redacted_datasets enabled local
rpool feature@bookmark_written enabled local
rpool feature@log_spacemap active local
rpool feature@livelist enabled local
rpool feature@device_rebuild enabled local
rpool feature@zstd_compress enabled local
rpool feature@draid enabled local
rpool feature@zilsaxattr disabled local
rpool feature@head_errlog disabled local
rpool feature@blake3 disabled local
rpool feature@block_cloning disabled local
rpool feature@vdev_zaps_v2 disabled local
 
So, I found what the documentation has to say on this: https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysadmin_zfs_features

Since my system is configured with grub, I think it is not safe to upgrade for new pool features.

Code:
pve ~ # proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
E231-DD93 is configured with: grub (versions: 6.2.16-20-pve, 6.2.16-3-pve, 6.5.13-5-pve)
E232-5756 is configured with: grub (versions: 6.2.16-20-pve, 6.2.16-3-pve, 6.5.13-5-pve)
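For completeness: zpool-features(7) also describes the pool 'compatibility' property, which restricts what a later zpool upgrade may enable. A sketch of how that would look with the grub2 feature set (untested on my pool, so treat it as an idea rather than a recommendation):

Code:
# restrict the pool to the feature set listed in /usr/share/zfs/compatibility.d/grub2
zpool set compatibility=grub2 rpool
# a subsequent upgrade then only enables features from that set
zpool upgrade rpool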
 
Hmm - that's been quite a while - and in the meantime the partitioning scheme of the installer changed (a few times). In order to comfortably use proxmox-boot-tool you need to have a free partition with a size of 512M (smaller might work as well, but then you need to manually clear out some kernels every now and then), since this partition stores the kernel images and initrds (which can become quite large).
(the wiki-page I linked explains how to find these)


No - it's not strictly mandatory - however in my experience booting with grub directly from ZFS is quite fragile (we had a few reports where systems stopped working all of a sudden, and the reasons were quite hard to track down - e.g. the grub implementation of an old HP raid-card driver was not able to read past 2G, ...). So I personally would prefer to use proxmox-boot-tool - additionally you don't have to worry that you render your system unbootable by accidentally upgrading your rpool.

If possible you could consider installing PVE freshly and then restoring your guests from a backup.
else - make sure you have a backup - and I assume if your system ran fine till now it will run after the upgrade as well

So, what you are saying is: if ZFS is not used for booting, everything will be fine and zpool upgrade should just work?

I have the zpool on two partitions outside of the boot areas (nvme0n1p3/nvme1n1p3):

Code:
root@host06 ~ # lsblk -f
NAME                                                                                                    FSTYPE            FSVER    LABEL       UUID                                   FSAVAIL FSUSE% MOUNTPOINTS
zd0
├─zd0p1                                                                                                 ext4              1.0                  a11d269b-0f1b-4d9c-8889-6611ff91a207
└─zd0p2                                                                                                 LVM2_member       LVM2 001             jbc7CS-ScXa-frp1-KpOc-fOco-nBc7-OC7afm
nvme0n1
├─nvme0n1p1                                                                                             linux_raid_member 1.2      rescue:0    c5752e0a-5adc-5a17-2a42-992bb3e36eea
│ └─md0                                                                                                 ext4              1.0                  a92e9f3e-3fdf-4514-9ff0-f7ecfd73b934    646.7M    28% /boot
├─nvme0n1p2                                                                                             linux_raid_member 1.2      rescue:1    d8bbad5c-c45c-4c85-40d7-91ffa2159309
│ └─md1                                                                                                 LVM2_member       LVM2 001             j41p8y-G2Zr-LOhF-teWS-3hJd-In0f-tyCdFo
│   ├─vg0-root                                                                                          ext4              1.0                  3be2bb8b-8488-47b7-94dd-7b53c2c440e4      5.6G    38% /
│   └─vg0-var                                                                                           ext4              1.0                  beb1ebed-37e5-4c41-812e-09fc7d58bcd6     11.8G    14% /var
├─nvme0n1p3                                                                                             zfs_member        5000     zfspool_pve 13044891942853927059
└─nvme0n1p4                                                                                             LVM2_member       LVM2 001             k8VC7y-xLez-Zd1X-oABO-P9nd-1jOB-GbEQJB
  └─ceph--44e515d1--92ac--43bc--abc5--9f6746c9a721-osd--block--d2d7a5e7--f9aa--48bf--8b9a--3386fd74cbec ceph_bluestore
nvme1n1
├─nvme1n1p1                                                                                             linux_raid_member 1.2      rescue:0    c5752e0a-5adc-5a17-2a42-992bb3e36eea
│ └─md0                                                                                                 ext4              1.0                  a92e9f3e-3fdf-4514-9ff0-f7ecfd73b934    646.7M    28% /boot
├─nvme1n1p2                                                                                             linux_raid_member 1.2      rescue:1    d8bbad5c-c45c-4c85-40d7-91ffa2159309
│ └─md1                                                                                                 LVM2_member       LVM2 001             j41p8y-G2Zr-LOhF-teWS-3hJd-In0f-tyCdFo
│   ├─vg0-root                                                                                          ext4              1.0                  3be2bb8b-8488-47b7-94dd-7b53c2c440e4      5.6G    38% /
│   └─vg0-var                                                                                           ext4              1.0                  beb1ebed-37e5-4c41-812e-09fc7d58bcd6     11.8G    14% /var
├─nvme1n1p3                                                                                             zfs_member        5000     zfspool_pve 13044891942853927059
└─nvme1n1p4                                                                                             LVM2_member       LVM2 001             7vlHgN-U3e1-frUp-qIIl-xfca-KoWN-II3tVz
  └─ceph--e0d139df--f3e7--4fc4--8016--0699f2beeaea-osd--block--af982960--6246--4de0--8e3b--a827494dc1c3 ceph_bluestore
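A quick sanity check that the bootloader does not depend on the ZFS pool at all (a sketch; the pool name is taken from the zfs_member label above):

Code:
# /boot and / live on the ext4 mdraid/LVM, not on ZFS
findmnt -no SOURCE,FSTYPE /boot
findmnt -no SOURCE,FSTYPE /
# check whether a bootfs is set on the pool (should be '-' for a pure data pool)
zpool get bootfs zfspool_pve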
 
