Is it safe to upgrade a root ZFS pool with OpenZFS 2.0.3-pve1?

jsalas424

I recently updated to PVE 6.3-4, which brought along OpenZFS 2.0. I now need to upgrade my ZFS pools, but I want to check that there are no contraindications for ZFS root configs.

Code:
  pool: Nextcloud.Storage
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
    still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 05:28:21 with 0 errors on Sun Feb 14 05:52:22 2021
config:

    NAME                        STATE     READ WRITE CKSUM
    Nextcloud.Storage           ONLINE       0     0     0
      mirror-0                  ONLINE       0     0     0
        wwn-0x5000c50064941e16  ONLINE       0     0     0
        wwn-0x5000c5006497492d  ONLINE       0     0     0

errors: No known data errors

  pool: Storage.1
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
    still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 03:18:05 with 0 errors on Sun Feb 14 03:42:08 2021
config:

    NAME                                  STATE     READ WRITE CKSUM
    Storage.1                             ONLINE       0     0     0
      mirror-0                            ONLINE       0     0     0
        wwn-0x500003956b800304            ONLINE       0     0     0
        ata-TOSHIBA_MG03ACA100_44N2KLTFF  ONLINE       0     0     0

errors: No known data errors

  pool: new_ssd
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
    still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 00:08:19 with 0 errors on Sun Feb 14 00:32:24 2021
config:

    NAME                                             STATE     READ WRITE CKSUM
    new_ssd                                          ONLINE       0     0     0
      mirror-0                                       ONLINE       0     0     0
        ata-Samsung_SSD_860_EVO_1TB_S5B3NDFNA02148D  ONLINE       0     0     0
        ata-Samsung_SSD_860_EVO_1TB_S5B3NDFN915923H  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
    still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 00:00:52 with 0 errors on Sun Feb 14 00:24:58 2021
config:

    NAME        STATE     READ WRITE CKSUM
    rpool       ONLINE       0     0     0
      sdb3      ONLINE       0     0     0
      sda2      ONLINE       0     0     0

errors: No known data errors
 
It's safe in general.
For now, I'd avoid enabling the new zstd feature on the root pool, especially if you boot with legacy BIOS rather than UEFI with an EFI System Partition (the setup used since the Proxmox VE 5.4 ISO installer).
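If you want to check both things up front, something like this should work (a minimal sketch; rpool is the pool name our installer uses):

Code:
# /sys/firmware/efi only exists when the system was booted via UEFI
ls /sys/firmware/efi >/dev/null 2>&1 && echo "UEFI boot" || echo "legacy BIOS boot"
# current state of the zstd feature on the root pool (disabled/enabled/active)
zpool get feature@zstd_compress rpool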
 
I'm fairly certain I used a VE 6.x installer, but what would be the right way to confirm this?
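Would checking the partition layout be enough? I'm assuming the installer-created layout here, where a ~512M vfat partition (the ESP) next to the zfs_member partition would indicate a recent enough ISO:

Code:
# list all disks with partition sizes and filesystem types
lsblk -o NAME,SIZE,FSTYPE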
 
I did a fresh install of PVE 6.3 (UEFI, ZFS RAID1), then update/dist-upgrade, reboot, then I did zpool upgrade rpool:

Code:
root@x:~# zpool version
zfs-2.0.3-pve1
zfs-kmod-2.0.3-pve1
root@x:~# zpool upgrade
This system supports ZFS pool feature flags.

All pools are formatted using feature flags.


Some supported features are not enabled on the following pools. Once a
feature is enabled the pool may become incompatible with software
that does not support the feature. See zpool-features(5) for details.

POOL  FEATURE
---------------
rpool
      redaction_bookmarks
      redacted_datasets
      bookmark_written
      log_spacemap
      livelist
      device_rebuild
      zstd_compress


root@x:~# zpool status
  pool: rpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
    still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(5) for details.
config:

    NAME                                      STATE     READ WRITE CKSUM
    rpool                                     ONLINE       0     0     0
      mirror-0                                ONLINE       0     0     0
        ata-THNSF8200CAME_873S101VTCRT-part3  ONLINE       0     0     0
        ata-THNSF8200CAME_873S1021TCRT-part3  ONLINE       0     0     0

errors: No known data errors
root@x:~# zpool upgrade rpool
This system supports ZFS pool feature flags.

Enabled the following features on 'rpool':
  redaction_bookmarks
  redacted_datasets
  bookmark_written
  log_spacemap
  livelist
  device_rebuild
  zstd_compress

root@x:~# zpool status
  pool: rpool
 state: ONLINE
config:

    NAME                                      STATE     READ WRITE CKSUM
    rpool                                     ONLINE       0     0     0
      mirror-0                                ONLINE       0     0     0
        ata-THNSF8200CAME_873S101VTCRT-part3  ONLINE       0     0     0
        ata-THNSF8200CAME_873S1021TCRT-part3  ONLINE       0     0     0

errors: No known data errors

Then I tried a reboot: no issues, the machine came back up normally.
 
Does your system boot with GRUB or systemd-boot?
UEFI install, so no GRUB; only systemd-boot AFAIK:

Code:
root@x:~# efibootmgr -v
BootCurrent: 0004
...
Boot0003* Linux Boot Manager    HD(2,GPT,755464f2-9b00-4f2d-9a81-98a455a69cc7,0x800,0x100000)/File(\EFI\systemd\systemd-bootx64.efi)
Boot0004* Linux Boot Manager    HD(2,GPT,1f9dc1a5-1bef-4149-b444-1dc9c9d12a66,0x800,0x100000)/File(\EFI\systemd\systemd-bootx64.efi)
root@x:~# ls -l /dev/disk/by-partuuid/
total 0
lrwxrwxrwx 1 root root 10 Mar  1 11:47 755464f2-9b00-4f2d-9a81-98a455a69cc7 -> ../../sda2
lrwxrwxrwx 1 root root 10 Mar  1 08:40 1f9dc1a5-1bef-4149-b444-1dc9c9d12a66 -> ../../sdb2
...
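For an extra cross-check (assuming systemd-boot is the active loader), bootctl reports what actually booted the system:

Code:
# prints the firmware boot mode and the currently active boot loader
bootctl status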
 
Code:
root@pve:~# zpool status
  pool: HDD1
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 01:36:07 with 0 errors on Sun Jun 13 02:00:08 2021
config:

        NAME                                 STATE     READ WRITE CKSUM
        HDD1                                 ONLINE       0     0     0
          ata-ST16000NM001G-2KK103_ZL2A2SF7  ONLINE       0     0     0

errors: No known data errors

  pool: HDD2
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 08:26:37 with 0 errors on Sun Jun 13 08:50:41 2021
config:

        NAME                                 STATE     READ WRITE CKSUM
        HDD2                                 ONLINE       0     0     0
          ata-ST16000NM001G-2KK103_ZL29LGPK  ONLINE       0     0     0

errors: No known data errors

  pool: SSD
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 00:13:02 with 0 errors on Sun Jun 13 00:37:13 2021
config:

        NAME                              STATE     READ WRITE CKSUM
        SSD                               ONLINE       0     0     0
          nvme-Seagate_Firecuda_7MZ00PYC  ONLINE       0     0     0

errors: No known data errors

  pool: SSD2
 state: ONLINE
config:

        NAME                                        STATE     READ WRITE CKSUM
        SSD2                                        ONLINE       0     0     0
          nvme-Western_Digital_SN550E_20504M804945  ONLINE       0     0     0

errors: No known data errors
root@pve:~# efibootmgr -v
EFI variables are not supported on this system.

I installed Proxmox 6.2 and don't know what I should do now. Is it a big problem if I don't upgrade the zpools?
 
You do not have a ZFS pool named "rpool", which would be the name our installer uses, so it does not seem like you boot from ZFS at all?

To be sure, can you please also post the output of the findmnt command?
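If the full tree is too noisy, findmnt with the mountpoint as argument shows just what backs the root filesystem:

Code:
# show only the mount backing / (source device, filesystem type, options)
findmnt /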
 
Code:
root@pve:~# findmnt
TARGET                           SOURCE     FSTYPE    OPTIONS
/                                /dev/mapper/pve-root
│                                           ext4      rw,relatime,errors=remount-ro
├─/sys                           sysfs      sysfs     rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/security         securityfs securityf rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/cgroup               tmpfs      tmpfs     ro,nosuid,nodev,noexec,mode=755
│ │ ├─/sys/fs/cgroup/unified     cgroup2    cgroup2   rw,nosuid,nodev,noexec,relatime
│ │ ├─/sys/fs/cgroup/systemd     cgroup     cgroup    rw,nosuid,nodev,noexec,relatime,xattr,name=sys
│ │ ├─/sys/fs/cgroup/rdma        cgroup     cgroup    rw,nosuid,nodev,noexec,relatime,rdma
│ │ ├─/sys/fs/cgroup/cpuset      cgroup     cgroup    rw,nosuid,nodev,noexec,relatime,cpuset
│ │ ├─/sys/fs/cgroup/net_cls,net_prio
│ │ │                            cgroup     cgroup    rw,nosuid,nodev,noexec,relatime,net_cls,net_pr
│ │ ├─/sys/fs/cgroup/freezer     cgroup     cgroup    rw,nosuid,nodev,noexec,relatime,freezer
│ │ ├─/sys/fs/cgroup/blkio       cgroup     cgroup    rw,nosuid,nodev,noexec,relatime,blkio
│ │ ├─/sys/fs/cgroup/cpu,cpuacct cgroup     cgroup    rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
│ │ ├─/sys/fs/cgroup/pids        cgroup     cgroup    rw,nosuid,nodev,noexec,relatime,pids
│ │ ├─/sys/fs/cgroup/devices     cgroup     cgroup    rw,nosuid,nodev,noexec,relatime,devices
│ │ ├─/sys/fs/cgroup/hugetlb     cgroup     cgroup    rw,nosuid,nodev,noexec,relatime,hugetlb
│ │ ├─/sys/fs/cgroup/memory      cgroup     cgroup    rw,nosuid,nodev,noexec,relatime,memory
│ │ └─/sys/fs/cgroup/perf_event  cgroup     cgroup    rw,nosuid,nodev,noexec,relatime,perf_event
│ ├─/sys/fs/pstore               pstore     pstore    rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/bpf                  none       bpf       rw,nosuid,nodev,noexec,relatime,mode=700
│ ├─/sys/kernel/debug            debugfs    debugfs   rw,relatime
│ ├─/sys/fs/fuse/connections     fusectl    fusectl   rw,relatime
│ └─/sys/kernel/config           configfs   configfs  rw,relatime
├─/proc                          proc       proc      rw,relatime
│ └─/proc/sys/fs/binfmt_misc     systemd-1  autofs    rw,relatime,fd=26,pgrp=1,timeout=0,minproto=5,
├─/dev                           udev       devtmpfs  rw,nosuid,relatime,size=115387172k,nr_inodes=2
│ ├─/dev/pts                     devpts     devpts    rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxm
│ ├─/dev/shm                     tmpfs      tmpfs     rw,nosuid,nodev
│ ├─/dev/mqueue                  mqueue     mqueue    rw,relatime
│ └─/dev/hugepages               hugetlbfs  hugetlbfs rw,relatime,pagesize=2M
├─/run                           tmpfs      tmpfs     rw,nosuid,noexec,relatime,size=23089604k,mode=
│ ├─/run/lock                    tmpfs      tmpfs     rw,nosuid,nodev,noexec,relatime,size=5120k
│ ├─/run/rpc_pipefs              sunrpc     rpc_pipef rw,relatime
│ └─/run/user/0                  tmpfs      tmpfs     rw,nosuid,nodev,relatime,size=23089600k,mode=7
├─/SSD2                          SSD2       zfs       rw,xattr,noacl
├─/SSD                           SSD        zfs       rw,xattr,noacl
├─/HDD2                          HDD2       zfs       rw,xattr,noacl
├─/HDD1                          HDD1       zfs       rw,xattr,noacl
├─/var/lib/lxcfs                 lxcfs      fuse.lxcf rw,nosuid,nodev,relatime,user_id=0,group_id=0,
└─/etc/pve                       /dev/fuse  fuse      rw,nosuid,nodev,relatime,user_id=0,group_id=0,
root@pve:~#
 
/  /dev/mapper/pve-root  ext4  rw,relatime,errors=remount-ro
Yeah, you boot from LVM + ext4, so you can safely upgrade the ZFS pools on that system, as they won't have any impact on the boot process at all.
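In that case a single command is enough (a sketch; it's still a good idea to check zpool status for errors first):

Code:
# enable all supported feature flags on every imported pool
zpool upgrade -a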
 
Hello, I've just upgraded all zpools, did not reboot yet, and just saw this thread. Am I going to have any issues rebooting?
As far as I understand I will not, but better to touch base before rebooting.

Code:
:~# efibootmgr -v
EFI variables are not supported on this system.

:~# zpool version
zfs-2.0.7-pve1
zfs-kmod-2.0.7-pve1

:~# findmnt
TARGET                                SOURCE                         FSTYPE     OPTIONS
/                                     rpool/ROOT/pve-1               zfs        rw,relatime,xattr,noacl

:~# zpool get feature@large_dnode
NAME         PROPERTY             VALUE                SOURCE
lw1-bigtank  feature@large_dnode  enabled              local
lw1-tank     feature@large_dnode  active               local
rpool        feature@large_dnode  enabled              local

:~# zpool get feature@zstd_compress
NAME         PROPERTY               VALUE                  SOURCE
lw1-bigtank  feature@zstd_compress  enabled                local
lw1-tank     feature@zstd_compress  enabled                local
rpool        feature@zstd_compress  enabled                local

Cheers
 
from the outputs you posted it's not possible to tell if you're completely safe

the output of `proxmox-boot-tool status` should provide the needed information

additionally please also post the output of `lsblk`

if you're not using proxmox-boot-tool and don't have the kernel installed on vfat - this could be/become problematic

rpool does not seem to use the 2 features (although you should check all features which are not READ-ONLY Compatible - see zpool-features(5))
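e.g. to list the state of every feature on rpool and cross-reference the `active` ones against the man page:

Code:
# lists every feature and its state (disabled / enabled / active)
zpool get all rpool | grep 'feature@'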
 

Thank you Stoiko,


Bash:
:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
10FE-6209 is configured with: uefi (versions: 5.3.18-3-pve, 5.4.106-1-pve, 5.4.78-2-pve)
10FE-A056 is configured with: uefi (versions: 5.3.18-3-pve, 5.4.106-1-pve, 5.4.78-2-pve)

Bash:
:~# lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda        8:0    0 931.5G  0 disk 
├─sda1     8:1    0  1007K  0 part 
├─sda2     8:2    0   512M  0 part 
└─sda3     8:3    0 930.5G  0 part 
sdb        8:16   0 931.5G  0 disk 
├─sdb1     8:17   0  1007K  0 part 
├─sdb2     8:18   0   512M  0 part 
└─sdb3     8:19   0 930.5G  0 part 
sdc        8:32   0   3.7T  0 disk 
├─sdc1     8:33   0   3.7T  0 part 
└─sdc9     8:41   0     8M  0 part 
sdd        8:48   0   3.7T  0 disk 
├─sdd1     8:49   0   3.7T  0 part 
└─sdd9     8:57   0     8M  0 part 
sde        8:64   0   3.7T  0 disk 
├─sde1     8:65   0   3.7T  0 part 
└─sde9     8:73   0     8M  0 part 
sdf        8:80   0 931.5G  0 disk 
├─sdf1     8:81   0 931.5G  0 part 
└─sdf9     8:89   0     8M  0 part 
sdg        8:96   0 931.5G  0 disk 
├─sdg1     8:97   0 931.5G  0 part 
└─sdg9     8:105  0     8M  0 part 
sdh        8:112  0 931.5G  0 disk 
├─sdh1     8:113  0 931.5G  0 part 
└─sdh9     8:121  0     8M  0 part 
sdi        8:128  0 931.5G  0 disk 
├─sdi1     8:129  0 931.5G  0 part 
└─sdi9     8:137  0     8M  0 part 
sdj        8:144  0 931.5G  0 disk 
├─sdj1     8:145  0 931.5G  0 part 
└─sdj9     8:153  0     8M  0 part 
sdk        8:160  0 931.5G  0 disk 
├─sdk1     8:161  0 931.5G  0 part 
└─sdk9     8:169  0     8M  0 part 
sdl        8:176  0 931.5G  0 disk 
├─sdl1     8:177  0 931.5G  0 part 
└─sdl9     8:185  0     8M  0 part 
sr0       11:0    1  1024M  0 rom  
zd0      230:0    0    32G  0 disk 
└─zd0p1  230:1    0    32G  0 part 
zd16     230:16   0   6.9T  0 disk 
└─zd16p1 230:17   0   6.9T  0 part 
zd32     230:32   0    32G  0 disk 
├─zd32p1 230:33   0    32G  0 part 
├─zd32p5 230:37   0    30G  0 part 
└─zd32p6 230:38   0   1.6G  0 part 
zd48     230:48   0   256G  0 disk 
├─zd48p1 230:49   0  23.3G  0 part 
├─zd48p2 230:50   0     1K  0 part 
├─zd48p5 230:53   0   9.3G  0 part 
├─zd48p6 230:54   0    12G  0 part 
├─zd48p7 230:55   0   1.9G  0 part 
└─zd48p8 230:56   0 209.6G  0 part
 
That's good - you do have 2 vfat partitions on the 2 disks of rpool (I would guess the system was set up with a PVE 6.1-6.3 ISO?)

a) still make sure you have a working backup of all important data (this is always a prerequisite before doing something that might render your system unbootable)
b) https://pve.proxmox.com/wiki/ZFS:_Switch_Legacy-Boot_to_Proxmox_Boot_Tool should contain all the steps needed to switch over to booting from those 2 vfat partitions (and thus avoiding any issues with GRUB on ZFS)

I hope this helps!
 
Thank you! I'm reading the documentation right now, and according to this:
Bash:
:~# lsblk -o +FSTYPE
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT FSTYPE
sda        8:0    0 931.5G  0 disk            
├─sda1     8:1    0  1007K  0 part            
├─sda2     8:2    0   512M  0 part            vfat
└─sda3     8:3    0 930.5G  0 part            zfs_member
sdb        8:16   0 931.5G  0 disk            
├─sdb1     8:17   0  1007K  0 part            
├─sdb2     8:18   0   512M  0 part            vfat
└─sdb3     8:19   0 930.5G  0 part            zfs_member

my target partitions are sda2 and sdb2 and basically this is all I need to do:
Bash:
:~# proxmox-boot-tool format /dev/sda2
:~# proxmox-boot-tool format /dev/sdb2
:~# proxmox-boot-tool init /dev/sda2
:~# proxmox-boot-tool init /dev/sdb2

and if there are any UUID warnings, run `proxmox-boot-tool clean`

If this is correct, I'll go ahead.

A while ago there was a thread about a config backup script; would you also be able to give me some hints about backing up data?

Cheers
 
my target partitions are sda2 and sdb2 and basically this is all I need to do:
yes - but since the partitions are already formatted it should be enough to just run `proxmox-boot-tool init`
* if this errors out you can format with `proxmox-boot-tool format /dev/sdX2 --force`

make sure to read the output and watch for any warnings or errors


A while ago there was a thread about a config backup script; would you also be able to give me some hints about backing up data?
I'm always a bit hesitant to tell people what they need to back up, since I don't know their systems - in other words, only you can know what's important on the system for you

As a starting point I would say:
* all guests - use the GUI to create backups
* dpkg --get-selections (gives you a list of all installed packages)
* the contents of:
** /etc
** /home
** /root

but once again - this might miss important things, e.g. if you've manually compiled software or installed things that are not from Debian's upstream or the PVE repositories.
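As a rough sketch of the above (the target directory below is just an example - adapt it to your environment; the guest backups are best done via the GUI/vzdump):

Bash:
# example target directory - adapt to your environment
mkdir -p /var/tmp/host-backup

# list of installed packages, handy when re-installing
dpkg --get-selections > /var/tmp/host-backup/pkg-selections.list

# archive the config and home directories
tar czf /var/tmp/host-backup/etc-home-root.tar.gz /etc /home /root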

I hope this helps!
 