Unsupported feature(s) vdev_zaps_v2, Cannot import rpool after upgrade

aychprox

Hi all,

Today I tried to upgrade one node from pve-manager/8.3.5 to the latest release.
Unfortunately the node is unable to start due to a ZFS issue.

This pool uses the following feature(s) not supported by this system:
com.klarasystems:vdev_zaps_v2
cannot import 'rpool': unsupported version or feature
Not sure why, after the upgrade, ZFS still remains at 2.1.15-pve1 and zfs-kmod-2.1.14-pve1.
I need guidance/steps on how to recover the system from this situation.

Thanks in advance.
 
Hi, I know I'm probably way late to your problem, but I'm also posting this in case someone else runs into the same issue...
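
For context: vdev_zaps_v2 (com.klarasystems:vdev_zaps_v2) is an OpenZFS 2.2 feature flag, so the error means the pool's features have been upgraded past what the still-loaded 2.1.x ZFS module can read. You can confirm what a system is actually running with something like this (a sketch; the zpool query only works from an environment whose ZFS is new enough to import the pool, e.g. a recent live ISO):

Bash:
# userland vs. kernel module version of ZFS currently in use
zfs --version

# feature flags enabled/active on the pool (pool must be imported, needs ZFS >= 2.2 for this pool)
zpool get all rpool | grep feature@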

For me, the solution ended up involving:
  1. Boot into the Proxmox installer from the live install media; once you are on the screen asking you to accept the license, press Ctrl+Alt+F1 to drop to a shell.
  2. Mount the offline system, chroot into Proxmox, and install a newer kernel:
    Make sure you identify your drives properly!
    Bash:
    lsblk -f | grep -v zd
    
    # My output looked something like this:
    NAME     FSTYPE     FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
    sda                                                                                
    ├─sda1                                                                             
    ├─sda2   vfat       FAT32       F9E4-5966                                          
    └─sda3   zfs_member 5000  rpool 16360771866814346526                               
    sdb                                                                                
    ├─sdb1                                                                             
    ├─sdb2   vfat       FAT32       7C33-1416                                          
    └─sdb3   zfs_member 5000  rpool 16360771866814346526
    1. Import the pool and mount the system into /target:
      Bash:
      zpool import -N -R /target rpool -f
      # you can check the import with zpool status; my root dataset turned out to be rpool/ROOT/pve-1, so:
      zfs mount rpool/ROOT/pve-1
      mount -t proc proc /target/proc
      mount -t sysfs sys /target/sys
      mount -o bind /dev /target/dev
      mount -o bind /run /target/run
      
      ## I also went ahead and mounted the EFI variables, because I was planning on switching over to GRUB:
      mount -o bind /sys/firmware/efi/efivars /target/sys/firmware/efi/efivars 2>/dev/null || true  # optional if UEFI
      # Here make sure you are using your own ESP partition (e.g. sda2, sdb2, or nvme0n1p2):
      mount -t vfat /dev/sdX2 /target/boot/efi
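
      A gotcha before chrooting: apt inside the chroot needs working name resolution, and the target's /etc/resolv.conf may not be usable. If DNS fails in the later steps, copy the live environment's resolver config into the target now (a small sketch, run from the live system, i.e. outside the chroot):
      Bash:
      # make the live environment's DNS configuration available inside the chroot
      cp /etc/resolv.conf /target/etc/resolv.conf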

    2. Chroot into the system:
      Bash:
      chroot /target /bin/bash
      
      # Optionally you can also mount /etc/pve,
      #   but for that you first need to set the hostname back to the original one; for me:
      hostname pcs-pve0
      # then start the Proxmox /etc/pve configuration database (pmxcfs) in local mode:
      pmxcfs -l

    3. Once in the target, I installed kernel 6.11 (its ZFS module is new enough to support vdev_zaps_v2):
      Bash:
      apt update
      apt install proxmox-kernel-6.11

      Try Rebooting at this point...
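
      Before rebooting, you can also double-check that proxmox-boot-tool has picked up the new kernel (a sketch; pinning is optional, and the exact version string is whatever 'kernel list' shows for the 6.11 entry):
      Bash:
      proxmox-boot-tool kernel list
      # optionally pin the new kernel so it is selected by default on the next boot
      proxmox-boot-tool kernel pin <6.11-version-from-kernel-list>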

    4. If you want an extra measure, you can also refresh the boot environment:
      Bash:
      proxmox-boot-tool refresh
      
      # and check with 
      proxmox-boot-tool status
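
One thing the steps above don't cover: before rebooting it's cleaner to leave the chroot, unmount everything, and export the pool so nothing is left in a dirty state. A rough sketch, assuming the mount points used above:
Bash:
# inside the chroot: stop pmxcfs if you started it (e.g. killall pmxcfs), then leave the chroot
exit
# back in the live environment: unmount in reverse order and export the pool
umount /target/boot/efi
umount /target/sys/firmware/efi/efivars 2>/dev/null || true
umount /target/run /target/dev /target/sys /target/proc
zpool export rpool
reboot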
I ended up doing more on mine because I was also rebuilding a mirrored RAID, but this should do the trick.
Please post if this is insufficient...
 