ZFS pool failure after upgrading pool features

wrobelda

Member
Apr 13, 2022
I believe I am seeing, with Proxmox 9 and my RAID1 setup, an issue similar to the one reported upstream: https://github.com/openzfs/zfs/issues/17090

The pool was running flawlessly until about a week ago, when I upgraded it to enable all current features.

I am now unable to import the pool:

root@proxmox:~# zpool import storage -f
cannot import 'storage': insufficient replicas
        Destroy and re-create the pool from
        a backup source.


When imported in R/O mode, it shows:

root@proxmox:~# zpool status storage -v
  pool: storage
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: resilvered 6.57M in 00:00:01 with 0 errors on Mon Sep 15 14:11:00 2025
config:

        NAME                                     STATE     READ WRITE CKSUM
        storage                                  ONLINE       0     0     0
          mirror-0                               ONLINE       0     0     0
            ST16000NM001G-2KK103_ZL2DG3AF_crypt  ONLINE       0     0     0
            ST16000NM001G-2KK103_ZL2DJKSN_crypt  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        <metadata>:<0x0>

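For completeness, this is roughly how the read-only import was done, and how the data could be copied off before trying anything else (the `/mnt/backup` target below is just a placeholder):

```shell
# Import read-only so nothing further is written to the damaged pool;
# -f forces past the "pool was in use on another system" check.
zpool import -o readonly=on -f storage

# With the pool mounted read-only, copy the data off at the file level.
# (New snapshots cannot be created on a read-only pool, so a fresh
# zfs send is not an option here.) /mnt/backup is a placeholder target.
rsync -aHAX /storage/ /mnt/backup/
```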

Recovery fails as well:

root@proxmox:~# zpool import -fF storage
cannot import 'storage': insufficient replicas
        Destroy and re-create the pool from
        a backup source.
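A couple of riskier options remain, sketched below; -X in particular can discard recent transactions, so I'd only try it after the data is backed up, and the zdb device path is a placeholder for one of the mirror members:

```shell
# Dry run: with -n, zpool only reports whether a rewind (-F) could
# succeed; nothing is modified on disk.
zpool import -F -n storage

# Last resort: extreme rewind. -X searches much older transaction
# groups and may discard recent writes; use only after backing up.
zpool import -FX storage

# Read-only inspection of the vdev labels; the device path is a
# placeholder for one of the mirror members.
zdb -l /dev/disk/by-id/...
```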


The disks are sound: they are new, with clean SMART data. This is really upsetting, to be honest.

The remaining question is: who or what corrupted the ZFS metadata on both disks? If it wasn't ZFS itself, ZFS should have been able to recover from the mirror.