ZFS issue after power outage

olistr (New Member), May 2, 2023
Hi,
my server rebooted after a power outage, and now I am unable to get one of the ZFS pools online again.
I went through a lot of Google searches and tried a lot of suggestions, but without success.
Bash:
root@pve:/etc/apt# zfs version
zfs-2.2.3-pve2
zfs-kmod-2.2.2-pve1

root@pve:/etc/apt# dpkg-query -l | grep zfs
ii  libzfs4linux                         2.2.3-pve2                          amd64        OpenZFS filesystem library for Linux - general support
rc  zfs-auto-snapshot                    1.2.4-2                             all          ZFS automatic snapshot service
ii  zfs-dkms                             2.1.11-1                            all          OpenZFS filesystem kernel modules for Linux
ii  zfs-initramfs                        2.2.3-pve2                          all          OpenZFS root filesystem capabilities for Linux - initramfs
ii  zfs-zed                              2.2.3-pve2                          amd64        OpenZFS Event Daemon
ii  zfsutils-linux                       2.2.3-pve2                          amd64        command-line tools to manage OpenZFS filesystems

root@pve:/etc/apt# pveversion 
pve-manager/8.2.2/9355359cd7afbae4 (running kernel: 6.5.11-7-pve)
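
Side note: the output above shows the userland tools at 2.2.3-pve2 but the kernel module at 2.2.2-pve1, with an old zfs-dkms 2.1.11 package still installed ("ii"). In case the mismatch matters, this is how the actually loaded module version can be checked (diagnostic only):
Bash:
# version of the ZFS module the running kernel has loaded
cat /sys/module/zfs/version
# version of the module file modprobe would pick for this kernel
modinfo -F version zfs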

So I have three pools: rpool, USB_Pool, and ZFS_Pool.

Bash:
root@pve:/etc/apt# zpool status
  pool: USB_Pool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
    The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 04:49:05 with 0 errors on Sun Apr 14 05:13:07 2024
config:

    NAME                                                STATE     READ WRITE CKSUM
    USB_Pool                                            ONLINE       0     0     0
      usb-Samsung_Flash_Drive_FIT_0360720120001360-0:0  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
    The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:00:17 with 0 errors on Sun Apr 14 00:24:21 2024
config:

    NAME                               STATE     READ WRITE CKSUM
    rpool                              ONLINE       0     0     0
      nvme-eui.002538d121b79d5f-part3  ONLINE       0     0     0

errors: No known data errors
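
Note that ZFS_Pool is missing from the status output entirely, so it is not imported at all (not merely degraded). zpool status and zpool list only show imported pools; a quick sanity check:
Bash:
# lists only pools that are actually imported
zpool list -o name,health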

Importing the pool doesn't work:
Bash:
root@pve:/etc/apt# zpool import -f ZFS_Pool
cannot import 'ZFS_Pool': no such pool available
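
As far as I understand, Proxmox imports pools at boot via the cache file (zfs-import-cache.service), so a stale /etc/zfs/zpool.cache is a common suspect after a crash, while a plain zpool import scans /dev instead. A sketch of the two variants that are commonly suggested (paths are the Debian/Proxmox defaults):
Bash:
# point the import at the cache file explicitly
zpool import -c /etc/zfs/zpool.cache -f ZFS_Pool
# or move a possibly stale cache aside before a plain device scan
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak
zpool import -f ZFS_Pool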

zdb -e ZFS_Pool returns:

Bash:
root@pve:/etc/apt# zdb -e ZFS_Pool

Configuration for import:
        vdev_children: 2
        version: 5000
        pool_guid: 2002659028439309986
        name: 'ZFS_Pool'
        state: 0
        hostid: 4141524589
        hostname: 'pve'
        vdev_tree:
            type: 'root'
            id: 0
            guid: 2002659028439309986
            children[0]:
                type: 'raidz'
                id: 0
                guid: 3125726144081857236
                nparity: 1
                metaslab_array: 47
                metaslab_shift: 37
                ashift: 12
                asize: 23991808425984
                is_log: 0
                create_txg: 4
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 5847275254324609412
                    whole_disk: 1
                    DTL: 3087
                    create_txg: 4
                    path: '/dev/disk/by-id/wwn-0x50014ee2617e6df5-part2'
                    devid: 'ata-WDC_WD4001FFSX-68JNUN0_WD-WCC5D6RZU73F-part2'
                    phys_path: 'pci-0000:00:17.0-ata-3.0'
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 4255501890939767730
                    whole_disk: 1
                    DTL: 259
                    create_txg: 4
                    faulted: 1
                    path: '/dev/disk/by-id/wwn-0x50014ee003ec0f92-part1'
                    devid: 'ata-WDC_WD4001FFSX-68JNUN0_WD-WMC5D0D0HRLP-part1'
                    phys_path: 'pci-0000:00:17.0-ata-4.0'
                children[2]:
                    type: 'disk'
                    id: 2
                    guid: 10502238400137545044
                    path: '/dev/disk/by-id/wwn-0x50014ee2b8119cf8-part2'
                    phys_path: 'id1,enc@n3061686369656d30/type@0/slot@5/elmdesc@Slot_04/p2'
                    whole_disk: 1
                    DTL: 3085
                    create_txg: 4
                children[3]:
                    type: 'disk'
                    id: 3
                    guid: 14672757594293550404
                    whole_disk: 1
                    DTL: 3084
                    create_txg: 4
                    path: '/dev/disk/by-id/wwn-0x50014ee2097aec88-part2'
                    devid: 'ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E0548163-part2'
                    phys_path: 'pci-0000:00:17.0-ata-6.0'
                children[4]:
                    type: 'disk'
                    id: 4
                    guid: 2269693770058718119
                    whole_disk: 1
                    DTL: 3083
                    create_txg: 4
                    path: '/dev/disk/by-id/wwn-0x50014ee20d61bf34-part2'
                    devid: 'ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E3YFJP8K-part2'
                    phys_path: 'pci-0000:00:17.0-ata-5.0'
                children[5]:
                    type: 'disk'
                    id: 5
                    guid: 12207196163784016248
                    whole_disk: 1
                    DTL: 3082
                    create_txg: 4
                    path: '/dev/disk/by-id/wwn-0x50014ee20a533d88-part2'
                    devid: 'ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E2031312-part2'
                    phys_path: 'pci-0000:00:17.0-ata-8.0'
            children[1]:
                type: 'disk'
                id: 1
                guid: 449581483846734873
                whole_disk: 1
                metaslab_array: 46
                metaslab_shift: 31
                ashift: 12
                asize: 256055705600
                is_log: 1
                DTL: 3081
                create_txg: 4
                path: '/dev/disk/by-id/ata-KINGSTON_SNV325S2_Y9KS101ZT74Z-part1'
                devid: 'ata-KINGSTON_SNV325S2_Y9KS101ZT74Z-part1'
                phys_path: 'pci-0000:00:17.0-ata-2.0'
        load-policy:
            load-request-txg: 18446744073709551615
            load-rewind-policy: 2
zdb: can't open 'ZFS_Pool': Input/output error
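
Two details in this dump stand out to me: children[1] carries faulted: 1, and the load-request-txg of 18446744073709551615 is just UINT64_MAX, i.e. no specific rewind txg was requested. To compare what each member disk last recorded, the per-device labels can be dumped like this (a sketch; the device path is copied from the config above, repeat for each member):
Bash:
# print the on-disk ZFS label (pool config, guid, txg) for one member
zdb -l /dev/disk/by-id/wwn-0x50014ee2617e6df5-part2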


Bash:
root@pve:/etc/apt# zpool import -d /dev/disk/by-id -f ZFS_Pool
cannot import 'ZFS_Pool': no such pool available
root@pve:/etc/apt# zpool import -d /dev/disk/by-id -f ZFS_Pool
cannot import 'ZFS_Pool': I/O error
    Destroy and re-create the pool from
    a backup source.

zpool import -d /dev/disk/by-id sometimes returns output and sometimes doesn't:
Bash:
root@pve:/etc/apt# zpool import -d /dev/disk/by-id
no pools available to import
root@pve:/etc/apt# zpool import -d /dev/disk/by-id
   pool: ZFS_Pool
     id: 2002659028439309986
  state: FAULTED
status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
    The pool may be active on another system, but can be imported using
    the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
 config:

    ZFS_Pool                              FAULTED  corrupted data
      raidz1-0                            DEGRADED
        wwn-0x50014ee2617e6df5            ONLINE
        wwn-0x50014ee003ec0f92            ONLINE
        wwn-0x50014ee2b8119cf8            UNAVAIL
        wwn-0x50014ee2097aec88            ONLINE
        wwn-0x50014ee20d61bf34            ONLINE
        wwn-0x50014ee20a533d88            ONLINE
    logs   
      ata-KINGSTON_SNV325S2_Y9KS101ZT74Z  ONLINE
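
For a raidz1 with one member gone, the variants I keep seeing suggested are a forced read-only import, optionally with a rewind to the last consistent txg. A sketch using the same by-id directory as above (read-only so nothing gets written while the pool is in this state):
Bash:
# forced read-only import of the degraded pool
zpool import -d /dev/disk/by-id -f -o readonly=on ZFS_Pool
# if that still fails, -F additionally tries rewinding to an earlier, importable txg
zpool import -d /dev/disk/by-id -f -F -o readonly=on ZFS_Pool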

So I can see that wwn-0x50014ee2b8119cf8 is unavailable; the disk seems to be completely dead, as even lsblk no longer recognises it.
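
To rule out that only the udev symlink vanished rather than the whole device, these are the checks I'd rely on (a sketch; the WWN is taken from the status output above):
Bash:
# is there still any by-id entry for the missing disk?
ls -l /dev/disk/by-id/ | grep -i 50014ee2b8119cf8
# kernel messages around ATA link resets / device drops
dmesg | grep -iE 'ata[0-9]+|i/o error' | tail -n 50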

However, I should still be able to get the pool online with one disk unavailable, no? At least in a degraded state?

How can I get the zpool online with one missing disk?
 
