How to restore a ZFS pool after reboot when one disk is FAULTED

Serhioromano

Member
Jun 12, 2023
I rebooted PVE and my pool was gone. I imported it again with `zpool import tank`, but now I get this:

Code:
root@pve:~# zpool status tank
  pool: tank
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: resilvered 4.18M in 00:00:01 with 0 errors on Fri Jun 23 12:16:50 2023
config:

        NAME                                      STATE     READ WRITE CKSUM
        tank                                      DEGRADED     0     0     0
          raidz2-0                                DEGRADED     0     0     0
            sdb                                   ONLINE       0     0     0
            sdc                                   ONLINE       0     0     0
            usb-TOSHIBA_HDWT860_000000123AE8-0:0  FAULTED      0     0     0  corrupted data
            sde                                   ONLINE       0     0     0
            sdf                                   ONLINE       0     0     1

One disk is reported as corrupted, but the disk itself is OK. How can I fix this? It is a RAID-Z2 pool, so it should be possible to restore the data.
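As a side note, a "label is missing or invalid" state after a reboot is sometimes caused by `sdX` device names shifting rather than by real disk damage. A sketch of re-importing with stable device IDs, assuming the pool is named `tank` and is currently imported:

```shell
# Export the pool first (only safe if nothing is using it)
zpool export tank

# Re-import using persistent /dev/disk/by-id paths instead of sdX names,
# which can change between reboots
zpool import -d /dev/disk/by-id tank

# Verify the result
zpool status tank
```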
 
You have corruption on two disks, be careful
Why two? I only see one failed disk. Do you mean the CKSUM column shows 1 for sdf?

Try running `zpool clear tank` to reset the error counters, then run `zpool scrub tank` to confirm that all files are intact.
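The suggested recovery steps as a sketch, assuming the pool is named `tank`:

```shell
# Reset the read/write/checksum error counters on all vdevs
zpool clear tank

# Start a full scrub, which re-reads every block and verifies its checksum
zpool scrub tank

# Check scrub progress; the "scan:" line reports status and any repairs
zpool status tank
```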
I did that, but it did not help. I also ran a scrub; it did not change the pool state. So I tried this instead:

Bash:
zpool labelclear -f /dev/disk/by-id/usb-TOSHIBA_HDWT860_000000123AE8-0:0

and then:

Bash:
zpool replace -f tank usb-TOSHIBA_HDWT860_000000123AE8-0:0 sdd

Now it is resilvering:

Bash:
root@pve:~# zpool status -v tank
  pool: tank
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Jun 23 13:02:02 2023
        2.73T scanned at 5.31G/s, 186G issued at 362M/s, 3.96T total
        37.1G resilvered, 4.59% done, 03:02:23 to go
config:

        NAME                                        STATE     READ WRITE CKSUM
        tank                                        DEGRADED     0     0     0
          raidz2-0                                  DEGRADED     0     0     0
            sdb                                     REMOVED      0     0     0
            sdc                                     ONLINE       0     0     0
            replacing-2                             DEGRADED     0     0     0
              usb-TOSHIBA_HDWT860_000000123AE8-0:0  FAULTED      0     0     0  corrupted data
              sdd                                   ONLINE       0     0     0  (resilvering)
            sde                                     ONLINE       0     0     0
            sdf                                     ONLINE       0     0     0
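A resilver like this can be left to run and monitored until it finishes; a sketch, assuming the pool is named `tank`:

```shell
# Refresh the status output every 30 seconds to watch resilver progress
watch -n 30 zpool status tank

# After the resilver completes, clear any residual error counters
zpool clear tank
```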
 
Oh ... USB ... yes, I also tried that in the past, and ZFS does not work properly on USB disks/sticks. I don't know exactly what triggered the problems, but it was often varying response times that led to checksum errors. I abandoned ZFS over USB, and those disks never failed again.
 
What do you use instead now?
I am using a Yottamaster 5-bay USB Type-C enclosure.
 