ZFS pool disappeared

nfprox

New Member
Oct 31, 2025
Before the reboot, an I/O failure was reported. I attempted to fix the pool with zpool clear <pool>, but the system hung, so I rebooted after about 30 minutes.
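For reference, the relevant commands look roughly like this (<pool> stands in for the real pool name; the status check is the usual way to confirm which device is erroring, and the clear is the command that hung):

zpool status -v <pool>   # show pool health and per-device read/write/checksum error counters
zpool clear <pool>       # reset the error counters and retry; this is where it hung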

I've attached the relevant logs I could find from the time I noticed the I/O failure onwards. Only one drive (sdd) is showing errors.

All drives are still present, as shown in the lsblk screenshot. The pool in question was constructed as (I believe) RAID 5, since all three disks were in use and the total storage capacity was that of two.
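If that layout is right, the pool would have been created with something like the following; the device names are guesses from lsblk and <pool> is a stand-in:

# hypothetical reconstruction of a 3-disk raidz1 ("RAID 5") pool
zpool create <pool> raidz1 /dev/sda /dev/sdb /dev/sdc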

Is there any way for me to revive the zpool?
 

Attachments

  • relevantlogs.txt (2.3 KB)
  • Screenshot 2025-10-30 223442.png (58.6 KB)
In your screenshot the zpool command output is incomplete, but it shows a striped pool without redundancy, so with one errored disk out it's gone.
--> New pool -> restore from backup.
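Before rebuilding you can at least check whether the old pool is still visible for import; this only scans the devices and changes nothing:

zpool import   # with no arguments: scan /dev and list any pools available for import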
 
In your screenshot the zpool command output is incomplete, but it shows a striped pool without redundancy, so with one errored disk out it's gone.
--> New pool -> restore from backup.
Are you referring to the zpool status command? That's showing a mounted external hard drive pool; the unreachable pool does not show up in status at all.
 
Hi. Isn't the purpose of RAID 5 to survive the failure of one drive? Why can't this pool be started (or can it)?
If waltar is correct, it sounds like I misconfigured the array; surviving a single drive failure is exactly what I thought I had achieved with that configuration.
 
Show the zpool status of your 4-disk pool.
1761927808254.png

It looks like there's only one pool recognizable by the system (the external backup). The one I would like to restore doesn't appear in zpool status or zpool list.
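In case it helps, this is roughly what I plan to try next; as far as I understand, both commands are read-only unless the import actually succeeds (the pool name is a placeholder):

zpool import -d /dev/disk/by-id      # scan devices by stable path for pool labels
zpool import -o readonly=on <pool>   # if the pool is listed, attempt a read-only import first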
 
Looking more closely (since I didn't document much while bringing this online years ago), this was a RAID10 array. sda through sdd were used for this pool plus its redundancy. sde is no longer mounted. sdf is the external backup.
 
Looking more closely (since I didn't document much while bringing this online years ago), this was a RAID10 array. sda through sdd were used for this pool plus its redundancy. sde is no longer mounted. sdf is the external backup.
1761930492512.png
The GPT on sdd is not available.
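For diagnosis, these read-only checks on that disk should show what, if anything, is left of the partition table and ZFS signatures (nothing here writes to the disk):

sgdisk -p /dev/sdd         # print the GPT, if one can still be read
blkid /dev/sdd /dev/sdd1   # show any remaining filesystem/ZFS member signatures (sdd1 may no longer exist)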
 
In some ways ZFS, with its various black-box special features like this, is a bit too cumbersome for me to find an effective use case for.
lsblk finds partitions 1 and 9, and ZFS stores four labels per device: two at the beginning and two at the end. When all of them are corrupted, a pool cannot be imported.
How could that happen here? Maybe one of the ZFS enthusiasts can help you further.
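You can check how many of the four labels are still readable on each data partition with zdb; it only reads (adjust the device names to your layout):

zdb -l /dev/sda1   # dump any ZFS labels found on the first data partition
zdb -l /dev/sdb1
zdb -l /dev/sdc1
zdb -l /dev/sdd1
# if zdb cannot unpack any of labels 0-3 on a device, no usable label survived there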
 
In some ways ZFS, with its various black-box special features like this, is a bit too cumbersome for me to find an effective use case for.
lsblk finds partitions 1 and 9, and ZFS stores four labels per device: two at the beginning and two at the end. When all of them are corrupted, a pool cannot be imported.
How could that happen here? Maybe one of the ZFS enthusiasts can help you further.
Thanks for your help so far, waltar!
 
Unfortunately, additional research over the weekend did not bear fruit.

At this stage, my next steps are as follows:

1. Restore the latest backup to a different array of drives (rough sketch below).
2. Hold onto the failed array for additional recovery attempts, should I stumble across any (and probably send it to a data recovery firm in a few months or so).
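For step 1, the rough plan is below; pool, disk, and dataset names are placeholders, and this time the new pool gets real redundancy (striped mirrors, i.e. the RAID10 layout I thought I had before):

# create the new pool from two mirrored pairs
zpool create newtank mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 \
                     mirror /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4
# take a recursive snapshot of the backup dataset and replicate it over
zfs snapshot -r backup/data@restore
zfs send -R backup/data@restore | zfs receive -F newtank/data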


This event also exposed a few flaws in my backup strategy. Fortunately, no immediately important data was lost, but there are still improvements to be made!

Thank you all for your help.