ZFS pool disappeared

nfprox

New Member
Oct 31, 2025
Before the reboot, an I/O failure was reported. I attempted to fix the pool with zpool clear <pool>, but the system hung, so I rebooted after about 30 minutes.
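For reference, a minimal sketch of the non-destructive steps around that attempt, assuming a placeholder pool name of tank:

    # Show the pool's health, vdev layout, and per-device read/write/checksum errors
    zpool status -v tank
    # Ask ZFS to retry and clear logged errors on the pool (this is the command that hung for me)
    zpool clear tank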

I've attached relevant logs that I could find since I noticed the I/O failure. Only one drive (sdd) is showing errors.

All drives are still present, as shown in the lsblk screenshot. The pool in question was (I believe) constructed as RAID 5, since all three disks were in use and the total capacity was that of two disks.
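A sketch of how the drives can be double-checked from the shell, assuming the disks are sda through sdf as in the screenshot (partition numbers below are assumptions):

    # List block devices with size, type, detected filesystem signature, and mountpoint
    lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
    # ZFS data partitions should report TYPE="zfs_member"
    blkid /dev/sda1 /dev/sdd1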

Is there any way for me to revive the zpool?
 

Attachments

  • relevantlogs.txt (2.3 KB)
  • Screenshot 2025-10-30 223442.png (58.6 KB)
As you can see in the log, ZFS cannot access one of the hard drives needed for the RAID 5 pool to start. I hope you have current backups of your VMs, or that this is a non-production environment.
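A rough sketch of how to see what the kernel and the drive itself report about the failing disk (smartctl comes from the smartmontools package and may need to be installed first):

    # Kernel messages mentioning the suspect drive (I/O errors, resets, link problems)
    dmesg | grep -i sdd
    # Drive self-reported health and error counters
    smartctl -a /dev/sdd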
 
In your screenshot the zpool command output is incomplete, but it shows a striped pool without redundancy, so with one errored disk the pool is gone.
--> New pool -> restore from backup.
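If recovery really is off the table, a hedged sketch of the "new pool with redundancy" step, using placeholder pool and device names (zpool create destroys whatever is on those disks, so only run it once the old data has been given up on or restored elsewhere):

    # Three-disk raidz1 pool: usable capacity of roughly two disks, survives one disk failure
    zpool create newtank raidz1 /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 /dev/disk/by-id/DISK3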
 
Are you referring to the zpool status command? That's showing a mounted external hard drive pool; the unreachable pool does not show up in the status output at all.
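If I understand the zpool man pages correctly, zpool status only lists imported pools, so a pool that failed to import should instead show up (with its layout and state) under zpool import. A sketch of what I can try:

    # List pools that are visible but not imported, including their vdev layout and health
    zpool import
    # Optionally scan a specific directory of device nodes
    zpool import -d /dev/disk/by-id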
 
Hi. Isn't the purpose of RAID 5 to survive the failure of one drive? So why can't this pool be started, or can it?
If waltar is correct, it sounds like I misconfigured this when I set it up; surviving a single drive failure was what I thought I had achieved with that configuration.
 
Looking more closely (since I didn't document much while bringing this online years ago), this was a RAID 10 array. sda through sdd were used for this pool plus its redundancy; sde is no longer mounted, and sdf is the external backup.
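If it really was set up as ZFS's RAID 10 equivalent, the layout would have been striped mirrors, something like this sketch with placeholder names (not my exact create command, which I no longer have):

    # Two mirrored pairs striped together: roughly two disks of usable capacity,
    # survives one disk failure per mirror pair
    zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd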
 
sdd's GPT is not available (see attached screenshot).
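A sketch of non-destructive ways to inspect sdd's partition table (sgdisk comes from the gdisk package):

    # Print the GPT as sgdisk sees it; it complains if the primary or backup table is damaged
    sgdisk --print /dev/sdd
    # The same information via parted, shown in sectors
    parted /dev/sdd unit s print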
 
In some ways ZFS, with its various black-box special features like this one, is a bit too cumbersome for me to find an effective use case for.
lsblk finds partitions 1 and 9, and ZFS stores four labels per device: two at the beginning and two at the end. When all of them are corrupted, a pool cannot be imported.
How could that happen here? Maybe one of the ZFS enthusiasts can help you further.
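A minimal sketch of how those labels can be inspected directly, assuming the ZFS data partition is /dev/sdd1:

    # Dump the four ZFS labels (two at the start, two at the end of the device);
    # zdb reports which of them, if any, it can still read
    zdb -l /dev/sdd1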
 
Thanks for your help so far waltar!