ZFS Pool degraded, device unavailable

nigi

Well-Known Member
Jan 1, 2017
Hi,

I'm a beginner with ZFS and need some help.
I've got a pool, which contains 4 disks (originally). Today I've seen the following status:
Code:
root@vhost:~# zpool status -L storage_2
  pool: storage_2
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
    invalid.  Sufficient replicas exist for the pool to continue
    functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: resilvered 249G in 2h4m with 0 errors on Thu Mar 16 17:07:39 2017
config:

    NAME                    STATE     READ WRITE CKSUM
    storage_2               DEGRADED     0     0     0
      raidz1-0              DEGRADED     0     0     0
        sdc                 ONLINE       0     0     0
        sdd                 ONLINE       0     0     0
        sde                 ONLINE       0     0     0
        890498573587979709  UNAVAIL      0     0     0  was /dev/sdg1

errors: No known data errors

Of course, the unavailable device is still powered on and fully OK. But it looks like it is now /dev/sdf, not sdg. sdf is completely empty and doesn't contain any partition.
Nevertheless, I can't replace it.

Code:
root@vhost:~# zpool replace storage_2 /dev/sdg1 sdf
invalid vdev specification
use '-f' to override the following errors:
/dev/sdf1 is part of active pool 'storage_2'
root@vhost:~# zpool replace -f storage_2 /dev/sdg1 sdf
invalid vdev specification
the following errors must be manually repaired:
/dev/sdf1 is part of active pool 'storage_2'
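
For reference, when a spare disk still carries a stale label from the same pool, the usual sequence is to clear the old label and then replace the missing vdev by its GUID. This is a sketch based on the device names and GUID shown in this thread, not something verified on this system:
```shell
# The stale ZFS label on sdf1 is what makes 'zpool replace' refuse.
# Destroy that label (irreversible - be sure sdf is the right disk!):
zpool labelclear -f /dev/sdf1

# Then replace the missing vdev, addressed by the GUID that
# 'zpool status' printed, with the now-blank disk:
zpool replace storage_2 890498573587979709 /dev/sdf
```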

Can anybody tell me, what's wrong?
Thank you!
nigi
 
How are you determining that (now) sdf has no partitions? According to zpool it does, and it is part of storage_2. It might just be that the disk took too long to appear during boot for ZFS to import it. Perhaps a simple reboot will fix things.
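
A quick way to check what ZFS itself sees on that disk, and to re-import the pool using stable device names so a boot-time sdf/sdg shuffle can't recur. A sketch; the paths are assumptions based on this thread:
```shell
# Print any ZFS labels present on the partition; a healthy pool
# member carries four identical labels:
zdb -l /dev/sdf1

# Re-import using persistent /dev/disk/by-id paths instead of
# the unstable sdX names:
zpool export storage_2
zpool import -d /dev/disk/by-id storage_2
```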
 
Hi pabernethy,
I checked with fdisk; the HDD is completely empty. But that may well have happened during some reconstruction, in the same way that /dev/sdg was moved to /dev/sdf.
Is there an easy way to remove the disk from the pool and insert it again?

Kind regards,
nigi
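 
Taking a vdev out and bringing it back can be sketched with zpool offline/online, addressing the device by the GUID from 'zpool status' since its /dev name has changed. Again an unverified sketch for this thread's pool:
```shell
# Take the suspect vdev offline, then bring it back online:
zpool offline storage_2 890498573587979709
zpool online storage_2 890498573587979709

# If the label on the disk is intact, 'zpool status' should show
# it resilvering; if the label is gone, 'zpool replace' is needed
# instead.
```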