ZFS Faulted Drive. Looking for help.

My server was rebooted and one of the ZFS pools did not import automatically. I did so manually, and was greeted with this status:

Code:
root@proxmox04:~# zpool status
  pool: BackupPool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 0 days 06:33:45 with 0 errors on Sun Oct 13 06:57:46 2019
config:

        NAME                                 STATE     READ WRITE CKSUM
        BackupPool                           DEGRADED     0     0     0
          raidz2-0                           DEGRADED     0     0     0
            3565325763100664358              FAULTED      0     0     0  was /dev/sdl1
            ata-WL4000GSA6472E_WOL240296472  ONLINE       0     0     0
            ata-WL4000GSA6472E_WOL240296433  ONLINE       0     0     0
            ata-WL4000GSA6472E_WOL240296453  ONLINE       0     0     0
            ata-WL4000GSA6472E_WOL240296550  ONLINE       0     0     0
            ata-WL4000GSA6472E_WOL240296551  ONLINE       0     0     0
            ata-WL4000GSA6472E_WOL240296528  ONLINE       0     0     0
            ata-WL4000GSA6472E_WOL240296529  ONLINE       0     0     0
            ata-WL4000GSA6472E_WOL240296552  ONLINE       0     0     0
            ata-WL4000GSA6472E_WOL240296452  ONLINE       0     0     0
            ata-WL4000GSA6472E_WOL240296450  ONLINE       0     0     0
            ata-WL4000GSA6472E_WOL240304897  ONLINE       0     0     0

I tracked down the faulted drive to /dev/disk/by-id/ata-WL4000GSA6472E_WOL240296381. After the reboot, the device name "/dev/sdl" was assigned to a different drive in the same pool, which is why I did not try any zpool commands using "sdl" or "sdl1".
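
A quick way to map by-id names to their current sdX assignments (a sketch; the grep pattern is just an example for the drives in this pool):
Code:
ls -l /dev/disk/by-id/ | grep ata-WL4000GSA6472E   # each by-id entry is a symlink to its current ../../sdX node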

I visited http://zfsonlinux.org/msg/ZFS-8000-4J, but the recommendations did not work. Here is everything I have tried:

Code:
>zpool remove BackupPool ata-WL4000GSA6472E_WOL240296381
cannot remove ata-WL4000GSA6472E_WOL240296381: no such device in pool

>zpool add BackupPool spare ata-WL4000GSA6472E_WOL240296381
invalid vdev specification
use '-f' to override the following errors:
/dev/disk/by-id/ata-WL4000GSA6472E_WOL240296381-part1 is part of active pool 'BackupPool'

>zpool add BackupPool spare ata-WL4000GSA6472E_WOL240296381 -f
invalid vdev specification
the following errors must be manually repaired:
/dev/disk/by-id/ata-WL4000GSA6472E_WOL240296381-part1 is part of active pool 'BackupPool'

>zpool remove BackupPool ata-WL4000GSA6472E_WOL240296381-part1
cannot remove ata-WL4000GSA6472E_WOL240296381-part1: no such device in pool

>zpool replace BackupPool ata-WL4000GSA6472E_WOL240296381
invalid vdev specification
use '-f' to override the following errors:
/dev/disk/by-id/ata-WL4000GSA6472E_WOL240296381-part1 is part of active pool 'BackupPool'

>zpool replace BackupPool ata-WL4000GSA6472E_WOL240296381 -f
invalid vdev specification
the following errors must be manually repaired:
/dev/disk/by-id/ata-WL4000GSA6472E_WOL240296381-part1 is part of active pool 'BackupPool'

I have a spare drive arriving by mail on Monday, but if possible, I'd like to get drive WOL240296381 removed, reinserted into the vdev, and rebuilding before Monday. Any ideas?
 
Hi,

With
Code:
zpool status -P
you can see the full device paths.

Does using one of
Code:
zpool remove BackupPool 3565325763100664358
zpool remove BackupPool <full path of faulted dev>
work?
 
In my case "operation not supported on this type of pool"

Code:
root@pve01sc:~# zpool replace zfs3x8TB /dev/disk/by-
by-id/        by-label/     by-partlabel/ by-partuuid/  by-path/      by-uuid/     
root@pve01sc:~# zpool replace zfs3x8TB /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA1EK2GF
invalid vdev specification
use '-f' to override the following errors:
/dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA1EK2GF-part1 is part of active pool 'zfs3x8TB'
root@pve01sc:~# zpool replace zfs3x8TB /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA1EK2GF -f
invalid vdev specification
the following errors must be manually repaired:
/dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA1EK2GF-part1 is part of active pool 'zfs3x8TB'
root@pve01sc:~# zpool replace zfs3x8TB /dev/sde1 -f
invalid vdev specification
the following errors must be manually repaired:
/dev/sde1 is part of active pool 'zfs3x8TB'
root@pve01sc:~# zpool status -P
  pool: zfs3x8TB
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 0 days 06:30:00 with 0 errors on Sun Aug 11 06:54:01 2019
config:

        NAME                     STATE     READ WRITE CKSUM
        zfs3x8TB                 DEGRADED     0     0     0
          raidz1-0               DEGRADED     0     0     0
            /dev/sdd1            ONLINE       0     0     0
            /dev/sdc1            ONLINE       0     0     0
            8182309601696926322  UNAVAIL      0     0     0  was /dev/sdb1

errors: No known data errors
root@pve01sc:~# zpool remove zfs3x8TB 8182309601696926322
cannot remove 8182309601696926322: operation not supported on this type of pool
 
In my case "operation not supported on this type of pool"

Code:
root@pve01sc:~# zpool replace zfs3x8TB /dev/disk/by-
by-id/        by-label/     by-partlabel/ by-partuuid/  by-path/      by-uuid/    
root@pve01sc:~# zpool replace zfs3x8TB /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA1EK2GF
invalid vdev specification
use '-f' to override the following errors:
/dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA1EK2GF-part1 is part of active pool 'zfs3x8TB'
root@pve01sc:~# zpool replace zfs3x8TB /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA1EK2GF -f
invalid vdev specification
the following errors must be manually repaired:
/dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA1EK2GF-part1 is part of active pool 'zfs3x8TB'
root@pve01sc:~# zpool replace zfs3x8TB /dev/sde1 -f
invalid vdev specification
the following errors must be manually repaired:
/dev/sde1 is part of active pool 'zfs3x8TB'
root@pve01sc:~# zpool status -P
  pool: zfs3x8TB
state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 0 days 06:30:00 with 0 errors on Sun Aug 11 06:54:01 2019
config:

        NAME                     STATE     READ WRITE CKSUM
        zfs3x8TB                 DEGRADED     0     0     0
          raidz1-0               DEGRADED     0     0     0
            /dev/sdd1            ONLINE       0     0     0
            /dev/sdc1            ONLINE       0     0     0
            8182309601696926322  UNAVAIL      0     0     0  was /dev/sdb1

errors: No known data errors
root@pve01sc:~# zpool remove zfs3x8TB 8182309601696926322
cannot remove 8182309601696926322: operation not supported on this type of pool

Hi,
this seems to be an issue in ZFS; see the bug report. But there are workarounds. One is to wipe the label of the disk and then add it back to the pool as a new disk. The exact commands are mentioned in the bug report.
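
The bug report has the authoritative commands; as a rough sketch of that label-wipe approach, using the device and GUID from the status output above (labelclear is destructive, so double-check the target first):
Code:
zpool labelclear -f /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA1EK2GF-part1                    # wipe the stale ZFS label
zpool replace zfs3x8TB 8182309601696926322 /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA1EK2GF   # re-add the disk in place of the unavailable member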

Another way, if you can afford a little bit of downtime on the pool, is to export and re-import the pool:
Code:
zpool export zfs3x8TB
zpool import -d /dev/disk/by-id/ zfs3x8TB
With the above import command, the disks should be recognized by their IDs in the future instead of by their /dev/sdX names.
 
Hi Fabian, thanks for your suggestion.
How do I take the pool offline correctly?

Stopping the VMs is fine, but then what?

Are you thinking of changing the Proxmox GUI so pools are built directly by disk IDs?
 
Hi Fabian, thanks for your suggestion.
How do I take the pool offline correctly?

Stopping the VMs is fine, but then what?

Stop everything else that is using this pool. As long as the pool is busy, ZFS doesn't allow the export. Use lsof and fuser if you are unsure which processes are using the pool.
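
If the pool is mounted, a quick way to see who is holding it open (a sketch, assuming the default mountpoint /zfs3x8TB; adjust to your setup):
Code:
fuser -vm /zfs3x8TB     # processes with open files on that filesystem
lsof /zfs3x8TB          # same information via lsof (the path must be the mountpoint)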

If this pool is your root pool, you won't be able to export it, since your system is running on it. Please try doing the other workaround in that case.

Are you thinking of changing the Proxmox GUI so pools are built directly by disk IDs?

The installer currently doesn't do this; there is a long-standing open feature request. Our import command for other pools should already use /dev/disk/by-id.
 
Thanks for your advice and remarks.
I started using ZFS many years ago with Nas4free, and I know it is a full-featured, very interesting project. Years of reading ZFS-related forums have also taught me that when the zpool is the boot device, it can become a full-featured nightmare.

So, the zpool is NOT my boot device.

Well, apart from the VMs, there are no other scripts or scheduled jobs that use the ZFS pool.
Does Proxmox itself run any processes on it, given that the zpool is registered as available storage?

If you'll allow a bit of off-topic: has ZFS reached the point where a pool can be grown by simply adding a disk to a RAIDZ1 vdev (as legacy RAID can)? I read about it as a planned feature.

Thanks.
 
Hi there, an update.

Stopping the VMs was enough to let me export the pool.
But somehow it got imported again on its own:

Code:
zpool export zfs3x8TB
root@pve01sc:~# zpool status
  pool: zfs3x8TB
 state: DEGRADED
  scan: resilvered 6.97G in 1 days 15:50:08 with 0 errors on Fri Feb 21 15:32:04 2020
config:

        NAME                        STATE     READ WRITE CKSUM
        zfs3x8TB                    DEGRADED     0     0     0
          raidz1-0                  DEGRADED     0     0     0
            wwn-0x5000c500b4fb3a02  ONLINE       0     0     0
            wwn-0x5000c500b520535b  ONLINE       0     0     0
            replacing-2             UNAVAIL      0     0     0  insufficient replicas
              8182309601696926322   OFFLINE      0     0     0  was /dev/disk/by-id/wwn-0x5000c500b4a947ff-part1
              1261038350977129077   UNAVAIL      0     0     0  was /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA18ZH53-part1

errors: No known data errors
root@pve01sc:~# zpool import -d /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA1EVYGN  zfs3x8TB
cannot import 'zfs3x8TB': a pool with that name already exists
use the form 'zpool import <pool | id> <newpool>' to give it a new name

As you can see, I tried to resilver with a new drive, but there was a two-second power glitch during Thursday night.
Code:
Broadcast message from root@pve01sc (somewhere) (Thu Feb 20 01:45:04 2020):

Power failure on UPS pve01sc. Running on batteries.


Broadcast message from root@pve01sc (somewhere) (Thu Feb 20 01:45:06 2020):

Power has returned on UPS pve01sc...

The resilvering had not yet completed at that point; it finished on Friday, but the APC Smart-UPS was not enough to avoid a bad outcome.
I'm really getting fed up with ZFS.

On some forum I read about the "zpool replace backup" command; could it help?
What would be the quickest and most definitive solution, in your opinion?


Thanks for your support.
 
If you'll allow a bit of off-topic: has ZFS reached the point where a pool can be grown by simply adding a disk to a RAIDZ1 vdev (as legacy RAID can)? I read about it as a planned feature.

Seems like it's still being worked on; here is the relevant pull request.

Hi there, an update.

Stopping the VMs was enough to let me export the pool.
But somehow it got imported again on its own:

Code:
zpool export zfs3x8TB
root@pve01sc:~# zpool status
  pool: zfs3x8TB
state: DEGRADED
  scan: resilvered 6.97G in 1 days 15:50:08 with 0 errors on Fri Feb 21 15:32:04 2020
config:

        NAME                        STATE     READ WRITE CKSUM
        zfs3x8TB                    DEGRADED     0     0     0
          raidz1-0                  DEGRADED     0     0     0
            wwn-0x5000c500b4fb3a02  ONLINE       0     0     0
            wwn-0x5000c500b520535b  ONLINE       0     0     0
            replacing-2             UNAVAIL      0     0     0  insufficient replicas
              8182309601696926322   OFFLINE      0     0     0  was /dev/disk/by-id/wwn-0x5000c500b4a947ff-part1
              1261038350977129077   UNAVAIL      0     0     0  was /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA18ZH53-part1

errors: No known data errors

Is this still the current status of the pool? What is the situation with the disks /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA18ZH53-part1 and /dev/disk/by-id/wwn-0x5000c500b4a947ff-part1? Are they not available anymore?
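
A quick way to check whether those two disks are still visible to the system (a sketch; smartctl comes from the smartmontools package and may not be installed):
Code:
ls -l /dev/disk/by-id/ | grep -e ZA18ZH53 -e 0x5000c500b4a947ff   # present only if the kernel still sees them
smartctl -H /dev/disk/by-id/wwn-0x5000c500b4a947ff                 # overall SMART health, if the disk responds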

Code:
root@pve01sc:~# zpool import -d /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA1EVYGN zfs3x8TB
cannot import 'zfs3x8TB': a pool with that name already exists
use the form 'zpool import <pool | id> <newpool>' to give it a new name

What does the disk /dev/disk/by-id/ata-ST8000VN0022-2EL112_ZA1EVYGN have to do with the pool?

If the zpool is configured as a PVE storage, it will be imported periodically. You'll have to disable the storage in PVE first with
Code:
pvesm set <name of storage> --disable 1
then do export/import and re-enable the storage again with
Code:
pvesm set <name of storage> --disable 0

Also, I think that when you use -d with a full path you have to specify the partition as well, and if I remember correctly, you'll need to add one -d <path> per disk in the pool.
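
Putting the pieces together, a minimal sketch of the whole sequence, using the directory form of -d that was already shown above (replace <name of storage> with your PVE storage ID):
Code:
pvesm set <name of storage> --disable 1      # keep PVE from re-importing the pool automatically
zpool export zfs3x8TB
zpool import -d /dev/disk/by-id/ zfs3x8TB    # re-import using stable by-id device names
pvesm set <name of storage> --disable 0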

As you can see, I tried to resilver with a new drive, but there was a two-second power glitch during Thursday night.
Code:
Broadcast message from root@pve01sc (somewhere) (Thu Feb 20 01:45:04 2020):

Power failure on UPS pve01sc. Running on batteries.


Broadcast message from root@pve01sc (somewhere) (Thu Feb 20 01:45:06 2020):

Power has returned on UPS pve01sc...

The resilvering had not yet completed at that point; it finished on Friday, but the APC Smart-UPS was not enough to avoid a bad outcome.
I'm really getting fed up with ZFS.

On some forum I read about the "zpool replace backup" command; could it help?
What would be the quickest and most definitive solution, in your opinion?


Thanks for your support.
 
If I see this correctly, you have a raidz1 and one of the disks is gone and needs to be replaced.
In that case, `zpool replace <poolname> <olddevice> <new_device>` should be what you're looking for.

However, read the man page of `zpool` carefully (`man zpool`). As far as I remember, you also need to provide the top-level vdev where you're performing the replace ('raidz1-0') before 'olddevice'.
Also, be careful with the '-f' switches!
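
A hedged sketch of what that might look like for the pool in this thread (the GUID is the member shown as OFFLINE in the earlier status output; the new-disk id is a placeholder to fill in):
Code:
zpool status -P zfs3x8TB                     # note the GUID / path of the member to replace
zpool replace zfs3x8TB 8182309601696926322 /dev/disk/by-id/<id-of-the-new-disk>
zpool status -v zfs3x8TB                     # the new disk should appear under 'replacing' and start resilvering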
 
Thanks for your suggestion. At the time of my last message I ended up putting in another 8TB HDD and starting a resilver that went on for a week!
Now everything is OK and my pool is made of devices by ID.
Why is resilvering such a slow process on a modern Xeon machine (which is not even very busy)?
 
Why is resilvering such a slow process on a modern Xeon machine (which is not even very busy)?
It depends on quite a few factors (which disks, how fast I/O is in general, how full the pool is, ...),
so it's hard to give a definitive pointer to the cause.
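
If you want to see where the time goes, the standard zpool tooling can show the resilver's progress and per-disk throughput (shown here as a sketch):
Code:
zpool iostat -v zfs3x8TB 5    # per-vdev read/write bandwidth, refreshed every 5 seconds
zpool status zfs3x8TB         # shows resilver progress while it is running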
 
