HDD in mirrored RAID died, ZFS pool gone

andre85

Good morning.
I'm having quite a big problem.
One of the HDDs in my 2-bay mirrored RAID finally died tonight.
My refurbished HDD had been making noise for quite a while, but I felt kind of safe, since I'd already been through this with another refurbished HDD and the non-refurbished HDD took over just fine.

Not this time :(
I've now removed the HDD, and with its removal the ZFS pool seems to be gone.
When I type zpool status, I get "no pools available".

Is there any way to recover the ZFS pool? I assume the data is still on my working HDD.
If I can't recover the ZFS pool, would there be any way to recover my data, at least partly?

Thank you for any help!

André

PS: I will never buy a refurbished HDD again... two failures in three years... My non-refurbished HDD (same model - Seagate IronWolf Pro) has been running without issues for the past 8 years...
 
I don't think so. With the failed HDD installed, the system didn't even recognize it anymore, whereas the HDD that is still installed also shows up under Disks in Proxmox.

I've also tried zpool import -f nas and zpool import -f -m nas; both return "cannot import 'nas': one or more devices is currently unavailable".
Is there any other way to import the ZFS pool with just one HDD? :(
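One avenue not tried above is a read-only import, which lets you copy the data off without writing anything to the surviving disk. A minimal sketch, assuming the pool is named nas (readonly=on is a suggestion, not something from this thread):
Code:
# Import read-only: skips intent-log replay and writes nothing to the disk
zpool import -o readonly=on -f nas
If this succeeds, copy the data off before attempting any further repair.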
 
Can you see the device under lsblk -o name,uuid,fstype,mountpoint,label,size?
Please show the output of that, and of ls -lA /dev/disk/by-id/.
 
Yes, I can see it in both cases; it's sda:
Code:
root@homeserver:~# ls -lA /dev/disk/by-id/
total 0
lrwxrwxrwx 1 root root  9 Feb 26 08:44 ata-ST12000VN0008-2YS101_ZV7033ZS -> ../../sda
lrwxrwxrwx 1 root root 10 Feb 26 08:44 ata-ST12000VN0008-2YS101_ZV7033ZS-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Feb 26 08:44 ata-ST12000VN0008-2YS101_ZV7033ZS-part9 -> ../../sda9
lrwxrwxrwx 1 root root 10 Feb 26 08:44 dm-name-pve-root -> ../../dm-1
lrwxrwxrwx 1 root root 10 Feb 26 08:44 dm-name-pve-swap -> ../../dm-0
lrwxrwxrwx 1 root root 10 Feb 26 08:44 dm-name-pve-vm--100--disk--0 -> ../../dm-8
lrwxrwxrwx 1 root root 10 Feb 26 08:44 dm-name-pve-vm--100--disk--1 -> ../../dm-9
lrwxrwxrwx 1 root root 10 Feb 26 08:44 dm-name-pve-vm--200--disk--0 -> ../../dm-6
lrwxrwxrwx 1 root root 11 Feb 26 08:44 dm-name-pve-vm--200--disk--3 -> ../../dm-10
lrwxrwxrwx 1 root root 11 Feb 26 08:44 dm-name-pve-vm--201--disk--0 -> ../../dm-16
lrwxrwxrwx 1 root root 11 Feb 26 08:44 dm-name-pve-vm--201--disk--1 -> ../../dm-17
lrwxrwxrwx 1 root root 10 Feb 26 08:44 dm-name-pve-vm--210--disk--0 -> ../../dm-7
lrwxrwxrwx 1 root root 11 Feb 26 08:44 dm-name-pve-vm--211--disk--0 -> ../../dm-12
lrwxrwxrwx 1 root root 11 Feb 26 08:44 dm-name-pve-vm--213--disk--0 -> ../../dm-14
lrwxrwxrwx 1 root root 11 Feb 26 08:44 dm-name-pve-vm--250--disk--0 -> ../../dm-18
lrwxrwxrwx 1 root root 11 Feb 26 08:44 dm-name-pve-vm--251--disk--0 -> ../../dm-13
lrwxrwxrwx 1 root root 11 Feb 26 08:44 dm-name-pve-vm--252--disk--0 -> ../../dm-11
lrwxrwxrwx 1 root root 11 Feb 26 08:44 dm-name-pve-vm--254--disk--0 -> ../../dm-15
lrwxrwxrwx 1 root root 10 Feb 26 08:44 dm-uuid-LVM-Uv5YWJtdEy14jIaoZ2oCpIGg6iqYw2la2hPT0OQaxiijZGtHVTD8SEXdptuhfVmN -> ../../dm-8
lrwxrwxrwx 1 root root 11 Feb 26 08:44 dm-uuid-LVM-Uv5YWJtdEy14jIaoZ2oCpIGg6iqYw2laA1JM5HIteqNhxQBGCqJFusGW2ithVmqN -> ../../dm-10
lrwxrwxrwx 1 root root 10 Feb 26 08:44 dm-uuid-LVM-Uv5YWJtdEy14jIaoZ2oCpIGg6iqYw2labRvsedxlkCyEo3ehC8UlnZD3BknNJN74 -> ../../dm-1
lrwxrwxrwx 1 root root 11 Feb 26 08:44 dm-uuid-LVM-Uv5YWJtdEy14jIaoZ2oCpIGg6iqYw2laCpfDW6limQd34fikKlJnDmboBHgz8xyM -> ../../dm-15
lrwxrwxrwx 1 root root 11 Feb 26 08:44 dm-uuid-LVM-Uv5YWJtdEy14jIaoZ2oCpIGg6iqYw2laft0nMlqd3r0WtJCDMMnJF9aPUtO1NmJf -> ../../dm-16
lrwxrwxrwx 1 root root 11 Feb 26 08:44 dm-uuid-LVM-Uv5YWJtdEy14jIaoZ2oCpIGg6iqYw2lagq33TK0TfgwILqPD7zH1djyAKxev3oBc -> ../../dm-11
lrwxrwxrwx 1 root root 11 Feb 26 08:44 dm-uuid-LVM-Uv5YWJtdEy14jIaoZ2oCpIGg6iqYw2laIzNs4djrcKeqs0BN4SJEvZtoaIFEdv0D -> ../../dm-13
lrwxrwxrwx 1 root root 11 Feb 26 08:44 dm-uuid-LVM-Uv5YWJtdEy14jIaoZ2oCpIGg6iqYw2lajZZCWoFz5d2D6KDPbQnlxlI0Ycb1J7sD -> ../../dm-14
lrwxrwxrwx 1 root root 10 Feb 26 08:44 dm-uuid-LVM-Uv5YWJtdEy14jIaoZ2oCpIGg6iqYw2lam28xfiUBK384RYiOUemElmUsVPDqg7th -> ../../dm-6
lrwxrwxrwx 1 root root 10 Feb 26 08:44 dm-uuid-LVM-Uv5YWJtdEy14jIaoZ2oCpIGg6iqYw2lanSHBVJUF1z7XyADBe3IF2JS9sJX6shjT -> ../../dm-7
lrwxrwxrwx 1 root root 10 Feb 26 08:44 dm-uuid-LVM-Uv5YWJtdEy14jIaoZ2oCpIGg6iqYw2lanXJKC3cWS9u8QWYux2NPeTgcs7fvNdM3 -> ../../dm-9
lrwxrwxrwx 1 root root 11 Feb 26 08:44 dm-uuid-LVM-Uv5YWJtdEy14jIaoZ2oCpIGg6iqYw2lap4Fku31eWsTfocsskBgzNUGQFA5sslVZ -> ../../dm-18
lrwxrwxrwx 1 root root 11 Feb 26 08:44 dm-uuid-LVM-Uv5YWJtdEy14jIaoZ2oCpIGg6iqYw2laPjSKfJdmwfCwTPj8Bzhb8vY5MtpLDuVM -> ../../dm-12
lrwxrwxrwx 1 root root 10 Feb 26 08:44 dm-uuid-LVM-Uv5YWJtdEy14jIaoZ2oCpIGg6iqYw2lappIWoIr9z3hWhWDftLNfcO5QLL7YrrPC -> ../../dm-0
lrwxrwxrwx 1 root root 11 Feb 26 08:44 dm-uuid-LVM-Uv5YWJtdEy14jIaoZ2oCpIGg6iqYw2larrCD1EbKRn584BkcqYcYZTr8Skz19neo -> ../../dm-17
lrwxrwxrwx 1 root root 15 Feb 26 08:44 lvm-pv-uuid-uH2ftR-8ITb-keP5-Wu3M-FJqs-iH1a-4JsgJG -> ../../nvme0n1p3
lrwxrwxrwx 1 root root 13 Feb 26 08:44 nvme-CT1000P3SSD8_2319E6D37952 -> ../../nvme0n1
lrwxrwxrwx 1 root root 13 Feb 26 08:44 nvme-CT1000P3SSD8_2319E6D37952_1 -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Feb 26 08:44 nvme-CT1000P3SSD8_2319E6D37952_1-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Feb 26 08:44 nvme-CT1000P3SSD8_2319E6D37952_1-part2 -> ../../nvme0n1p2
lrwxrwxrwx 1 root root 15 Feb 26 08:44 nvme-CT1000P3SSD8_2319E6D37952_1-part3 -> ../../nvme0n1p3
lrwxrwxrwx 1 root root 15 Feb 26 08:44 nvme-CT1000P3SSD8_2319E6D37952-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Feb 26 08:44 nvme-CT1000P3SSD8_2319E6D37952-part2 -> ../../nvme0n1p2
lrwxrwxrwx 1 root root 15 Feb 26 08:44 nvme-CT1000P3SSD8_2319E6D37952-part3 -> ../../nvme0n1p3
lrwxrwxrwx 1 root root 13 Feb 26 08:44 nvme-nvme.c0a9-323331394536443337393532-435431303030503353534438-00000001 -> ../../nvme0n1
lrwxrwxrwx 1 root root 15 Feb 26 08:44 nvme-nvme.c0a9-323331394536443337393532-435431303030503353534438-00000001-part1 -> ../../nvme0n1p1
lrwxrwxrwx 1 root root 15 Feb 26 08:44 nvme-nvme.c0a9-323331394536443337393532-435431303030503353534438-00000001-part2 -> ../../nvme0n1p2
lrwxrwxrwx 1 root root 15 Feb 26 08:44 nvme-nvme.c0a9-323331394536443337393532-435431303030503353534438-00000001-part3 -> ../../nvme0n1p3
lrwxrwxrwx 1 root root  9 Feb 26 08:44 usb-Micron_CT1000X6SSD9_2401E48DAAB8-0:0 -> ../../sdb
lrwxrwxrwx 1 root root 10 Feb 26 08:44 usb-Micron_CT1000X6SSD9_2401E48DAAB8-0:0-part1 -> ../../sdb1
lrwxrwxrwx 1 root root  9 Feb 26 08:44 wwn-0x5000c500dcf4d7e8 -> ../../sda
lrwxrwxrwx 1 root root 10 Feb 26 08:44 wwn-0x5000c500dcf4d7e8-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Feb 26 08:44 wwn-0x5000c500dcf4d7e8-part9 -> ../../sda9
root@homeserver:~# lsblk -o name,uuid,fstype,mountpoint,label,size
NAME                         UUID                                   FSTYPE      MOUNTPOINT          LABEL     SIZE
sda                                                                                                          10.9T
├─sda1                       7945476069743355628                    zfs_member                      nas      10.9T
└─sda9                                                                                                          8M
sdb                                                                                                         931.5G
└─sdb1                       605F-3111                              exfat       /mnt/ext_ssd_backup ext_SSD 931.5G
nvme0n1                                                                                                     931.5G
├─nvme0n1p1                                                                                                  1007K
├─nvme0n1p2                  31BA-147E                              vfat        /boot/efi                       1G
└─nvme0n1p3                  uH2ftR-8ITb-keP5-Wu3M-FJqs-iH1a-4JsgJG LVM2_member                             930.5G
  ├─pve-swap                 fde5487b-14c8-4b31-8494-0a4e42a39b30   swap        [SWAP]                          8G
  ├─pve-root                 1b9ba669-4ae3-4cd4-a27a-f55c6d0d02da   ext4        /                              96G
  ├─pve-data_tmeta                                                                                            8.1G
  │ └─pve-data-tpool                                                                                        794.3G
  │   ├─pve-data                                                                                            794.3G
  │   ├─pve-vm--200--disk--0 74574c4c-6e58-4a40-b2eb-e136427602f3   ext4                                        8G
  │   ├─pve-vm--210--disk--0 7c74d54a-334d-4060-a089-993899582d69   ext4                                        8G
  │   ├─pve-vm--100--disk--0                                                                                    4M
  │   ├─pve-vm--100--disk--1                                                                                   32G
  │   ├─pve-vm--200--disk--3 1514e9a1-784e-4089-8b09-44c3f0dc97e0   ext4                                        1G
  │   ├─pve-vm--252--disk--0 26d84377-0380-404d-b8d6-7209314c9f4e   ext4                                       15G
  │   ├─pve-vm--211--disk--0 1b9d9462-ac7e-42d9-8eaa-4ff0e7cf6304   ext4                                       20G
  │   ├─pve-vm--251--disk--0 34177da7-dd5d-4754-b8bf-c7b34cca6138   ext4                                       20G
  │   ├─pve-vm--213--disk--0 593ae9de-0471-4931-8df1-02319723631c   ext4                                       10G
  │   ├─pve-vm--254--disk--0 212ceb74-844f-48ae-b5fe-5fcc65aa9acc   ext4                                       10G
  │   ├─pve-vm--201--disk--0 53c43093-18f2-4d78-9d0c-741f9c73964d   ext4                                       20G
  │   ├─pve-vm--201--disk--1 2a16c561-a758-4a08-8940-82c44f1ee188   ext4                                       20G
  │   └─pve-vm--250--disk--0 d29754a6-e4de-4ebd-9284-a2388c57ab54   ext4                                       15G
  └─pve-data_tdata                                                                                          794.3G
    └─pve-data-tpool                                                                                        794.3G
      ├─pve-data                                                                                            794.3G
      ├─pve-vm--200--disk--0 74574c4c-6e58-4a40-b2eb-e136427602f3   ext4                                        8G
      ├─pve-vm--210--disk--0 7c74d54a-334d-4060-a089-993899582d69   ext4                                        8G
      ├─pve-vm--100--disk--0                                                                                    4M
      ├─pve-vm--100--disk--1                                                                                   32G
      ├─pve-vm--200--disk--3 1514e9a1-784e-4089-8b09-44c3f0dc97e0   ext4                                        1G
      ├─pve-vm--252--disk--0 26d84377-0380-404d-b8d6-7209314c9f4e   ext4                                       15G
      ├─pve-vm--211--disk--0 1b9d9462-ac7e-42d9-8eaa-4ff0e7cf6304   ext4                                       20G
      ├─pve-vm--251--disk--0 34177da7-dd5d-4754-b8bf-c7b34cca6138   ext4                                       20G
      ├─pve-vm--213--disk--0 593ae9de-0471-4931-8df1-02319723631c   ext4                                       10G
      ├─pve-vm--254--disk--0 212ceb74-844f-48ae-b5fe-5fcc65aa9acc   ext4                                       10G
      ├─pve-vm--201--disk--0 53c43093-18f2-4d78-9d0c-741f9c73964d   ext4                                       20G
      ├─pve-vm--201--disk--1 2a16c561-a758-4a08-8940-82c44f1ee188   ext4                                       20G
      └─pve-vm--250--disk--0 d29754a6-e4de-4ebd-9284-a2388c57ab54   ext4                                       15G
root@homeserver:~#

Maybe some additional info:
I've hooked up the "faulty HDD" to my Windows PC, installed OpenZFS, and tried the same thing. The HDD stopped making the weird noise. When I try to import using zpool import -f nas, I get the same error. I might just put it back into the Proxmox server and see if that works :o
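Since lsblk shows sda1 as a zfs_member labelled nas, it may also be worth verifying that the surviving disk's ZFS labels are intact before going further. A sketch, using the by-id path from the listing above:
Code:
# Dump the ZFS label on the data partition; a healthy pool member
# should print the pool name, GUIDs and the vdev tree
zdb -l /dev/disk/by-id/ata-ST12000VN0008-2YS101_ZV7033ZS-part1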
 
zpool status and zpool list -v both return "no pools available".
zfs list -r nas returns "cannot open 'nas': dataset does not exist".

I've tried it on both HDDs - the "faulty" one under Windows and the "good" one in Proxmox.
Both return the same errors.
 
I've put the faulty HDD back into Proxmox now. It's making its weird noise again, but it shows up under Disks this time, and when I run zpool import I get this output:

Code:
root@homeserver:~# zpool import
   pool: nas
     id: 7945476069743355628
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        nas                                    ONLINE
          mirror-0                             ONLINE
            ata-ST12000VN0008-2YS101_ZV7033ZS  ONLINE
            ata-ST12000NE0007-2GT116_ZJV2EYB4  ONLINE
          indirect-1                           ONLINE

When I try to run zpool import nas, however, it gives me the error "cannot import 'nas': no such pool available".

If I run zpool import -f -m nas, I get the error "cannot import 'nas': one or more devices is currently unavailable".

Running zfs list -r nas gives the "dataset does not exist" error again.
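When zpool import lists the pool as ONLINE yet the actual import fails, one recovery technique sometimes worth trying (not suggested in this thread, so treat it as an assumption) is a rewind import, combined with read-only mode to keep the attempt non-destructive:
Code:
# -F rolls the pool back a few transactions if the newest ones are damaged;
# readonly=on keeps the attempt non-destructive
zpool import -f -F -o readonly=on nas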
 
I've now removed the "healthy" HDD and tried accessing it on my Windows PC.
zpool import returns this:
Code:
  pool: nas
    id: 7945476069743355628
 state: DEGRADED
status: The pool was last accessed by another system.
action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
config:

        nas                                          DEGRADED
          mirror-0                                   DEGRADED
            physicaldrive2                           ONLINE
            ata-ST12000NE0007-2GT116_ZJV2EYB4-part1  UNAVAIL  cannot open
          indirect-1                                 ONLINE

However, when I try to import it using zpool import -f nas, it also gives the error that one or more devices are currently unavailable (even though the status says the pool could be imported despite missing or damaged devices).
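The action text says the pool can be imported by name or numeric identifier, so importing by the pool ID shown in the output is another option:
Code:
# Import by numeric pool ID instead of the name
zpool import -f 7945476069743355628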
 
Can you try this:
Code:
zpool import -d /dev/disk/by-id/ata-ST12000VN0008-2YS101_ZV7033ZS nas
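(With OpenZFS, -d accepts either a directory to scan or a specific device path, so this restricts the probe to the surviving disk's by-id link.)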
 
Thanks, I tried that, but it also gives the error "no such pool available".
What's also odd is that sometimes when I run zpool import, it won't show the nas pool at all.
I have a feeling that both HDDs have probably broken down.
Code:
root@homeserver:~# zpool import -d /dev/disk/by-id/ata-ST12000VN0008-2YS101_ZV7033ZS nas
cannot import 'nas': no such pool available
root@homeserver:~# zpool import
no pools available to import
root@homeserver:~# zpool import
no pools available to import
root@homeserver:~# zpool import
no pools available to import
root@homeserver:~# zpool import
   pool: nas
     id: 7945476069743355628
  state: DEGRADED
status: One or more devices contains corrupted data.
 action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
 config:

        nas                                    DEGRADED
          mirror-0                             DEGRADED
            ata-ST12000VN0008-2YS101_ZV7033ZS  ONLINE
            ata-ST12000NE0007-2GT116_ZJV2EYB4  UNAVAIL
          indirect-1                           ONLINE
root@homeserver:~# zpool import -d /dev/disk/by-id/ata-ST12000VN0008-2YS101_ZV7033ZS nas
cannot import 'nas': no such pool available
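A pool that appears in one zpool import run and vanishes in the next usually means the device itself is dropping off the bus rather than ZFS misbehaving. A quick check, assuming the disk is still sda:
Code:
# Look for ATA link resets, read errors, or the device detaching
dmesg | grep -iE 'ata[0-9]|sda'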
 
Probably a broken SATA cable or connector?
Edit: It could be a memory problem too. A memtest can't hurt.
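Before swapping more hardware, the drive's SMART data can help separate a dying disk from a bad cable. A sketch, assuming smartmontools is installed:
Code:
# Reallocated/pending sectors point at the disk itself;
# a rising UDMA_CRC_Error_Count points at the cable or connector
smartctl -a /dev/sda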
 
That's something I'd also thought of, so I've replaced both SATA cables. Still the same issue :(
I think I'll just give up. Luckily it wasn't terribly important data stored on that NAS.
 
Last try:
Code:
zpool import -a -f -d /dev/disk/by-id/ata-ST12000VN0008-2YS101_ZV7033ZS
The same command, but with -a (import all pools found) and without the pool name.
 
andre85 said:
I've put the faulty HDD back into Proxmox now. [...] when I run zpool import I get this output: [the same listing as above, showing pool nas as ONLINE]
If you run zpool import and get this output, your pool should now be importable.
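For the record, listing a pool with plain zpool import does not by itself import it; to confirm whether the import actually happened:
Code:
# An imported pool shows up here; "no pools available" means it did not import
zpool status
zpool list -v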