[SOLVED] ZFS pool restore

Serhioromano

Member
Jun 12, 2023
I had the sde disk fail with too many errors. I ran the clear and then replace commands, and it started replacing the same disk. I thought the problem was the USB connection, but during resilvering I got too many errors again, so I think it is the disk itself. I have now swapped in a new disk and want to start the replace again, but I do not understand how to run replace from here?
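For reference, the clear/replace sequence described above would typically look something like this. This is only a sketch: it assumes the pool is named tank, the failing device is /dev/sde, and the GUID is the one shown for the old vdev in the zpool status output further down; adjust to your own setup before running anything.

Code:
# clear error counters on the pool
zpool clear tank

# wipe the stale ZFS label from the replacement disk (destructive!)
zpool labelclear -f /dev/sde

# replace the old vdev (referenced by its GUID) with the new disk
zpool replace tank 9903000246701478573 /dev/sde

# watch resilver progress
zpool status -v tank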

Code:
root@pve:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0                          7:0    0     8G  0 loop
sda                            8:0    0 238.5G  0 disk
└─sda1                         8:1    0 238.5G  0 part /mnt/SSD
sdb                            8:16   0   5.5T  0 disk
├─sdb1                         8:17   0   5.5T  0 part
└─sdb9                         8:25   0     8M  0 part
sdc                            8:32   0   5.5T  0 disk
├─sdc1                         8:33   0   5.5T  0 part
└─sdc9                         8:41   0     8M  0 part
sdd                            8:48   0   5.5T  0 disk
├─sdd1                         8:49   0   5.5T  0 part
└─sdd9                         8:57   0     8M  0 part
sde                            8:64   0   5.5T  0 disk
sdf                            8:80   0   5.5T  0 disk
├─sdf1                         8:81   0   5.5T  0 part
└─sdf9                         8:89   0     8M  0 part
nvme0n1                      259:0    0 953.9G  0 disk
├─nvme0n1p1                  259:1    0  1007K  0 part
├─nvme0n1p2                  259:2    0     1G  0 part /boot/efi
└─nvme0n1p3                  259:3    0 952.9G  0 part
  ├─pve-swap                 252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           252:2    0   8.3G  0 lvm
  │ └─pve-data-tpool         252:4    0 816.2G  0 lvm
  │   ├─pve-data             252:5    0 816.2G  1 lvm
  │   ├─pve-vm--123--disk--0 252:6    0     4M  0 lvm
  │   ├─pve-vm--123--disk--1 252:7    0    32G  0 lvm
  │   ├─pve-vm--102--disk--0 252:8    0     4M  0 lvm
  │   ├─pve-vm--102--disk--1 252:9    0   240G  0 lvm
  │   ├─pve-vm--102--disk--2 252:10   0     4M  0 lvm
  │   ├─pve-vm--106--disk--0 252:11   0     8G  0 lvm
  │   └─pve-vm--103--disk--0 252:12   0     8G  0 lvm
  └─pve-data_tdata           252:3    0 816.2G  0 lvm
    └─pve-data-tpool         252:4    0 816.2G  0 lvm
      ├─pve-data             252:5    0 816.2G  1 lvm
      ├─pve-vm--123--disk--0 252:6    0     4M  0 lvm
      ├─pve-vm--123--disk--1 252:7    0    32G  0 lvm
      ├─pve-vm--102--disk--0 252:8    0     4M  0 lvm
      ├─pve-vm--102--disk--1 252:9    0   240G  0 lvm
      ├─pve-vm--102--disk--2 252:10   0     4M  0 lvm
      ├─pve-vm--106--disk--0 252:11   0     8G  0 lvm
      └─pve-vm--103--disk--0 252:12   0     8G  0 lvm

root@pve:~# zpool status -v tank
  pool: tank
 state: DEGRADED
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: resilvered 80.8G in 01:12:33 with 0 errors on Mon Feb  5 20:14:40 2024
config:

        NAME                       STATE     READ WRITE CKSUM
        tank                       DEGRADED     0     0     0
          raidz2-0                 DEGRADED     0     0     0
            sdb                    ONLINE       0     0     0
            sdc                    ONLINE       0     0     0
            sdd                    ONLINE       0     0     0
            replacing-3            UNAVAIL      0     0     0  insufficient replicas
              9903000246701478573  UNAVAIL      0     0     0  was /dev/sde1/old
              sde                  REMOVED      0     0     0
            sdf                    ONLINE       0     0     0

errors: No known data errors