A hard drive in my ZFS RAID10 failed and I replaced it directly; now I have the following problem. How do I solve it?

luren549

New Member
Sep 5, 2022
A hard drive in my ZFS RAID10 pool went bad and I replaced it directly. Now zpool status shows the following; how do I fix it?

Code:
 pool: rpool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 02:50:28 with 0 errors on Sun Jul  9 03:14:29 2023
config:

        NAME                                         STATE     READ WRITE CKSUM
        rpool                                        DEGRADED     0     0     0
          mirror-0                                   DEGRADED     0     0     0
            ata-HGST_HUS726020ALA610_K5HP5S2G-part3  ONLINE       0     0     0
            3990747831446758763                      UNAVAIL      0     0     0  was /dev/disk/by-id/ata-HGST_HUS726020ALA610_N4G3Y0GY-part3
          mirror-1                                   ONLINE       0     0     0
            ata-HGST_HUS726020ALA610_K5HTJ9PG-part3  ONLINE       0     0     0
            ata-HGST_HUS726020ALA610_K5G65TPA-part3  ONLINE       0     0     0

errors: No known data errors
 
After physically swapping the hard drive, did you also run zpool replace to replace the disk in the rpool?
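
If not, first identify the by-id name the new disk received and check the current pool state. A minimal sketch (the grep pattern is just an assumption based on your existing drives; adjust it to your hardware):
Code:
# list stable device names to find the newly installed disk
ls -l /dev/disk/by-id/ | grep ata-
# show the current pool layout and the GUID of the missing device
zpool status rpool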

Since this seems to be a root device, please consult the relevant section of our documentation on replacing bootable devices [1]. Make sure that it actually applies in your case and then follow the instructions in the wiki carefully.

Always make sure to have a backup ready in case something goes wrong.

[1] https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysadmin_zfs_change_failed_dev
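
For reference, the procedure in [1] boils down to: copy the partition layout from a healthy mirror member to the new disk, let ZFS replace the missing member, and make the new disk bootable again. A rough sketch, assuming the new disk is /dev/sdX, a healthy bootable member is /dev/sdY, and partition 2 is the ESP as in the default Proxmox layout (verify every device name against your own system before running anything):
Code:
# copy the partition table from the healthy mirror member to the new disk
sgdisk /dev/sdY -R /dev/sdX
# give the copied partitions new random GUIDs
sgdisk -G /dev/sdX
# replace the missing ZFS member (partition 3) with the new disk's partition 3
zpool replace -f rpool 3990747831446758763 /dev/disk/by-id/ata-<NEWDISK>-part3
# make the new disk bootable again (systems booted via proxmox-boot-tool)
proxmox-boot-tool format /dev/sdX2
proxmox-boot-tool init /dev/sdX2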
 
You should replace the disk in ZFS with something like
Code:
zpool replace -f rpool 3990747831446758763 /dev/disk/by-id/ata-<NEWDISK>-part3
The first device argument is the one being replaced (here the GUID of the missing disk, which was ata-HGST_HUS726020ALA610_N4G3Y0GY-part3), the second is the ZFS partition on the newly installed disk; substitute <NEWDISK> with its actual by-id name.
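
Once the replace is accepted, the mirror resilvers onto the new partition; you can watch the progress with:
Code:
# watch until mirror-0 is back to ONLINE
zpool status -v rpool
Keep in mind that zpool replace only fixes the ZFS side; since rpool is the boot pool, the bootloader/ESP on the new disk still needs to be set up as described in the documentation linked above.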