[SOLVED] After replacing a drive in a degraded ZFS pool, the new drive identifies as /dev/sdb and not /dev/disk/by-id/..

isaacc88

Member
Jan 30, 2023
Hey everyone, thank you for taking the time to read my post. My Proxmox setup has a ZFS mirror pool (tank0) that uses 2x 4 TB drives. Recently one of the drives started throwing a lot of errors, so I replaced it and then resilvered using the "zpool replace" command.

before "zpool replace" this is what "zpool status" looks like
Code:
# zpool status -v tank0
  pool: tank0
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: scrub repaired 6M in 11:22:57 with 0 errors on Sun Apr 14 11:46:59 2024
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank0                                           DEGRADED     0     0     0
          mirror-0                                      DEGRADED     0     0     0
            ata-WDC_WD4000FYYZ-01UL1B2_WD-WCC130HETE02  ONLINE       0     0     0
            ata-WDC_WD4000FYYZ-01UL1B2_WD-WCC131EAYU5R  FAULTED     45     0     9  too many errors
So I replaced the drive and then ran this command to swap the now dead/removed drive in the pool for the new one: "zpool replace tank0 9544279319002279982 /dev/sdb". I believe this is where I made my mistake, because now that the pool is back up and running, the new drive is no longer identified by its ID; it shows up as /dev/sdb instead (a by-id sketch of what I probably should have run is below, after the status output).
Code:
# zpool status -v tank0
  pool: tank0
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: resilvered 3.28T in 12:45:28 with 0 errors on Wed Apr 17 04:49:44 2024
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank0                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            ata-WDC_WD4000FYYZ-01UL1B2_WD-WCC130HETE02  ONLINE       1     0     6
            sdb                                         ONLINE       0     0     6

errors: No known data errors
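For reference, the by-id form of that replace would have looked something like this. The by-id name below is only a placeholder for the new drive's serial, not my actual disk:
Code:
# replace the faulted member by its stable by-id path instead of /dev/sdb
# (the serial below is a placeholder; use the real name from /dev/disk/by-id)
zpool replace tank0 9544279319002279982 /dev/disk/by-id/ata-WDC_WD4000FYYZ-01UL1B2_WD-NEWDRIVESERIAL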
So I really have just two questions. First, is there anything wrong with running my pool this way? And second, what do I need to do to make the new drive show up in the pool by ID like the other one, and not as /dev/sdb?

thank you everyone for your time
 
What will happen if, after adding other drives or a reboot/update, Linux associates /dev/sdb with another drive? Won't the zpool be affected?
ZFS identifies the disks by reading the metadata stored on those disks, so it doesn't matter what a disk was called when it was added to the pool or what it is named later. The benefit of /dev/disk/by-id/bus-vendor-model-serial is simply that you, as the user, can more easily identify the disk later in case you need to replace it: the pool will then tell you that "/dev/disk/by-id/bus-vendor-model-serial" failed, and the unique serial or WWN is usually printed on the label of that disk.
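If you want to see which by-id names point at the new disk right now, you can list the symlinks udev creates (this assumes the disk is still enumerated as sdb at the moment you run it):
Code:
# show the stable by-id names that currently resolve to sdb
ls -l /dev/disk/by-id/ | grep sdb
# or list all disks with their serial and WWN for comparison
lsblk -o NAME,SERIAL,WWN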
 
You don't need to remove the drive from the pool - it will have to resilver all over again if you do.

This can be solved with a simple ' zpool export tank0 && sleep 1; zpool import -a -f -d /dev/disk/by-id ' or a similar long-form path.
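In other words, something along these lines (a sketch that assumes nothing on the host is keeping tank0 busy; stop any VMs/containers using it first so the export doesn't fail):
Code:
# export the data pool, then re-import it using the by-id device paths
zpool export tank0
zpool import -d /dev/disk/by-id tank0
# verify that both mirror members now show their ata-... names
zpool status -v tank0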
thank you for this info, will keep that in mind for next time!