Hey everyone, thank you for taking the time to read my post. My Proxmox setup has a ZFS mirror pool (tank0) made up of 2x 4TB drives. Recently one of the drives started throwing a lot of errors, so I swapped it out and resilvered onto the new disk with the "zpool replace" command.
before "zpool replace" this is what "zpool status" looks like
so I replaced the drive then ran this command to replace the now dead/removed drive from the pool with the new drive
"zpool replace tank0 9544279319002279982 /dev/sdb" and I believe this is the error I made, because now that the zpool is back up and running the drive isnt identified by id anymore and instead is identified as /dev/sdb.
so I have really just 2 questions, the first is, is there anything wrong with running my pool this way? and 2nd question is what do I need to do to make the new drive show up in the pool by id like the others and not /dev/sdb?
thank you everyone for your time
before "zpool replace" this is what "zpool status" looks like
Code:
# zpool status -v tank0
  pool: tank0
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: scrub repaired 6M in 11:22:57 with 0 errors on Sun Apr 14 11:46:59 2024
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank0                                           DEGRADED     0     0     0
          mirror-0                                      DEGRADED     0     0     0
            ata-WDC_WD4000FYYZ-01UL1B2_WD-WCC130HETE02  ONLINE       0     0     0
            ata-WDC_WD4000FYYZ-01UL1B2_WD-WCC131EAYU5R  FAULTED     45     0     9  too many errors
"zpool replace tank0 9544279319002279982 /dev/sdb" and I believe this is the error I made, because now that the zpool is back up and running the drive isnt identified by id anymore and instead is identified as /dev/sdb.
Code:
# zpool status -v tank0
  pool: tank0
 state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
        attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: resilvered 3.28T in 12:45:28 with 0 errors on Wed Apr 17 04:49:44 2024
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank0                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            ata-WDC_WD4000FYYZ-01UL1B2_WD-WCC130HETE02  ONLINE       1     0     6
            sdb                                         ONLINE       0     0     6

errors: No known data errors
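From what I've read so far, I'm guessing my mistake was simply passing the bare /dev/sdb node to "zpool replace" instead of the /dev/disk/by-id path, and that exporting the pool and re-importing it from the by-id directory should get it to show the stable name again. This is roughly what I'm thinking of; the commented-out by-id name is just a placeholder for whatever "ls -l /dev/disk/by-id" shows for the new disk, and I haven't actually run any of this yet:

Code:
# what I think I should have used for the replace (placeholder by-id name):
# zpool replace tank0 9544279319002279982 /dev/disk/by-id/ata-<model>_<serial-of-new-disk>

# what I'm considering now to switch the existing vdev back to a by-id name:
zpool export tank0
zpool import -d /dev/disk/by-id tank0
zpool status -v tank0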
So I really just have two questions. First, is there anything wrong with leaving the pool running this way? And second, what do I need to do to make the new drive show up in the pool by ID like the others instead of as /dev/sdb — would the export/re-import above do it, or is there a better way?

Thank you everyone for your time.