Sanity Check - ZFS Drive Replacement

jgiddens

Member
Aug 24, 2021
Hey all, I have gone through thread after thread and think I have my procedure down, but I would love it if one of the gurus could take a look and point out any obvious errors.

I have a ZFS pool called tank that is using 6/6 bays in my server:

  pool: tank
 state: DEGRADED
config:

        NAME                        STATE     READ WRITE CKSUM
        tank                        DEGRADED     0     0     0
          mirror-0                  DEGRADED     0     0     0
            wwn-0x5000c500845d2407  DEGRADED     0     0     0  too many errors
            wwn-0x5000c500845d26fb  DEGRADED     0     0     0  too many errors
          mirror-1                  ONLINE       0     0     0
            wwn-0x50014ee213d579d9  ONLINE       0     0     0
            wwn-0x50014ee2be80b462  ONLINE       0     0     0
          mirror-2                  ONLINE       0     0     0
            wwn-0x50014ee265826ec0  ONLINE       0     0     0
            wwn-0x50014ee2102d4b8b  ONLINE       0     0     0

I intend to replace both mirror-0 drives with new, larger drives. I have a document mapping each hard drive's serial number to its bay, so I verify which slot each drive is in by running:

root@proxmox:~# smartctl -a /dev/disk/by-id/wwn-0x5000c500845d2407
=== START OF INFORMATION SECTION ===
Vendor:               IBM-XIV
Product:              ST6000NM0054 D5
Revision:             EC6D
Compliance:           SPC-4
User Capacity:        6,001,175,122,432 bytes [6.00 TB]
Logical block size:   512 bytes
Physical block size:  4096 bytes
Formatted with type 2 protection
8 bytes of protection information per logical block
LU is fully provisioned
Rotation Rate:        7200 rpm
Form Factor:          3.5 inches
Logical Unit id:      0x5000c500845d2407
Serial number:        Z4D38NME0000R607SNKL
Device type:          disk
Transport protocol:   SAS (SPL-4)
Local Time is:        Wed Dec  6 14:06:32 2023 CST
SMART support is:     Available - device has SMART capability.
SMART support is:     Enabled
Temperature Warning:  Enabled
root@proxmox:~#

I do the same for the other drive, and have now confirmed the two drives are in slots 3 and 4 of my chassis.
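
As a quicker cross-check, something like lsblk can list the WWN and serial number of every disk in one table (a minimal sketch, assuming a reasonably recent util-linux that supports these output columns):

# print each whole disk with its WWN, serial, size and model
lsblk -d -o NAME,WWN,SERIAL,SIZE,MODEL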

I plan to hot-pull one of the two drives and insert a new disk. To get the new drive's name, I can then run:
ls -1 /dev/disk/by-id/
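
To make the new entry easier to spot, one option is to capture that listing before and after the swap and diff the two (a sketch only; the temp file paths are hypothetical):

# capture the by-id listing before pulling the old drive
ls -1 /dev/disk/by-id/ > /tmp/disks-before.txt
# ...swap the drive, then capture it again...
ls -1 /dev/disk/by-id/ > /tmp/disks-after.txt
# lines starting with ">" are the newly inserted disk
diff /tmp/disks-before.txt /tmp/disks-after.txt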

From there I run the following command:

sudo zpool replace -f tank wwn-0x5000c500845d2407 /dev/disk/by-id/NEW_DRIVE_NAME

Then I wait for the resilver to complete and repeat the process with the second drive.
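
While the resilver runs, the old and new devices should show up together under a temporary "replacing" vdev in the status output, so I'd likely just poll it until it finishes (assuming watch is installed; the 30-second interval is arbitrary):

# re-print pool status every 30 seconds to track resilver progress
watch -n 30 zpool status tank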

Then I set the pool to autoexpand using:

zpool set autoexpand=on tank
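
If the extra space doesn't show up on its own (for example because autoexpand was only enabled after the resilvers finished), my understanding is that it can be claimed per device; a hedged sketch, where NEW_DRIVE_NAME is the same placeholder as above:

# confirm the property and look for unclaimed space in the EXPANDSZ column
zpool get autoexpand tank
zpool list -v tank

# trigger expansion explicitly on a replaced device if needed
zpool online -e tank NEW_DRIVE_NAME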

Is that correct?