Hi all,
When creating my ZFS RAID-Z3 pool, I used /dev/sd nodes to build the array.
However, I've found out the hard way that a reboot can cause some devices to swap their sd mappings.
Hence you run into the issue where the array becomes degraded:
Code:
zpool status
  pool: hddpool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid. Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: resilvered 603G in 01:54:48 with 0 errors on Fri May 24 19:59:44 2024
config:

        NAME                      STATE     READ WRITE CKSUM
        hddpool                   DEGRADED     0     0     0
          raidz3-0                DEGRADED     0     0     0
            sde                   ONLINE       0     0     0
            sdf                   ONLINE       0     0     0
            sdg                   ONLINE       0     0     0
            sdh                   ONLINE       0     0     0
            3601002332961008061   FAULTED      0     0     0  was /dev/sdi1
            13921448654961834172  FAULTED      0     0     0  was /dev/sdj1
            sdk                   ONLINE       0     0     0
            sdl                   ONLINE       0     0     0
            sdm                   ONLINE       0     0     0
            sdn                   ONLINE       0     0     0
            sdo                   ONLINE       0     0     0
            sdp                   ONLINE       0     0     6
        logs
          mirror-1                ONLINE       0     0     0
            sda                   ONLINE       0     0     0
            sdb                   ONLINE       0     0     0
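As an aside for anyone building a pool from scratch: creating the vdevs from persistent /dev/disk/by-id paths avoids this problem entirely. A minimal sketch, with placeholder disk IDs rather than my actual devices:
Code:
# Placeholder IDs -- substitute the real /dev/disk/by-id entries
zpool create hddpool raidz3 \
  /dev/disk/by-id/scsi-DISK1 /dev/disk/by-id/scsi-DISK2 \
  /dev/disk/by-id/scsi-DISK3 /dev/disk/by-id/scsi-DISK4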
Checking through the mappings, I verified this to be the case:
Code:
            3601002332961008061   FAULTED      0     0     0  was /dev/sdi1
            13921448654961834172  FAULTED      0     0     0  was /dev/sdj1

        children[4]:
            type: 'disk'
            id: 4
            guid: 3601002332961008061
            path: '/dev/sdi1'
            devid: 'scsi-35000c500ec3e6fb7-part1'
            phys_path: 'pci-0000:4b:00.0-scsi-0:2:20:0'
            whole_disk: 1
            not_present: 1
            DTL: 82
            create_txg: 4
            com.delphix:vdev_zap_leaf: 134
        children[5]:
            type: 'disk'
            id: 5
            guid: 13921448654961834172
            path: '/dev/sdj1'
            devid: 'scsi-35000c500ec3f82cf-part1'
            phys_path: 'pci-0000:4b:00.0-scsi-0:2:21:0'
            whole_disk: 1
            not_present: 1
            DTL: 176
            create_txg: 4
            com.delphix:vdev_zap_leaf: 135
ls -la /dev/disk/by-id/scsi-35000c500ec3e6fb7
lrwxrwxrwx 1 root root 9 May 24 18:04 /dev/disk/by-id/scsi-35000c500ec3e6fb7 -> ../../sdj
ls -la /dev/disk/by-id/scsi-35000c500ec3f82cf
lrwxrwxrwx 1 root root 9 May 24 18:04 /dev/disk/by-id/scsi-35000c500ec3f82cf -> ../../sdi
sdi = 3f82cf
sdj = 3e6fb7

The vdev with GUID ending 8061 WAS /dev/sdi1; its devid ends in 3e6fb7, which NOW points to sdj.
The vdev with GUID ending 4172 WAS /dev/sdj1; its devid ends in 3f82cf, which NOW points to sdi.
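As a side note, listing the whole by-id directory shows all the current mappings in one go, which makes this cross-check quicker:
Code:
ls -la /dev/disk/by-id/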
To rectify this, I performed:
Code:
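# Clear the stale ZFS label from each disk under its by-id path, then
# replace the faulted GUID entry with that same disk so the pool records
# the persistent name: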
zpool labelclear -f /dev/disk/by-id/scsi-35000c500ec3e6fb7-part1
zpool replace -f hddpool 3601002332961008061 /dev/disk/by-id/scsi-35000c500ec3e6fb7-part1
zpool labelclear -f /dev/disk/by-id/scsi-35000c500ec3f82cf-part1
zpool replace -f hddpool 13921448654961834172 /dev/disk/by-id/scsi-35000c500ec3f82cf-part1
Whilst this is effective at getting the two devices back into the array, I'm aware it triggers a full resilver of both disks to sync them back up.
Code:
zpool status
  pool: hddpool
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat May 25 09:23:31 2024
        31.7T / 36.6T scanned at 1.59G/s, 25.7T / 36.4T issued at 1.29G/s
        4.30T resilvered, 70.52% done, 02:22:25 to go
config:

        NAME                              STATE     READ WRITE CKSUM
        hddpool                           DEGRADED     0     0     0
          raidz3-0                        DEGRADED     0     0     0
            sde                           ONLINE       0     0     0
            sdf                           ONLINE       0     0     0
            sdg                           ONLINE       0     0     0
            sdh                           ONLINE       0     0     0
            replacing-4                   DEGRADED     0     0     0
              3601002332961008061         FAULTED      0     0     0  was /dev/sdi1
              scsi-35000c500ec3e6fb7-part1  ONLINE     0     0     0  (resilvering)
            replacing-5                   DEGRADED     0     0     0
              13921448654961834172        FAULTED      0     0     0  was /dev/sdj1
              scsi-35000c500ec3f82cf-part1  ONLINE     0     0     0  (resilvering)
            sdk                           ONLINE       0     0     0
            sdl                           ONLINE       0     0     0
            sdm                           ONLINE       0     0     0
            sdn                           ONLINE       0     0     0
            sdo                           ONLINE       0     0     0
            sdp                           ONLINE       0     0     6
        logs
          mirror-1                        ONLINE       0     0     0
            sda                           ONLINE       0     0     0
            sdb                           ONLINE       0     0     0
        cache
          scsi-35002538003827640          ONLINE       0     0     0  block size: 512B configured, 4096B native
          scsi-35002538003827650          ONLINE       0     0     0  block size: 512B configured, 4096B native

errors: No known data errors
Moving forward, I'm aware that the rest of the array members are still mapped to /dev/sd devices.
Is there a simple process for updating the configuration of a ZFS pool to point at /dev/disk/by-id nodes? Or does each disk need resilvering?
EDIT: After some research, it would appear the export/import method should work: https://plantroon.com/changing-disk-identifiers-in-zpool/
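If I'm reading that right, the trick is simply an export followed by an import that is told to scan /dev/disk/by-id, so the pool config gets rewritten with the persistent paths and no resilver is needed. A minimal sketch on that assumption:
Code:
zpool export hddpool
# -d limits the device search to the given directory, so the vdev paths
# are re-recorded as /dev/disk/by-id names on import
zpool import -d /dev/disk/by-id hddpool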
Giving my array time to finish resilvering before I attempt this.