Hello everyone,
I have a zpool (raidz1) built from 3 x 2TB disks.
Some of the disks have started to report errors, so I've bought 3 x 6TB disks and I want to replace the old ones.
I've followed these steps:
Code:
:~# zpool status -v 'storagename'
  pool: 'storagename'
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 05:06:33 with 0 errors on Mon Sep 20 03:45:09 2021
config:

        NAME               STATE     READ WRITE CKSUM
        'storagename'      ONLINE       0     0     0
          raidz1-0         ONLINE       0     0     0
            ata-ID1        ONLINE       0     0     0
            ata-ID2        ONLINE       0     0     0
            ata-ID3        ONLINE       0     0     0

errors: No known data errors
Code:
:~# zpool list -v 'storagename'
NAME             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
'storagename'   5.44T  4.45T  1015G        -         -    11%    81%  1.00x  ONLINE  -
  raidz1        5.44T  4.45T  1015G        -         -    11%  81.8%      -  ONLINE
    ata-ID1         -      -      -        -         -      -      -      -  ONLINE
    ata-ID2         -      -      -        -         -      -      -      -  ONLINE
    ata-ID3         -      -      -        -         -      -      -      -  ONLINE
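Side note: replacing 2TB disks with 6TB ones only grows the pool once all three members are replaced and expansion is enabled. A minimal sketch, using the same placeholder names as above:
Code:
:~# zpool set autoexpand=on 'storagename'      # grow vdevs automatically once all members are larger
:~# zpool online -e 'storagename' ata-ID-NEW   # or expand a single device by hand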
Code:
:~# zpool offline 'storagename' ata-ID3
Code:
:~# zpool status -v 'storagename'
  pool: 'storagename'
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: scrub repaired 0B in 05:06:33 with 0 errors on Mon Sep 20 03:45:09 2021
config:

        NAME               STATE     READ WRITE CKSUM
        'storagename'      DEGRADED     0     0     0
          raidz1-0         DEGRADED     0     0     0
            ata-ID1        ONLINE       0     0     0
            ata-ID2        ONLINE       0     0     0
            ata-ID3        OFFLINE      0     0     0
Code:
:~# zpool list -v 'storagename'
NAME             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH    ALTROOT
'storagename'   5.44T  4.45T  1015G        -         -    11%    81%  1.00x  DEGRADED  -
  raidz1        5.44T  4.45T  1015G        -         -    11%  81.8%      -  DEGRADED
    ata-ID1         -      -      -        -         -      -      -      -  ONLINE
    ata-ID2         -      -      -        -         -      -      -      -  ONLINE
    ata-ID3         -      -      -        -         -      -      -      -  OFFLINE
After that, I removed the old disk and put in the new one, but I cannot replace it:
Code:
:~# zpool replace -f 'storagename' ata-ID3 ata-ID-NEW
cannot replace ata-ID3 with ata-ID-NEW: already in replacing/spare config; wait for completion or use 'zpool detach'
:~# zpool replace -f 'storagename' guid-ata-ID3 ata-ID-NEW
cannot replace guid-ata-ID3 with ata-ID-NEW: already in replacing/spare config; wait for completion or use 'zpool detach'
What am I doing wrong? Can someone help me with this?
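One thing worth checking when "already in replacing/spare config" appears: whether the new disk still carries stale ZFS labels from an earlier replace attempt. A minimal check/cleanup sketch (the device path is a placeholder, and labelclear is destructive):
Code:
:~# zdb -l /dev/disk/by-id/ata-ID-NEW-part1                 # dump any ZFS labels still on the partition
:~# zpool labelclear -f /dev/disk/by-id/ata-ID-NEW-part1    # destructive: wipe the stale labels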
##############################
EDIT 1:
Additional note: running ":~# zpool status -v 'storagename'" shows no resilver in progress.
EDIT 2:
A GPT partition table had been created on the new disk before trying the replace:
Code:
:~# parted /dev/disk/by-id/ata-ID-NEW
GNU Parted 3.2
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: *** (scsi)
Disk /dev/sdc: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name                  Flags
 1      1049kB  6001GB  6001GB  zfs          zfs-2f8bc8b8c8806a59
 9      6001GB  6001GB  8389kB
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdc will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) q
Information: You may need to update /etc/fstab.
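A quicker way to compare the logical/physical sector sizes of all three disks at once (device names as they appear in the fdisk output further down):
Code:
:~# lsblk -d -o NAME,SIZE,LOG-SEC,PHY-SEC /dev/sdc /dev/sde /dev/sdf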
SOLVED NOTES:
Code:
:~# zpool replace -o ashift=9 'storagename' <OLD> <NEW>
This issue seems related to the block size (ashift). I don't know exactly why, but presumably a replacement device has to match the block size its vdev was created with: the pool was built on 512B-sector disks (ashift=9), while the new 512e disk would default to a 4096B block size (ashift=12), so the replace has to be forced back to ashift=9:
Code:
:~# zpool status -v 'storagename'
  pool: 'storagename'
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Mon Sep 20 14:32:55 2021
        2.00T scanned at 6.43G/s, 13.7M issued at 44K/s, 4.45T total
        0B resilvered, 0.00% done, no estimated completion time
config:

        NAME                STATE     READ WRITE CKSUM
        'storagename'       DEGRADED     0     0     0
          raidz1-0          DEGRADED     0     0     0
            ata-ID1         ONLINE       0     0     0
            ata-ID2         ONLINE       0     0     0
            replacing-2     DEGRADED     0     0     0
              ata-ID3       OFFLINE      0     0     0
              ata-ID-NEW    ONLINE       0     0     0  block size: 512B configured, 4096B native
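The "block size: 512B configured, 4096B native" note is ZFS saying the vdev uses a 512B block size (ashift=9) while the new disk natively prefers 4096B. To confirm the ashift actually in use (OpenZFS; output details may vary by version):
Code:
:~# zpool get ashift 'storagename'        # pool property; 0 means auto-detected per vdev
:~# zdb -C 'storagename' | grep ashift    # per-vdev value: 9 = 512B, 12 = 4096B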
Code:
:~# fdisk -l
Disk /dev/sdc: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes

Disk /dev/sde: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes

Disk /dev/sdf: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
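Note that fdisk's "Units" line only reflects the logical sector size. The physical sector size, which is what makes ZFS default to ashift=12 on the new disk, can be read with blockdev (values matching the 512B/4096B that parted printed above):
Code:
:~# blockdev --getss --getpbsz /dev/sdc
512
4096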