I am pretty new to Proxmox but very impressed by its possibilities.
I have set up a PVE 6.1 environment (3 x 2TB SATA, AMD Ryzen 7 2700X eight-core CPU, 64GB memory), which has been running some limited production as a test without any problems.
One of the three disks (sda) is giving errors, so I want to replace it.
root@pve:~# zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
        attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-9P
  scan: scrub repaired 12.3M in 0 days 02:03:55 with 0 errors on Thu May 14 16:05:52 2020
config:

        NAME                                       STATE     READ WRITE CKSUM
        rpool                                      DEGRADED     0     0     0
          raidz1-0                                 DEGRADED     0     0     0
            ata-ST2000DM008-2FR102_WFL2F03M-part3  ONLINE       0     0     0
            ata-ST2000DM008-2FR102_WFL2EYKF-part3  DEGRADED     0     0   844  too many errors
            ata-ST2000DM008-2FR102_WFL2HZDV-part3  ONLINE       0     0     0

errors: No known data errors
(After a "zpool clear" the errors come back ...)
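To double-check that it really is the hardware and not cabling, I also plan to look at the drive's SMART data, e.g. with something like:

```shell
# Full SMART report: health summary, attributes and error log
# for the suspect disk
smartctl -a /dev/sda

# Or just the quick pass/fail verdict
smartctl -H /dev/sda
```

(If the reallocated/pending sector counters are climbing, that would confirm the disk should go.)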
Unfortunately (?) the system is set up under UEFI...
This means the disk numbering is different; the pool members are named by-id (e.g. ata-ST2000DM008-2FR102_WFL2HZDV-part3 = /dev/sda3).
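For anyone wanting to check the mapping themselves: the by-id names are symlinks to the kernel device names, so they can be listed in either direction:

```shell
# List the persistent by-id names and the kernel device each points to
ls -l /dev/disk/by-id/ | grep ata-ST2000DM008

# Or the other direction: show all symlink names for one kernel device
udevadm info --query=symlink /dev/sda
```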
The partition layout looks as follows:
root@pve:~# fdisk /dev/sda
Welcome to fdisk (util-linux 2.33.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): p
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000DM008-2FR1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 7EF859DD-866C-48F4-A093-668A77E3F46D
Device       Start        End    Sectors  Size Type
/dev/sda1       34       2047       2014 1007K BIOS boot
/dev/sda2     2048    1050623    1048576  512M EFI System
/dev/sda3  1050624 3907029134 3905978511  1.8T Solaris /usr & Apple ZFS
Partition 1 does not start on physical sector boundary.
After studying the available documentation, I get the impression that replacing a ZFS disk in a UEFI environment is about the same as under a legacy BIOS environment.
But I am a little worried whether the system will still boot from the new disk after the zfs replace procedure.
After this procedure the partition table is the same (replicated with sgdisk --replicate= ...).
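For reference, the sequence I have in mind is roughly the following. This is only a sketch I have not run yet; the device names (/dev/sdb as a healthy member, /dev/sda as the new disk) and <NEW-DISK-ID> are placeholders, so please correct me if the order is wrong:

```shell
# 1. Copy the partition table from a healthy disk to the new one.
#    Note sgdisk's argument order: the TARGET is the --replicate
#    argument, the SOURCE is the positional device at the end.
sgdisk --replicate=/dev/sda /dev/sdb

# 2. Give the new disk its own unique disk/partition GUIDs
sgdisk --randomize-guids /dev/sda

# 3. Resilver onto partition 3 of the new disk
zpool replace rpool \
    ata-ST2000DM008-2FR102_WFL2EYKF-part3 \
    /dev/disk/by-id/<NEW-DISK-ID>-part3

# 4. Watch the resilver progress
zpool status -v rpool
```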
However, what has to be done with partition 1 (BIOS boot) and partition 2 (EFI System)?
Can I just dd them over from one of the two other disks, or is there a meta-command that will handle this?
Or, better, is there a document available describing this situation?
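To make the dd question concrete, this is the kind of thing I was imagining for partitions 1 and 2. Again only a sketch with assumed device names (/dev/sdb healthy, /dev/sda new), and I have not verified this is the supported way on PVE 6.1:

```shell
# Copy the small BIOS boot partition byte-for-byte
dd if=/dev/sdb1 of=/dev/sda1 bs=512

# For the ESP I suspect a plain dd clone is wrong (duplicate UUIDs?).
# PVE 6.x seems to ship pve-efiboot-tool for setting up / syncing ESPs,
# so perhaps this is the intended way:
pve-efiboot-tool format /dev/sda2
pve-efiboot-tool init /dev/sda2

# And for legacy/BIOS fallback, reinstall GRUB to the new disk
grub-install /dev/sda
```

If someone can confirm whether pve-efiboot-tool is indeed the "meta-command" I was asking about, that would already answer most of my question.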
Any input is welcome ...!
thanks in advance,
anton