ZFS RAID1 does not boot after replacing a failed disk

rpereyra

Renowned Member
May 19, 2008
Hi all

Proxmox version: 6.3-2

I've replaced a failed SSD on a ProLiant DL380 Gen10, and everything seems to be resynchronized correctly.

--------------
root@pm4:~# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 698G in 0 days 00:43:11 with 0 errors on Sat Oct 23 12:22:43 2021
config:

        NAME                              STATE     READ WRITE CKSUM
        rpool                             ONLINE       0     0     0
          mirror-0                        ONLINE       0     0     0
            scsi-331402ec015377e62-part3  ONLINE       0     0     0
            scsi-331402ec015377e63        ONLINE       0     0     0

errors: No known data errors
root@pm4:~#

-----------------

However, when I disconnect the remaining disk from the original mirror and leave only the new disk, the server does not boot.

It only boots when I plug the old disk back in.

I guess it has something to do with GRUB. Can anyone help me with this issue?
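
For what it's worth, comparing the partition layout of the two disks should show whether the new one is missing the boot partitions. A quick check might look like this (the device names /dev/sda and /dev/sdb are just placeholders for the old and new disk):

--------------
# list partitions on both mirror members (sda = old disk, sdb = new disk; placeholders)
lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sda /dev/sdb

# or dump the GPT of each disk for a direct comparison
sgdisk -p /dev/sda
sgdisk -p /dev/sdb
--------------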

thanks
 
You can't just replace the drive. You need to partition it first, copy over the bootloader and so on, and let ZFS use only partition 3 instead of the whole drive (roughly as sketched below).
There was a guide on how to do that, but Proxmox changed the bootloader handling with PVE 6.4, so the guide you now find in the wiki/documentation isn't valid for your PVE 6.3 anymore. I wasn't able to find the old 6.3 documentation.
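
Very roughly, the procedure on a PVE 6.3 system with a ZFS-mirrored rpool looks like the sketch below. The device names (/dev/sdX for the healthy disk, /dev/sdY for the new one) and the by-id placeholders are just examples, so double-check them against your own system before running anything:

--------------
# copy the partition table from the healthy disk to the new one,
# then give the new disk its own random GUIDs
sgdisk /dev/sdX -R /dev/sdY
sgdisk -G /dev/sdY

# replace the failed member using only partition 3 of the new disk for ZFS
zpool replace -f rpool <old-device-or-guid> /dev/disk/by-id/<new-disk>-part3

# initialise the new disk's ESP (partition 2) so the box can boot from it
pve-efiboot-tool format /dev/sdY2
pve-efiboot-tool init /dev/sdY2
--------------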
 
My server seems to be using systemd-boot.

Should I then run:

pve-efiboot-tool refresh

to synchronize the boot loader on both drives?

Is that correct and safe?
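
In case it helps, this is roughly how I checked which bootloader is in use (assuming an EFI install; the commands below are only a quick sanity check):

--------------
# if this directory exists, the system booted via UEFI
ls /sys/firmware/efi

# the EFI boot entries show whether systemd-boot or GRUB is registered
efibootmgr -v

# pve-efiboot-tool refresh copies the kernels and bootloader to every ESP
# that was previously set up with "pve-efiboot-tool init"
pve-efiboot-tool refresh
--------------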

Thanks