ZFS RAID1 does not boot with replaced failed disk

rpereyra

Hi all

Proxmox version: 6.3-2

I've replaced a failed SSD on a ProLiant DL380 Gen10, and everything seems to have resynchronized correctly.

--------------
root@pm4:~# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 698G in 0 days 00:43:11 with 0 errors on Sat Oct 23 12:22:43 2021
config:

        NAME                              STATE     READ WRITE CKSUM
        rpool                             ONLINE       0     0     0
          mirror-0                        ONLINE       0     0     0
            scsi-331402ec015377e62-part3  ONLINE       0     0     0
            scsi-331402ec015377e63        ONLINE       0     0     0

errors: No known data errors
root@pm4:~#
--------------

However, when I disconnect the disk from the previous RAID and leave only the new disk, the server does not boot.

It only boots when the old disk is plugged back in.

I guess it's something to do with GRUB. Can anyone give me some help with this issue?

thanks
 
You can't just replace the drive. You need to partition it first, copy over the bootloader and so on, and let ZFS use only partition 3 rather than the whole drive.
There was a guide on how to do that, but Proxmox changed the bootloader with PVE 6.4, so the guide you now find in the wiki/documentation isn't valid for your PVE 6.3 anymore. I wasn't able to find the old 6.3 documentation.
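For reference, a rough sketch of the general procedure on a PVE 6.x UEFI/systemd-boot install. The device paths below are placeholders, not your actual disk IDs, and on PVE 6.4 and later the tool is named proxmox-boot-tool instead of pve-efiboot-tool:

--------------
# Copy the partition layout from the remaining healthy disk to the new disk,
# then give the new disk its own random GUIDs:
sgdisk /dev/disk/by-id/<healthy-disk> -R /dev/disk/by-id/<new-disk>
sgdisk -G /dev/disk/by-id/<new-disk>

# Let ZFS resilver onto partition 3 only, not the whole disk:
zpool replace -f rpool <old-zfs-device> /dev/disk/by-id/<new-disk>-part3

# Format and initialize the new disk's ESP (partition 2 on a default install)
# so the machine can also boot from it:
pve-efiboot-tool format /dev/disk/by-id/<new-disk>-part2
pve-efiboot-tool init /dev/disk/by-id/<new-disk>-part2
--------------

Your zpool status above shows the new disk without a -part3 suffix, which suggests ZFS was given the whole drive, leaving no room for an ESP.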
 
My server seems to be using systemd-boot.

Should I then run:

pve-efiboot-tool refresh

to synchronize the boot loader on both drives?

Is that correct and safe?

Thanks
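For what it's worth, a quick way to sanity-check the boot setup before relying on refresh, assuming a standard PVE 6.x install (the UUID list path is where pve-efiboot-tool records its ESPs):

--------------
# The directory only exists when the host booted via UEFI (i.e. systemd-boot):
ls /sys/firmware/efi

# ESPs that pve-efiboot-tool keeps in sync, one partition UUID per line:
cat /etc/kernel/pve-efiboot-uuids

# Copy the current kernels and systemd-boot loader onto every listed ESP:
pve-efiboot-tool refresh
--------------

Note that refresh only touches ESPs already registered in that list, so a freshly replaced disk still needs pve-efiboot-tool format and init first, as in the sketch above.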
 
