rpool disk replacement

pbo10
New Member
Aug 30, 2019
Hi

I recently installed Proxmox using ZFS RAID 1 with 2 SSD drives for the rpool, and I need to replace one of them. I know there are lots of guides for replacing a disk, such as this one: https://pve.proxmox.com/wiki/ZFS:_Tips_and_Tricks

But I understand the recent updates changed the bootloader (is GRUB even used now?), so can someone let me know if that information is still valid, or is there now a different procedure for replacing boot drives that are set up as mirrored ZFS drives?

Thanks
 
If "recently" means after 5.4, then yes: if the system was booted in EFI mode, the procedure is a bit different. First, with RAID 1 both disks have a bootloader installed, so even if one fails you are not in immediate danger. And if you did not boot with EFI, everything is as before.

For replacing a disk in a ZFS setup created by our installer since PVE 5.4, you can take a look at:
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#_zfs_administration (end of that section, "Changing a failed bootable device when using systemd-boot")
and https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_systemd_boot_setup
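
In short, for an EFI install the steps in that section boil down to something like the following sketch (here sdX is the remaining healthy disk and sdY the replacement; partition 2 as the ESP and partition 3 as the ZFS partition assume the default installer layout, so do verify against your own disks and the docs above):

Code:
# copy the partition table from the healthy disk and give the copy new random GUIDs
sgdisk /dev/sdX -R /dev/sdY
sgdisk -G /dev/sdY
# let ZFS resilver onto the new ZFS partition
zpool replace -f rpool <old zfs partition> /dev/sdY3
# format and initialise the new ESP so systemd-boot can boot from this disk too
pve-efiboot-tool format /dev/sdY2
pve-efiboot-tool init /dev/sdY2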
 
Thanks for the quick reply. I've read through the docs but I'm still not 100% sure about this, and I don't want to get it wrong.

The disk is offline and I've done the first two steps to copy the partition table and randomise the GUID:
Code:
sgdisk /dev/sdb -R /dev/sdc
sgdisk -G /dev/sdc

This is the rpool in its current form:
Code:
        NAME                                                   STATE     READ WRITE CKSUM
        rpool                                                  DEGRADED     0     0     0
          mirror-0                                             DEGRADED     0     0     0
            ata-SQF-S25M8-512G-SAC_FF1A07931AB607013328-part3  ONLINE       0     0     0
            17086470446843778281                               OFFLINE      0     0     0  was /dev/disk/by-id/ata-SQF-S25M8-512G-SAC_DC11078C16EC06101361-part3

So, to replace the disk, should I now be using a disk GUID rather than /dev/sdc?

zpool replace -f rpool 17086470446843778281 <new zfs partition>

I'm not sure exactly how to get the correct GUID to use as <new zfs partition> for the replacement disk.
 
Not to worry, I found out I can see all the disk GUIDs with a simple ls -l /dev/disk/by-id/.
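
For anyone following along, the full replace command then ends up looking roughly like this (the -part3 suffix is the ZFS partition, and the model/serial below is only a placeholder for whatever the new disk actually shows up as in /dev/disk/by-id/):

Code:
zpool replace -f rpool 17086470446843778281 /dev/disk/by-id/ata-<model>_<new-serial>-part3

After that, zpool status shows the new partition resilvering into mirror-0.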

One issue I did find, though, is that because I was replacing the disk with the same disk, I had to move it to another system and use zpool labelclear -f /dev/sdX3 before I was able to use the disk again. I couldn't find a way to clear the disk while it was still in the Proxmox system.
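
For reference, the clear step on the other system was just this (with /dev/sdX3 being whatever name the old ZFS partition gets there):

Code:
# wipe the old pool labels so ZFS will accept the partition as a new vdev member
zpool labelclear -f /dev/sdX3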
 
Hate to resurrect an old thread, but is there any help on how to do the very same thing on a system prior to 5.4?

I am running 5.3 and not updating, because the last update from 5.1 to 5.3 hosed the install and I had to redo everything from scratch, as I simply could not recover the VMs.

I followed several how-tos step by step.
The disks look identical, yet I can boot from the first but not from the second.
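
The how-tos I followed are all variations of the steps in the wiki linked at the top of the thread, roughly this for a pre-5.4 / GRUB install (sdX being the surviving disk, sdY the replacement), so I'm not sure what I'm missing:

Code:
# copy the partition table and randomise the GUIDs on the new disk
sgdisk /dev/sdX -R /dev/sdY
sgdisk -G /dev/sdY
# resilver onto the new ZFS partition
zpool replace -f rpool <old zfs partition> <new zfs partition>
# reinstall GRUB on the new disk so it is bootable
grub-install /dev/sdY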
 
