How to replace boot drives (rpool) in a RAID1 Mirror

Chris&Patte

Hello,

I have an rpool set up as a ZFS RAID1 mirror of two drives that the system boots from.
I want to replace those aging drives with new ones.
My plan was to add the two new drives to the mirror (I have already added one; the other is not included yet), let it resync, and then remove the old drives from the mirror.

I do not know exactly how to do that (which commands to use), but I'm fairly sure it is the correct approach. However, I'm also almost certain I will run into trouble booting the system once the old drives have been replaced.

Is that the case? And if so, how can I get the system to boot from the new drives?
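
What I had in mind is roughly the following; the ids are only placeholders here, not my actual disks:

Code:
# attach each new disk to the existing mirror (placeholder ids, not my real devices)
zpool attach rpool <old-disk-id>-part3 <new-disk-1-id>
zpool attach rpool <old-disk-id>-part3 <new-disk-2-id>

# wait until the resilver has finished
zpool status rpool

# then remove the old disks from the mirror
zpool detach rpool <old-disk-1-id>-part3
zpool detach rpool <old-disk-2-id>-part3

For reference, the current state of the pool: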

Code:
  pool: rpool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 01:06:43 with 0 errors on Sun May 14 01:30:56 2023
config:

        NAME                                 STATE     READ WRITE CKSUM
        rpool                                ONLINE       0     0     0
          mirror-0                           ONLINE       0     0     0
            ata-ST1000NX0443_W473C9CV-part3  ONLINE       0     0     0
            scsi-35000c50033f678bb-part3     ONLINE       0     0     0
            scsi-35000c5004279423b           ONLINE       0     0     0


Code:
the old disks
root@Iteanova019pve:~# fdisk -l /dev/sda
Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000NX0443
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D2E4BBF2-EE06-4487-A7EA-927CED29E352
Device       Start        End    Sectors  Size Type
/dev/sda1       34       2047       2014 1007K BIOS boot
/dev/sda2     2048    1050623    1048576  512M EFI System
/dev/sda3  1050624 1953525134 1952474511  931G Solaris /usr & Apple ZFS

root@Iteanova019pve:~# fdisk -l /dev/sdb
Disk /dev/sdb: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST91000640SS
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 85BCE5B6-8BF5-4E70-9A63-DB3B6305932D
Device       Start        End    Sectors  Size Type
/dev/sdb1       34       2047       2014 1007K BIOS boot
/dev/sdb2     2048    1050623    1048576  512M EFI System
/dev/sdb3  1050624 1953525134 1952474511  931G Solaris /usr & Apple ZFS

the new disks
root@Iteanova019pve:~# fdisk -l /dev/sdg
Disk /dev/sdg: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST91000640SS
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 84D694AD-BDFC-1A49-A1A9-24AF42DB730F
Device          Start        End    Sectors   Size Type
/dev/sdg1        2048 1953507327 1953505280 931.5G Solaris /usr & Apple ZFS
/dev/sdg9  1953507328 1953523711      16384     8M Solaris reserved 1

the unused disk
root@Iteanova019pve:~# fdisk -l /dev/sdh
Disk /dev/sdh: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST91000640SS
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

and the current boot configuration is:
Code:
root@Iteanova019pve:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
B3AB-03CE is configured with: uefi (versions: 5.11.22-7-pve, 5.15.102-1-pve, 5.15.30-2-pve)
B3AD-3F4B is configured with: uefi (versions: 5.11.22-7-pve, 5.15.102-1-pve, 5.15.30-2-pve)
 
Does the fact that Proxmox is using GRUB on UEFI change the process?
I'm very inexperienced when it comes to UEFI booting, sorry...

Code:
root@Iteanova019pve:~# efibootmgr -v
BootCurrent: 000B
BootOrder: 000B,000A,0004,0009,0008,0007,0006,0000
Boot0000* Integrated NIC 1 Port 1 Partition 1   VenHw(3a191845-5f86-4e78-8fce-c4cff59f9daa)
Boot0001* Hard drive C: VenHw(d6c0639f-c705-4eb9-aa4f-5802d8823de6)..................@...........@....................................d.....d............A.....................T.r.a.n.s.c.e.n.d. .8.G.B...
Boot0002  IBA XE (X550) Slot 1800 v2444 BBS(128,IBA XE (X550) Slot 1800 v2444,0x0)......................................................................................A.....................I.B.A. .X.E. .(.X.5.5.0.). .S.l.o.t. .1.8.0.0. .v.2.4.4.4...
Boot0004* proxmox       HD(2,GPT,a65fb081-120d-4ee7-b272-af2469a778bf,0x800,0x100000)/File(\EFI\proxmox\grubx64.efi)
Boot0006* Linux Boot Manager    HD(2,GPT,4cafb4b8-2bb8-4e02-8fc5-bf93bc311df9,0x800,0x100000)/File(\EFI\systemd\systemd-bootx64.efi)
Boot0007* Linux Boot Manager    HD(2,GPT,da858211-da87-475e-9adf-6d066519ec8f,0x800,0x100000)/File(\EFI\systemd\systemd-bootx64.efi)
Boot0008* Linux Boot Manager    HD(2,GPT,8bca1712-93c5-45b1-b759-4641b5892a03,0x800,0x100000)/File(\EFI\systemd\systemd-bootx64.efi)
Boot0009* Linux Boot Manager    HD(2,GPT,b3819f8b-b844-44d1-bb12-72567cda70c2,0x800,0x100000)/File(\EFI\systemd\systemd-bootx64.efi)
Boot000A* Linux Boot Manager    HD(2,GPT,ec024c97-1be9-486a-836a-08bec1f1de6c,0x800,0x100000)/File(\EFI\systemd\systemd-bootx64.efi)
Boot000B* Linux Boot Manager    HD(2,GPT,3e31256e-da50-46ea-8da9-c78172dcc954,0x800,0x100000)/File(\EFI\systemd\systemd-bootx64.efi)
MirroredPercentageAbove4G: 0.00
MirrorMemoryBelow4GB: false
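
From what I read in the Proxmox admin guide, the ESPs of the new disks would probably also need to be prepared with proxmox-boot-tool, roughly like this, where sdX2 is only a placeholder for the EFI partition of a new disk:

Code:
# format the new disk's ESP and register it with proxmox-boot-tool
# (sdX2 is a placeholder for the second partition of the new disk)
proxmox-boot-tool format /dev/sdX2
proxmox-boot-tool init /dev/sdX2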
 
Hm,
I followed your guide and tried to replace the old drive with the new one:

Code:
root@Iteanova019pve:~# sgdisk /dev/sdb -R /dev/sdh
The operation has completed successfully.

root@Iteanova019pve:~# fdisk -l /dev/sdh
Disk /dev/sdh: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST91000640SS
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D2E4BBF2-EE06-4487-A7EA-927CED29E352
Device       Start        End    Sectors  Size Type
/dev/sdh1       34       2047       2014 1007K BIOS boot
/dev/sdh2     2048    1050623    1048576  512M EFI System
/dev/sdh3  1050624 1953525134 1952474511  931G Solaris /usr & Apple ZFS

root@Iteanova019pve:~# sgdisk -G /dev/sdh
The operation has completed successfully.


BUT
root@Iteanova019pve:~# zpool replace -f  rpool sdb3 sdh3
cannot replace sdb3 with sdh3: no such device in pool

root@Iteanova019pve:~# zpool replace -f  rpool /dev/sdb3 /dev/sdh3
cannot replace /dev/sdb3 with /dev/sdh3: no such device in pool
 
The names of the vdevs use the id of the disk (e.g. scsi-xxxxxxxxxxxx), which is recommended when adding disks to ZFS pools. So you need to use the ids of the disks in the replace command as well (see the example further below). You can list them via ls -alh /dev/disk/by-id

The ids of the old disks in your rpool seem to be as follows (please make sure to double-check):

Code:
ata-ST1000NX0443_W473C9CV-part3
scsi-35000c50033f678bb-part3
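
With the correct ids, the replace command should then look roughly like this; the id of the new disk is just a placeholder here, take the real one from /dev/disk/by-id:

Code:
# replace the old vdev (referenced by its id as shown in zpool status)
# with the matching partition of the new disk; <new-disk-id> is a placeholder
zpool replace -f rpool scsi-35000c50033f678bb-part3 /dev/disk/by-id/<new-disk-id>-part3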

Can you post the output of the following commands? Some of them you have already posted, but I want to check the current status, just to make sure.

Code:
ls -alh /dev/disk/by-id
zpool status -v
fdisk -l /dev/disk/by-id/scsi-35000c50033f678bb-part3
fdisk -l /dev/disk/by-id/scsi-35000c5004279423b
fdisk -l /dev/disk/by-id/ata-ST1000NX0443_W473C9CV-part3
 
