Hello,
I have an rpool as a ZFS RAID1 (mirror) of 2 drives that the system boots from.
I want to replace those aging drives with new ones.
My plan was to add the two new drives to the mirror (I have already added one; the other is not included yet), let it resync, and then remove the old drives from the mirror.
I don't know exactly how to do that (with which commands), but I'm fairly sure it is the correct approach. However, I'm also nearly sure I will run into trouble booting the system as soon as the old drives have been removed.
Is that the case? And if so, how can I get the system to boot from the new drives?
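Something like the following is what I imagine the sequence to be (just my guess, please correct me — the second new disk's ID is a placeholder, and the commands are only echoed here so nothing runs by accident):

```shell
# My guessed attach/resilver/detach sequence. Device IDs are taken from
# my zpool status below; NEW2 is a PLACEHOLDER for the second new disk.
# The commands are echoed, not executed, so they can be reviewed first.
OLD1="ata-ST1000NX0443_W473C9CV-part3"
OLD2="scsi-35000c50033f678bb-part3"
NEW2="scsi-PLACEHOLDER-FOR-SECOND-NEW-DISK"
CMDS=$(cat <<EOF
zpool attach rpool ${OLD1} ${NEW2}
zpool status rpool
zpool detach rpool ${OLD1}
zpool detach rpool ${OLD2}
EOF
)
echo "$CMDS"
```

The idea being: attach the second new disk to mirror-0, watch zpool status until the resilver finishes, and only then detach the two old disks.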
Code:
pool: rpool
state: ONLINE
status: Some supported and requested features are not enabled on the pool.
The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
scan: scrub repaired 0B in 01:06:43 with 0 errors on Sun May 14 01:30:56 2023
config:
Code:
NAME                                 STATE     READ WRITE CKSUM
rpool                                ONLINE       0     0     0
  mirror-0                           ONLINE       0     0     0
    ata-ST1000NX0443_W473C9CV-part3  ONLINE       0     0     0
    scsi-35000c50033f678bb-part3     ONLINE       0     0     0
    scsi-35000c5004279423b           ONLINE       0     0     0
the old disks
Code:
root@Iteanova019pve:~# fdisk -l /dev/sda
Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000NX0443
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D2E4BBF2-EE06-4487-A7EA-927CED29E352
Device Start End Sectors Size Type
/dev/sda1 34 2047 2014 1007K BIOS boot
/dev/sda2 2048 1050623 1048576 512M EFI System
/dev/sda3 1050624 1953525134 1952474511 931G Solaris /usr & Apple ZFS
root@Iteanova019pve:~# fdisk -l /dev/sdb
Disk /dev/sdb: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST91000640SS
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 85BCE5B6-8BF5-4E70-9A63-DB3B6305932D
Device Start End Sectors Size Type
/dev/sdb1 34 2047 2014 1007K BIOS boot
/dev/sdb2 2048 1050623 1048576 512M EFI System
/dev/sdb3 1050624 1953525134 1952474511 931G Solaris /usr & Apple ZFS
the new disks
Code:
root@Iteanova019pve:~# fdisk -l /dev/sdg
Disk /dev/sdg: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST91000640SS
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 84D694AD-BDFC-1A49-A1A9-24AF42DB730F
Device Start End Sectors Size Type
/dev/sdg1 2048 1953507327 1953505280 931.5G Solaris /usr & Apple ZFS
/dev/sdg9 1953507328 1953523711 16384 8M Solaris reserved 1
the unused disk
Code:
root@Iteanova019pve:~# fdisk -l /dev/sdh
Disk /dev/sdh: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST91000640SS
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
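What I notice from the fdisk output: the old disks have a BIOS boot and an EFI System partition, while the new disk I already attached got only a whole-disk ZFS partition — I assume that's exactly why booting would fail. For the still-unused disk, I imagine copying the old layout first, roughly like this (the by-id name of the new ZFS partition is a placeholder; commands only echoed, not executed):

```shell
# Guessed sketch: replicate the old disk's GPT layout onto the unused
# new disk so it also carries the BIOS-boot and EFI partitions, then
# attach only partition 3 to the pool. PLACEHOLDER must be replaced
# with the new disk's real /dev/disk/by-id/ name.
# Echoed, not executed.
SRC=/dev/sda   # old disk whose layout has the boot partitions
DST=/dev/sdh   # new, still-unused disk
CMDS=$(cat <<EOF
sgdisk ${SRC} -R ${DST}
sgdisk -G ${DST}
zpool attach rpool scsi-35000c50033f678bb-part3 /dev/disk/by-id/PLACEHOLDER-part3
EOF
)
echo "$CMDS"
```

(`sgdisk src -R dst` copies the partition table, and `sgdisk -G dst` then randomizes the GUIDs on the copy — at least that's my understanding.)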
and the boot status is
Code:
root@Iteanova019pve:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
B3AB-03CE is configured with: uefi (versions: 5.11.22-7-pve, 5.15.102-1-pve, 5.15.30-2-pve)
B3AD-3F4B is configured with: uefi (versions: 5.11.22-7-pve, 5.15.102-1-pve, 5.15.30-2-pve)
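If I understand proxmox-boot-tool correctly, the ESPs on the new disks would then still have to be formatted and registered before the old disks can go — something like this? (I'm assuming the copied layout puts the ESP on partition 2 of the new disk, as on the old ones; commands only echoed, not executed):

```shell
# Guessed follow-up: format and register the new disk's EFI System
# partition with proxmox-boot-tool, then check the result. Assumes the
# ESP landed on partition 2 of /dev/sdh after copying the layout.
# Echoed, not executed.
ESP=/dev/sdh2
CMDS=$(cat <<EOF
proxmox-boot-tool format ${ESP}
proxmox-boot-tool init ${ESP}
proxmox-boot-tool status
EOF
)
echo "$CMDS"
```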