ZFS mirror - disk swapping

mike2

Hi,

I have three Proxmox servers running in a cluster.

They all have the same drives:

2x 256GB - Proxmox, ZFS RAID1

1x 1TB - storage for the VMs.

The 256GB drives are standard consumer drives. They're old and reporting several SMART errors, so I want to replace them.
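
For anyone checking the same thing, the SMART state can be confirmed with smartctl before swapping (a minimal sketch assuming smartmontools is installed; the device paths are just examples):

Code:
# Print SMART health, attributes and the error log for an old 256GB drive
smartctl -a /dev/sda
# The stable by-id path works too
smartctl -a /dev/disk/by-id/ata-Micron_1100_MTFDDAK256TBN_163613EDAB42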

I want to use Toshiba 400GB SATA 3 SSDs for this purpose. These are enterprise-class drives. Used, but in very good condition.

I tried doing this on my test host, but something went wrong every time. Mostly boot issues.

What should this process look like, step by step?

I would greatly appreciate your help.
 
So it should look like this:

Proxmox: 8.4.1
ata-Micron_1100_MTFDDAK256TBN_163613EDAB42 - old drive
ata-THNSF8400CCSE_Y7NS10TOTBST - new drive


The first steps of copying the partition table, reissuing GUIDs and replacing the ZFS partition are the same. To make the system bootable from the new disk, different steps are needed which depend on the bootloader in use.

Code:
# Copy the partition table from the healthy old disk (sda) to the new disk (sdb)
sgdisk /dev/sda -R /dev/sdb
# Give the copied partition table new random GUIDs
sgdisk -G /dev/sdb
# Swap the old rpool member for the ZFS partition on the new disk
zpool replace -f rpool ata-Micron_1100_MTFDDAK256TBN_163613EDAB42-part3 ata-THNSF8400CCSE_Y7NS10TOTBST-part3
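
It's worth letting the resilver finish before moving on to the bootloader or pulling the old disk; zpool status shows the progress (no assumptions here beyond the rpool name used above):

Code:
# Wait until the resilver completes with 0 errors before touching the old disk
zpool status rpool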

Now I'm working on booting.
Code:
# Create a fresh FAT32 ESP on the new disk's second partition
proxmox-boot-tool format /dev/sdb2
# Register the new ESP (no 'grub' argument, since this host boots via UEFI)
proxmox-boot-tool init /dev/sdb2
# Copy the current kernels and initrds onto every registered ESP
proxmox-boot-tool refresh
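
That should cover a host booted with UEFI, like this one. If a node boots in legacy BIOS mode (or you deliberately want GRUB), my understanding is that the init step takes an extra grub argument, so the variant would look roughly like this:

Code:
# Legacy/GRUB variant (only if the host is not booted via UEFI)
proxmox-boot-tool init /dev/sdb2 grub
proxmox-boot-tool refresh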

If I did everything correctly, I should get something like this from the proxmox-boot-tool status command:

Code:
root@pve02:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
9387-2D1C is configured with: uefi (versions: 6.8.12-11-pve, 6.8.12-9-pve)
D459-D744 is configured with: uefi (versions: 6.8.12-11-pve, 6.8.12-9-pve)
 
Sorry for intruding on this thread.
I installed 8.4.0 on an old server (Intel SR1630GP) in UEFI mode on a ZFS mirror.
One of the drives died, and I replaced it following https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_zfs_change_failed_dev .
Everything seems fine, except for this:
Bash:
root@prox:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
5AB6-1EAC is configured with: uefi (versions: 6.8.12-9-pve)
70FD-5519 is configured with: grub (versions: 6.8.12-9-pve)
- the ESPs show different configured types: the first (which was already in the mirror) is uefi, while the second (the newly replaced one) is grub.
The manual says nothing about this, so I can't tell whether it's normal, what I should do, or whether the system will even boot next time.
Thank you.
 
And my own answer.
I had to run these again:
Bash:
proxmox-boot-tool format <uefi partition on new drive>
proxmox-boot-tool init <uefi partition on new drive>
proxmox-boot-tool clean
proxmox-boot-tool refresh
The second command, init, this time without the grub parameter.
The system boots normally; zpool status, proxmox-boot-tool status, and the logs show no errors.
Probably done.