I have a 2-way SATADOM mirror in ZFS today that I use as my boot devices. I kept /var/log on them, and they've now worn out after a few years.
Since deploying this system I've added 3x 1TB Intel SSDs. Right now all they're being used for is a single 100G partition on each, in a 3-way mirror as the special device in a ZFS pool, leaving me plenty of room for other activities.
Rather than replace my SATADOM 2-way mirror with more SATADOMs, what I would like to do instead is move to a 3-way mirror of new partitions on the Intel SSDs.
This is the current partition layout of the Intel SSDs. There is no data on the p2 partition; I created it in anticipation of using it for something but never did. So the only relevant data is on the p1 partition, which holds the special device data for the sata1 zpool.
Code:
Disk /dev/nvme6n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: INTEL SSDPELKX010T8
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: *****************************
Device              Start        End    Sectors   Size Type
/dev/nvme6n1p1       2048  209717247  209715200   100G Linux filesystem
/dev/nvme6n1p2  209717248 1953523711 1743806464 831.5G Linux filesystem
This system uses proxmox-boot-tool, and was installed using the built-in ZFS option.
Current relevant zpools:
Code:
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:01:15 with 0 errors on Sun Oct 12 00:25:17 2025
config:

        NAME                                               STATE     READ WRITE CKSUM
        rpool                                              ONLINE       0     0     0
          mirror-0                                         ONLINE       0     0     0
            ata-SuperMicro_SSD_SMC0515D93321BC91027-part3  ONLINE       0     0     0
            ata-SuperMicro_SSD_SMC0515D93321BC93027-part3  ONLINE       0     0     0

errors: No known data errors

  pool: sata1
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: resilvered 8.16T in 2 days 02:25:13 with 0 errors on Wed Oct 15 10:03:02 2025
config:

        NAME                                               STATE     READ WRITE CKSUM
        sata1                                              DEGRADED     0     0     0
          raidz2-0                                         DEGRADED     0     0     0
            ata-WDC_WD161KRYZ-01AGBB0_AAAAAAAA             ONLINE       0     0     0
            ata-WDC_WD161KRYZ-01AGBB0_BBBBBBBB             ONLINE       0     0     0
            ata-WDC_WD161KRYZ-01AGBB0_CCCCCCCC             ONLINE       0     0     0
            ata-WDC_WD161KRYZ-01AGBB0_DDDDDDD              ONLINE       0     0     0
            ata-WDC_WD161KRYZ-01AGBB0_EEEEEEEE             ONLINE       0     0     0
            spare-5                                        DEGRADED     0     0     0
              ata-WDC_WD161KRYZ-01AGBB0_FFFFFFFFF          FAULTED     71     0     0  too many errors
              ata-WDC_WD161KRYZ-01AGBB0_GGGGGGG            ONLINE       0     0     0
        special
          mirror-1                                         ONLINE       0     0     0
            nvme-INTEL_SSDPELKX010T8_AAAAAAAAAAAAAA-part1  ONLINE       0     0     0
            nvme-INTEL_SSDPELKX010T8_BBBBBBBBBBBBBB-part1  ONLINE       0     0     0
            nvme-INTEL_SSDPELKX010T8_CCCCCCCCCCCCCC-part1  ONLINE       0     0     0
        cache
          ata-Samsung_SSD_870_QVO_4TB_AAAAAAAAAAAAAA       ONLINE       0     0     0
        spares
          ata-WDC_WD161KRYZ-01AGBB0_GGGGGGG                INUSE     currently in use
I'm reading the procedure for this, which is as follows (from the admin guide):
Code:
Changing a failed bootable device
Depending on how Proxmox VE was installed it is either using systemd-boot or GRUB through proxmox-boot-tool [3] or plain GRUB as bootloader (see Host Bootloader). You can check by running:
# proxmox-boot-tool status
The first steps of copying the partition table, reissuing GUIDs and replacing the ZFS partition are the same. To make the system bootable from the new disk, different steps are needed which depend on the bootloader in use.
# sgdisk <healthy bootable device> -R <new device>
# sgdisk -G <new device>
# zpool replace -f <pool> <old zfs partition> <new zfs partition>
Note Use the zpool status -v command to monitor how far the resilvering process of the new disk has progressed.
With proxmox-boot-tool:
# proxmox-boot-tool format <new disk's ESP>
# proxmox-boot-tool init <new disk's ESP> [grub]
So procedurally speaking, I think my order of operations is as follows (rough command sketch after the list):
1.) One at a time, take the Intel SSDs and detach their 100G special partitions from the sata1 zpool.
2.) Wipe the partition table.
3.) Use the sgdisk -R command to replicate the partition table from one of the 32G SATADOMs (old) to the 1TB Intel SSD (new), then sgdisk -G to randomize the GUIDs.
3a.) QUESTION: does this create the system and data partitions totalling approximately 32GB on the new drive, and then leave a bunch of unallocated space after the data partition?
4.) Attach the new Intel SSD's ZFS data partition to the rpool boot mirror, and run the proxmox-boot-tool format and init commands against its new ESP.
5.) Assuming 3a is correct, create a new 100G partition (for the special device) in the free space after the ~32G boot data partition.
6.) Attach this new 100G partition back into the sata1 zpool's 3-way special device mirror.
7.) Repeat for all 3 Intel SSDs.
8.) Detach the SATADOMs from the rpool zpool.
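To make sure I'm reading the guide right, here's a rough sketch of the commands I'm picturing for the first disk. I'm using /dev/nvme6n1 as the example target, assuming the copied SATADOM layout is the standard Proxmox ZFS install layout (p1 BIOS boot, p2 ESP, p3 ZFS data) so the new special partition would land at p4, and assuming the whole-disk by-id name matches the -part3 name shown in zpool status above. The BF01 type code and the specific attach targets are just my guesses at sensible choices, not anything from the guide.
Code:
# 1) detach this SSD's existing 100G special partition from sata1
zpool detach sata1 nvme-INTEL_SSDPELKX010T8_AAAAAAAAAAAAAA-part1

# 2) wipe the old partition table
sgdisk --zap-all /dev/nvme6n1

# 3) copy the partition table from a healthy SATADOM, then randomize GUIDs
sgdisk /dev/disk/by-id/ata-SuperMicro_SSD_SMC0515D93321BC91027 -R /dev/nvme6n1
sgdisk -G /dev/nvme6n1
# (3a: I'd expect sgdisk -p /dev/nvme6n1 to now show the ~32G layout
#  with the rest of the disk left unallocated)

# 4) attach the new ZFS partition to the rpool mirror, make the new ESP bootable
zpool attach rpool ata-SuperMicro_SSD_SMC0515D93321BC91027-part3 /dev/nvme6n1p3
proxmox-boot-tool format /dev/nvme6n1p2
proxmox-boot-tool init /dev/nvme6n1p2

# 5) carve a new 100G partition out of the free space for the special device
sgdisk -n 4:0:+100G -t 4:BF01 /dev/nvme6n1

# 6) attach it back into the sata1 special mirror
zpool attach sata1 nvme-INTEL_SSDPELKX010T8_BBBBBBBBBBBBBB-part1 /dev/nvme6n1p4

# 7) repeat for the other two SSDs, watching zpool status -v for resilvering
# 8) once everything is healthy, detach the SATADOMs from rpool
zpool detach rpool ata-SuperMicro_SSD_SMC0515D93321BC91027-part3
zpool detach rpool ata-SuperMicro_SSD_SMC0515D93321BC93027-part3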
Did I get this right?