[SOLVED] [PVE 8.1] How to replicate rpool on ZFS mirror to separate single disk pool and boot from new pool?

Hello,

I've currently got a PVE node running the boot rpool/OS on a two-disk ZFS mirror.

I want to get those disks out and replace them with smaller disks (they're 1.9 TB each, and I'd like to replace them with 200 GB disks).

The computer is a weird little x86 SBC and I can't easily get the disks out without unscrewing and disassembling everything (think an awkward Raspberry Pi case). Doing that whole dance twice (to replace each SSD one at a time and resilver the rpool) would be tedious and error-prone, so I'm curious if there's an easier way.

I have a nice 1 TB external USB SSD that Proxmox sees without issue. I'd like to, if possible, replicate the rpool onto the external USB disk (alone in a single pool), make the external USB disk's pool bootable, and then boot off of that. Then I can more easily disassemble the whole unit once, swap the new boot drives in, and replicate the rpool onto the new boot drives.

Is this possible? If so, is there a good guide somewhere? I'm very new to ZFS, so I think my googling might be failing me because I don't know the right question to ask.

Thanks!
 
Here is diagnostic data:

Code:
# pveversion -v
proxmox-ve: 8.1.0 (running kernel: 6.5.11-7-pve)
pve-manager: 8.1.3 (running version: 8.1.3/b46aac3b42da5d15)
proxmox-kernel-helper: 8.1.0
pve-kernel-6.2: 8.0.5
proxmox-kernel-6.5: 6.5.11-7
proxmox-kernel-6.5.11-7-pve-signed: 6.5.11-7
proxmox-kernel-6.5.11-6-pve-signed: 6.5.11-6
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
proxmox-kernel-6.2.16-19-pve: 6.2.16-19
proxmox-kernel-6.2.16-18-pve: 6.2.16-18
proxmox-kernel-6.2.16-15-pve: 6.2.16-15
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph-fuse: 17.2.7-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx7
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.2-1
proxmox-backup-file-restore: 3.1.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.2
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-2
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.1.4
pve-qemu-kvm: 8.1.2-4
pve-xtermjs: 5.3.0-2
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve1

Code:
# zpool status
  pool: nvme-singledisk-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:16 with 0 errors on Sun Dec 10 00:24:17 2023
config:

        NAME                                                 STATE     READ WRITE CKSUM
        nvme-singledisk-pool                                 ONLINE       0     0     0
          nvme-Samsung_SSD_970_EVO_Plus_1TB_S6S1NS0T805122H  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:14 with 0 errors on Sun Dec 10 00:24:16 2023
config:

        NAME                                                     STATE     READ WRITE CKSUM
        rpool                                                    ONLINE       0     0     0
          mirror-0                                               ONLINE       0     0     0
            ata-SAMSUNG_MZ7LH1T9HMLT-000AZ_S3ZPNY0M601434-part3  ONLINE       0     0     0
            ata-SAMSUNG_MZ7LH1T9HMLT-000AZ_S3ZPNY0M603081-part3  ONLINE       0     0     0

errors: No known data errors


Code:
# sgdisk -p /dev/sda
Disk /dev/sda: 3750748848 sectors, 1.7 TiB
Model: SAMSUNG MZ7LH1T9
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): 740A1969-037E-4213-9AAF-61583A9EBCA0
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 3750748814
Partitions will be aligned on 8-sector boundaries
Total free space is 376013454 sectors (179.3 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              34            2047   1007.0 KiB  EF02 
   2            2048         2099199   1024.0 MiB  EF00 
   3         2099200      3374735360   1.6 TiB     BF01
  
# sgdisk -p /dev/sdb
Disk /dev/sdb: 3750748848 sectors, 1.7 TiB
Model: SAMSUNG MZ7LH1T9
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): D35A1A32-3021-4CB9-ACD6-B7EC431A4E87
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 3750748814
Partitions will be aligned on 8-sector boundaries
Total free space is 376013454 sectors (179.3 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              34            2047   1007.0 KiB  EF02 
   2            2048         2099199   1024.0 MiB  EF00 
   3         2099200      3374735360   1.6 TiB     BF01
 
Yes, it's possible. Start by following the guide here: https://pve.proxmox.com/wiki/ZFS_on_Linux#sysadmin_zfs_change_failed_dev, but stop before the zpool replace step. Instead, create a temporary pool on /dev/sdX3 (partition 3 of the replacement disk) and zfs send/receive your original rpool over to it. Then reboot into a rescue environment with the original disks removed and rename the pool to rpool. After that, you can add your next disk and follow the normal replacement procedure. If the target disk is attached to a different system, you can simply name the pool rpool from the start.
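Roughly, the shell steps look like this. Treat it as a sketch rather than a recipe: it assumes the USB disk shows up as /dev/sdc, and the temporary pool name usb-rpool and snapshot name migrate are placeholders, so adapt them (and the proxmox-boot-tool mode) to your own setup.

Code:
# Sketch only: /dev/sdc, "usb-rpool" and "@migrate" are placeholder names.
# Partition the new disk with the usual Proxmox layout (BIOS boot, ESP, ZFS):
sgdisk -n1:34:2047  -t1:EF02 /dev/sdc
sgdisk -n2:2048:+1G -t2:EF00 /dev/sdc
sgdisk -n3:0:0      -t3:BF01 /dev/sdc

# Make the new ESP bootable:
proxmox-boot-tool format /dev/sdc2
proxmox-boot-tool init /dev/sdc2

# Create a temporary pool on partition 3 of the new disk:
zpool create -f -o ashift=12 usb-rpool /dev/sdc3

# Snapshot rpool recursively and replicate it (-u keeps the received
# datasets from being mounted over the running system):
zfs snapshot -r rpool@migrate
zfs send -R rpool@migrate | zfs receive -u -F usb-rpool

# Export the pool, shut down, detach the old mirror, then from a rescue or
# live environment rename the pool by importing it under the new name:
zpool export usb-rpool
zpool import -f usb-rpool rpool
zpool export rpool

# Later, once the small replacement disks are installed and partitioned the
# same way, a normal "zpool attach rpool <usb-part3> <new-part3>" turns the
# single-disk pool back into a mirror.

After booting from the new pool, it may also be worth checking proxmox-boot-tool status and, if the old mirror's ESPs are still listed, running proxmox-boot-tool clean to drop the stale entries.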
 
Thanks! This is exactly what I needed. :)
 
