[TUTORIAL] HOWTO: Add a slightly smaller mirror to single ZFS root rpool

MrPete
Aug 6, 2021
I just finished accomplishing this. It's not too tough once you know how to get past a few nigglies. Many thanks to @Dunuin and to the author linked below. And of course to the awesome ProxMox team.

Prerequisites
This sequence is based on a few assumptions. If your situation is significantly different, you'll need to modify accordingly! In particular, I'm using UEFI boot, NOT GRUB.
  • Functioning ProxMox host, with a single ZFS boot drive using UEFI. (proxmox-boot-tool status says so)
  • You're comfortable with creating USB sticks from ISO files, adjusting boot device sequence in host BIOS settings, etc. (I'll refer to the tools and methods but not explain in detail as that depends completely on your exact hardware etc.)
  • You have physical access to the host, including screen, keyboard and USB for boot. Or virtualized equivalent.
  • You have a backup in case of a serious mistake.
  • You understand my perspective: from cautious experience, I do NOT count on "simple" device names like /dev/nvme0n1 staying consistent. I've been burned too many times. Thus, while it takes a little more work, using /dev/disk/by-id/<whatever> is safer. (Hint: save the ID strings in a file that's easy to copy-paste from on-screen; it saves a lot of typos. A quick example follows this list.)
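For example (a sketch; the second ID below is a made-up placeholder, substitute your own):

# list the stable device names, hiding the per-partition entries
ls -l /dev/disk/by-id/ | grep -v -- -part

# stash the IDs you care about in a file for easy copy-paste
echo 'nvme-CT1000T500SSD5_234344CF32EB' >> /root/disk-ids.txt
echo 'nvme-EXAMPLE_NEW_DISK_0123456789' >> /root/disk-ids.txt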
Big Picture Overview
We'll go through the following steps to accomplish this:
  1. Create a bootable USB stick with the current ProxMox installer.
  2. Shut down host, install the new device (I assume it is blank.)
  3. Before booting again, verify you can boot to the ProxMox installer. Do NOT use it yet! Pull the USB and boot as normal.
  4. Verify new device is visible to the host.
  5. Capture the partition table of current rpool device, and appropriate by-id names of current and new devices.
  6. Define the needed partitions on new. Verify new is smaller than current. (If new is the same size or larger than current, skip steps 7-12 below!)
  7. Setup boot management on new, create new-rpool on new
  8. Shut down all VMs. Take a snapshot of rpool. Replicate rpool to new-rpool. Fix up settings appropriately.
  9. Shut down host, boot to BIOS. Set USB as first boot device, NEW storage as second. Boot to the USB ProxMox installer and load it in debug terminal (shell) mode.
  10. Verify access to rpool and new-rpool. Import and rename the pools (rpool to old-rpool, new-rpool to rpool.) Then export.
  11. Shut down, pull USB, boot. All should be well except now you're booting from rpool on the new smaller drive.
  12. Verify all is well. Destroy old-rpool.
  13. Attach the two partitions to form a mirrored rpool. Verify. DONE!

DETAILS

Create a bootable USB stick with the current ProxMox installer.

Shut down host, install the new device (I assume it is blank.)
  • Make sure the new device is visible to the BIOS of the host. (This was actually my biggest hassle: I had to make some tough hardware decisions because I didn't have 4 spare PCIe lanes for the new NVMe drive. It was invisible?!!!)
Before booting into ProxMox, verify you can boot to the ProxMox installer. Do NOT use it yet! Pull the USB and boot as normal.
  • This is more BIOS work, ensuring you have the correct boot order sequence.
  • If you can boot to the installer screen, you're fine. Just shut off the hardware, pull USB, and boot back to the usual ProxMox.
Verify new device is visible to the host
  • lsblk should show you current and new devices.
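One handy invocation (just one way to do it) adds size, model and serial columns so the new disk is easy to spot:

lsblk -o NAME,SIZE,MODEL,SERIAL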
Capture the partition table of current rpool device, and appropriate by-id names of current and new devices.
  • You do not want to lose the partition table info.
  • ls /dev/disk/by-id will show all disks by ID. There may be multiple IDs for the exact same device; I chose the shortest one for current and new.
  • From here on out, I'm going to write /dev/disk/by-id/<current-id> but that really means (for example) /dev/disk/by-id/nvme-CT1000T500SSD5_234344CF32EB
  • To show the current device partition table: sgdisk -p /dev/disk/by-id/<current-id> (Be sure it follows the pattern in the next step. If not, something is seriously wrong, i.e. these are not the instructions you need.)
  • To save the partition table on disk: sgdisk /dev/disk/by-id/<current-id> -b /tmp/part-backupfile.bin
  • NOTE: as long as it's the standard partition table, you can easily recreate it without having saved it! They all follow the same design. However, the unique UUIDs will change... AND each partition table's details are specific to one drive (or one exact drive size):
    • Although partition three is defined the same way (i.e. "use the rest of the disk"), the actual partition entry will differ, particularly in our case of adding a new smaller drive: the end sector number is different. Thus, you can't just copy the partition table from old to new! You would get a broken partition and a broken GPT (because a secondary copy of the GPT is stored at the end of the disk.)
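As a quick sanity check (a sketch; the <...-id> paths are placeholders as above), you can compare the raw sector counts of the two disks before touching anything:

# size of each disk in 512-byte sectors (blockdev is part of util-linux)
blockdev --getsz /dev/disk/by-id/<current-id>
blockdev --getsz /dev/disk/by-id/<new-id>
# if new < current, partition 3's end sector MUST differ, so don't clone the table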
Define the needed partitions on new. Verify new is smaller than current. (If new is the same size or larger than current, skip to the final attach step!)
  • You already have sgdisk -p for current, correct? OK, let's build the new disk:
#verify this disk is blank
sgdisk /dev/disk/by-id/<new-id> -p

#initialize GPT
sgdisk /dev/disk/by-id/<new-id> -o

# define the partitions, using 1-sector alignment to get it exact (NOTE: all partitions get new IDs, as is appropriate)
sgdisk /dev/disk/by-id/<new-id> -a 1 -n 1:34:2047 -t 1:ef02
sgdisk /dev/disk/by-id/<new-id> -a 1 -n 2::1050623 -t 2:ef00
sgdisk /dev/disk/by-id/<new-id> -a 1 -n 3 -t 3:bf01

# verify result
sgdisk /dev/disk/by-id/<new-id> -p
  • (If the final sector number on new is not smaller than current, skip to the last step below!)
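To check those end sectors directly, sgdisk can print the details of a single partition (a quick sketch, same placeholder paths):

# print partition 3 details on each disk, including First/Last sector
sgdisk -i 3 /dev/disk/by-id/<current-id>
sgdisk -i 3 /dev/disk/by-id/<new-id>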
Setup boot management on new, create new-rpool on new
  • do ls /dev/disk/by-id again. You should see your new disk with three new partitions labeled -part1, -part2 and -part3. We're going to use part2 and part3 ...
  • These commands will format and add the EFI partition to ProxMox boot manager:
proxmox-boot-tool format /dev/disk/by-id/<new-id>-part2 # set up a filesystem (according to type 'ef00' set above)
proxmox-boot-tool init /dev/disk/by-id/<new-id>-part2 # define this as an EFI boot partition known to ProxMox
update-initramfs -u # NOTE: this automagically does proxmox-boot-tool refresh
proxmox-boot-tool status # verify both disks are available for boot
  • Now let's create new-rpool in new device partition 3:
zpool create -o ashift=12 new-rpool /dev/disk/by-id/<new-id>-part3
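If you want to double-check that the new pool picked up the intended alignment (optional, just a sanity check):

zpool get ashift new-rpool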

Shut down all VMs. Take a snapshot of rpool. Replicate rpool to new-rpool. Fix up settings appropriately.
  • Time to shut down your VMs. Yep, we need to go offline for the transfer to go quickly (and we will soon reboot the host again.)
  • Now, we'll use send/recv of a complete snapshot to get everything into the new smaller drive:
# Make recursive snapshot of the entire pool
zfs snapshot -r rpool@migrate
# Replicate to new-rpool (I was pleasantly surprised that this went quickly for me ;) )
zfs send -R rpool@migrate | zfs recv -F new-rpool
# point bootfs at the root dataset in new-rpool. Don't worry, it will be renamed correctly soon...
zpool set bootfs=new-rpool/ROOT/pve-1 new-rpool
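Before rebooting, it's worth a quick look (optional) to confirm the datasets and the bootfs setting actually landed on the new pool:

# all datasets should now exist under new-rpool
zfs list -r new-rpool
# bootfs should read new-rpool/ROOT/pve-1
zpool get bootfs new-rpool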

Shut down host, boot to BIOS. Set USB as first boot device, NEW storage as second. Boot to the USB ProxMox installer and load it in debug terminal (shell) mode.
  • I assume you know how to accomplish these steps...
  • The current ProxMox installer has debug-install options in the main "Advanced" menu
  • Do Not Worry that you're beginning an install! We will NOT let it actually overwrite anything
  • HOWEVER, you MUST follow instructions carefully. It is certainly possible to goof and overwrite.
  • When you boot the installer into debug-terminal mode, you should see lots of text fly by, then a (root) "#" prompt. You can see a demo of this at the Aaron Lauterer link below.
  • You are looking at an initramfs shell. Type "exit" and press enter to get to a live Linux environment.
  • Now you have another "#" prompt. zpool import should list your available pools, notably rpool and new-rpool
    • NOTE: It will say they were used in another system. That's true! You are running the installer Linux, not your normal ProxMox. That's why the ZFS pools are not active. :)
  • IF you do NOT see the pools in zpool import, do NOT go further! Pull the USB, shut off the machine, boot to ProxMox and diagnose.
Verify access to rpool and new-rpool. Import and rename the pools (rpool to old-rpool, new-rpool to rpool.) Then export.
  • If you're at this step, you should be in the ProxMox installer and have seen a good result of zpool import listing rpool and new-rpool as available.
  • Let's import and rename (step one) then export again (from the installer.) This will make the new smaller rpool become the live active one in ProxMox:
zpool import -f rpool old-rpool # import with rename; -f forces it since the pool was last used on another system
zpool import -f new-rpool rpool
zpool status # everything should look nice!

zpool export old-rpool
zpool export rpool
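As a last check before shutting down (optional; it changes nothing), re-run the no-argument listing and confirm both pools now appear under their new names:

zpool import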
  • Congratulations, you're just about done!
Shut down, pull USB, boot. All should be well except now you're booting from rpool on the new smaller drive.
  • I assume you know how to do this. Clean up BIOS boot order as needed. Both disks should be visible as bootable, but I would boot from the new smaller disk.
  • proxmox-boot-tool status should look good
Verify all is well. Destroy old-rpool
  • zpool status should show only rpool; old-rpool will likely still show up in zpool import.
  • Look around. Ensure you are happy with the new rpool, the new drive in use as primary boot.
  • Let's finish off the old rpool:
zpool import old-rpool
zpool destroy old-rpool

Attach the two partitions to form a mirror for rpool. Verify. DONE!
NOTE: if you skipped to here from above, and didn't need to move over to a smaller disk, then add new to old like this (you still need the proxmox-boot-tool format/init commands from "Setup boot management on new" above, so the new disk is bootable):
zpool attach rpool /dev/disk/by-id/<old-id>-part3 /dev/disk/by-id/<new-id>-part3
zpool status
But if you DID have to move, you will add old to new:
zpool attach rpool /dev/disk/by-id/<new-id>-part3 /dev/disk/by-id/<old-id>-part3
zpool status
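The attach kicks off a resilver. On a recent OpenZFS (2.0 or newer, as shipped with current ProxMox) you can wait for it to finish before calling it done (optional convenience):

# block until the resilver completes, then show the final mirror state
zpool wait -t resilver rpool
zpool status rpool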

DONE.

References

https://aaronlauterer.com/blog/2021/proxmox-ve-migrate-to-smaller-root-disks/
https://forum.proxmox.com/threads/adding-2nd-boot-drive-is-anything-needed-on-partition-1.144164/
https://pve.proxmox.com/wiki/ZFS_on_Linux#_zfs_administration (portions of "Changing a failed bootable device")
 
Last edited:
  • Like
Reactions: Dunuin

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!

Get your subscription!

The Proxmox team works very hard to make sure you are running the best software and getting stable updates and security enhancements, as well as quick enterprise support. Tens of thousands of happy customers have a Proxmox subscription. Get yours easily in our online shop.

Buy now!