[SOLVED] Need help with swapping a couple of small drives with bigger ones and switching to EFI on ZFS

Chicken76

I need some help replacing a couple of smaller drives with another couple of bigger ones, while maintaining the same ZFS filesystem but enlarging the pool and also switching to EFI boot.
This is on PVE 6.4 in preparation for upgrading to 7.

This is what /dev/sda and /dev/sdb look like right now:

Code:
Device           Start         End     Sectors   Size  Type
/dev/sda1           34        2047        2014  1007K  BIOS boot
/dev/sda2         2048  1952432093  1952430046   931G  Solaris /usr & Apple ZFS
/dev/sda9   1952432094  1952448478       16385     8M  Solaris reserved 1

Code:
NAME        STATE     READ WRITE CKSUM
rpool       ONLINE       0     0     0
  mirror-0  ONLINE       0     0     0
    sda2    ONLINE       0     0     0
    sdb2    ONLINE       0     0     0



The steps that I think need to be done are as follows:

  1. Connect the bigger drives to the machine (/dev/sdc and /dev/sdd) and create GPT partition tables on them
  2. Partition the drives like this (structure taken from a fresh install of a PVE 7 machine in EFI mode; see the sgdisk sketch after this list):
    • first partition:
      • start sector: 34
      • end sector: 2047
      • change partition type to 4 (BIOS boot)
      • NOTE: is this really necessary if I'm only going to boot in EFI mode? Also, how do I actually start at sector 34? fdisk won't let me start below 2048
    • second partition
      • start sector: 2048
      • end sector: 1050623
      • change partition type to 1 (EFI system)
    • third partition
      • start sector: 1050624
      • end sector: last sector minus a few megabytes worth of sectors
      • change partition type to 48 (Solaris /usr & Apple ZFS)
  3. Add the two new partitions to rpool
    • first check if autoexpand is on for rpool
      • zpool get autoexpand rpool
    • if it's off, then set it to on
      • zpool set autoexpand=on rpool
    • now add one of the partitions from the new drives to the mirror
      • zpool attach rpool /dev/sda2 /dev/sdc3
    • wait for rpool to resilver (check with: zpool status rpool) and attach the remaining partition to rpool
      • zpool attach rpool /dev/sda2 /dev/sdd3
  4. What do I need to do with the first partitions (BIOS boot) on the new drives? I need help here.
  5. The EFI system partitions need to be formatted as FAT32
    • mkfs.fat -F 32 /dev/sdc2
    • mkfs.fat -F 32 /dev/sdd2
  6. Populate the EFI system partitions and write bootloaders
    • I really need help with this step
  7. Boot in EFI mode for the first time
    • reboot, go into the BIOS and set the boot mode to UEFI and the first and second boot devices to be the two big drives (sdc & sdd)
  8. Assuming the machine booted fine, now it's time to remove the small drives from the ZFS array
    • check first that the array is all synced up with: zpool status rpool
    • optional: scrub the zfs pool to make sure the data is fine on the new drives before removing the old ones: zpool scrub rpool
    • remove the old drives from the pool
      • zpool detach rpool /dev/sda2
      • zpool detach rpool /dev/sdb2
  9. At this point all that's left to do is shut down the machine and remove the old drives.
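For step 2, this is roughly the sgdisk invocation I have in mind (a rough sketch only; shown for /dev/sdc, /dev/sdd would get the same treatment, and the sector numbers are the ones from the fresh PVE 7 layout above):
Code:
# wipe any old partition table and create a fresh GPT
sgdisk --zap-all /dev/sdc
# optional BIOS boot partition; -a1 drops the alignment to 1 sector so a start at 34 is accepted
sgdisk -a1 -n1:34:2047 -t1:EF02 /dev/sdc
# 512 MiB EFI system partition
sgdisk -n2:2048:1050623 -t2:EF00 /dev/sdc
# ZFS partition; end sector 0 means "use the rest of the disk" (subtract a few MiB if you want slack at the end)
sgdisk -n3:1050624:0 -t3:BF01 /dev/sdc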
There are a couple of steps (in bold) that I need help with. Other than that, is this list of steps complete?
 
I believe point 4 is for GRUB only.
I think point 5 and 6 can be done with proxmox-boot-tool, but you might need to rescue boot from the Proxmox ISO and chroot into the installation to run it.
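Something along these lines should do it (an untested sketch; the device names are the ESPs from your step list, and you would repeat it for the second drive):
Code:
# format the new 512 MiB partition as an ESP and register it with proxmox-boot-tool
proxmox-boot-tool format /dev/sdc2
proxmox-boot-tool init /dev/sdc2
# copy the kernels and boot config onto all registered ESPs and check the result
proxmox-boot-tool refresh
proxmox-boot-tool status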
I think the approach of adding the two (larger) mirror disks and then removing the two smaller disks is sound in principle. I have used it before as well.

You could test all this in a VM first to make sure it works ;-). Start with a fresh install of Proxmox with GRUB and without OVMF (or a clone of your current drives) and see if your steps end up with a working install on the bigger virtual disks with OVMF.
 
@leesteken Thank you for the advice. I'll try all the steps in a VM. However, I can't seem to find ISO installers older than 6.4 on the website.
On this page it says I need an installation between 5.4 and 6.3. Does anyone know if there are at least some checksums from an official source, so I don't use an ISO from WayBackMachine that contains some nasties?
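(If I do find an official checksum, verifying the download would just be something like the following; the filename is only an example.)
Code:
sha256sum proxmox-ve_6.4-1.iso   # compare the output against the officially published SHA256 sum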
 
On this page it says I need an installation between 5.4 and 6.3.
the howto is meant to say that you can use it with machines you originally set up with PVE 5.4 to 6.3 - not that you need this particular version?
(How could the phrasing be improved to make this more clear?)

else - if you're running 6.4 you can of course use the 6.4 installer (you just should not use an ISO with a newer ZFS version than what you're currently running)
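you can check what you're currently running with something like:
Code:
zfs version   # prints the ZFS userland and kernel module versions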

NOTE: is this really necessary if I'm only going to boot in EFI mode? Also, how do I actually start at sector 34? fdisk won't let me start below 2048
I'd probably follow the replacing a bootable disk howto from the reference docs, then delete the last partition and recreate it to fill 100% of the disk:
https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#_zfs_administration
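the "recreate the last partition" part would be roughly (a sketch only; it assumes the ZFS partition ends up as partition 3 on the new disk, as in your planned layout):
Code:
# delete the copied (smaller) ZFS partition and recreate it from the same start sector to the end of the disk
sgdisk -d3 /dev/sdc
sgdisk -n3:0:0 -t3:BF01 /dev/sdc
# have the kernel re-read the partition table
partprobe /dev/sdc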

I hope this helps!
 
@Stoiko Ivanov Thank you for the advice. I am still trying out all the steps in a virtual machine.

When adding the new partition to the pool, zpool complains like this:
Code:
root@pvetest:~# zpool attach rpool /dev/sda3 /dev/sdb2
cannot attach /dev/sdb2 to /dev/sda3: no such device in pool

I ended up adding it like this:
Code:
root@pvetest:/dev/disk/by-id# zpool attach rpool /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part3 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part2
This will be the preferred method going forward, right?

the howto is meant to say that you can use it with machines you originally set up with PVE 5.4 to 6.3 - not that you need this particular version?
(How could the phrasing be improved to make this more clear?)
Perhaps the wording should specify that versions between 5.4 and 6.3 create the 512 MiB partition, so the guide can work, while versions prior to 5.4 do not; and since you cannot shrink a ZFS partition, a fresh install is the only option (unless the user remembers to never upgrade the pool).
 
root@pvetest:~# zpool attach rpool /dev/sda3 /dev/sdb2
Why does one disk have 3 partitions and the other only 2? (If you follow the guide I linked, both should end up with 3 partitions.)

This will be the preferred method going forward, right?
using /dev/disk/by-id paths is preferred and more stable than /dev/sdX paths - yes
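to find the by-id name that belongs to a given /dev/sdX device, you can check the symlinks, e.g.:
Code:
ls -l /dev/disk/by-id/ | grep sdc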
 
Why does one disk have 3 partitions and the other only 2? (If you follow the guide I linked, both should end up with 3 partitions.)
That's because I didn't create the first partition (BIOS boot). How do you create a partition before sector 2048? Neither fdisk nor gdisk will let me.

But the good news is it worked even without the first partition. There was a warning but every step proceeded fine.

The only other warning I get after removing the smaller drive is in proxmox-boot-tool status:
Code:
root@pvetest:~# proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
ls: cannot access '/var/tmp/espmounts/EEBB-C2E0/vmlinuz-*': No such file or directory
EEBB-C2E0 is configured with: uefi (versions: 5.4.106-1-pve, 5.4.174-2-pve), grub (versions: )
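I'm guessing something like the following would clear that up once the old drives are really gone, but I haven't tried it yet:
Code:
proxmox-boot-tool clean     # drop UUIDs of ESPs that no longer exist from /etc/kernel/proxmox-boot-uuids
proxmox-boot-tool refresh   # copy the current kernels and boot config to the remaining ESPs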
 
That's because I didn't create the first partition (BIOS boot). How do you create a partition before sector 2048? Neither fdisk nor gdisk will let me.
Go into the extra menu (experts only) and change the partition alignment to 1 sector (the exact names might be slightly different). Fortunately, you don't need this partition and you can always add it later (it does not have to be number 1).
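In gdisk that sequence would be roughly (keystrokes from memory, so double-check the menu text):
Code:
gdisk /dev/sdc
  x    # extra functionality (experts only) menu
  l    # set the sector alignment value
  1    # align to 1 sector, so start sectors below 2048 are accepted
  m    # return to the main menu
  n    # create the partition, start sector 34 is now allowed
  w    # write the table and exit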
 
