[SOLVED] Stuck trying to expand ZFS boot drive

Rxunique

New Member
Feb 5, 2024
I had a pair of 200 GB SSDs in a ZFS mirror, went through the process to successfully resilver onto a pair of 400 GB drives, and made sure each of the new drives can boot the system. But I'm stuck: I can't expand the pool size.

I followed a couple of threads here using the export / import method and got a "pool or dataset is busy" error:

Code:
root@r730:~# zpool export -f rpool
cannot unmount '/': pool or dataset is busy
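
I assume that is expected, since the root filesystem itself lives on rpool and can't be unmounted while the system is booted from it; findmnt (standard util-linux, nothing PVE-specific) shows where / is mounted from:

Code:
findmnt /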

I then followed another thread using the offline / online method. All commands executed, but nothing changed; see below:

Code:
root@r730:~# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: resilvered 1.31M in 00:00:00 with 0 errors on Sat Feb 17 15:12:59 2024
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda3    ONLINE       0     0     0
            sdb3    ONLINE       0     0     0

errors: No known data errors
root@r730:~# zpool list rpool
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool   185G  1.88G   183G        -         -     0%     1%  1.00x    ONLINE  -
root@r730:~# zpool offline rpool sda3
root@r730:~# zpool online -e rpool sda3
root@r730:~# zpool offline rpool sdb3
root@r730:~# zpool online -e rpool sdb3
root@r730:~# zpool list rpool
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool   185G  1.88G   183G        -         -     0%     1%  1.00x    ONLINE  -
root@r730:~# zpool get autoexpand rpool
NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  on      local


Have I done anything wrong, or is there another method?

PS: autoexpand has been on from the beginning; I only ran the get command last to log a result here.
 
Found another post describing a similar issue; unfortunately it was not resolved.

I used these commands:

Code:
# sgdisk <healthy bootable device> -R <new device>
# sgdisk -G <new device>
# zpool replace -f <pool> <old zfs partition> <new zfs partition>
# proxmox-boot-tool format <new disk's ESP>
# proxmox-boot-tool init <new disk's ESP>
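
To make that concrete, here is the same sequence with made-up device names (ata-OLD200GB_* and ata-NEW400GB are placeholders for illustration only; on a default PVE layout the ESP is partition 2 and the ZFS partition is partition 3):

Code:
# copy the partition layout from the remaining healthy mirror member to the new disk
sgdisk /dev/disk/by-id/ata-OLD200GB_HEALTHY -R /dev/disk/by-id/ata-NEW400GB
# randomize the GUIDs so the clone does not collide with the source disk
sgdisk -G /dev/disk/by-id/ata-NEW400GB
# swap the pulled disk's ZFS partition for the new one and let it resilver
zpool replace -f rpool ata-OLD200GB_REPLACED-part3 ata-NEW400GB-part3
# make the new disk bootable via its ESP
proxmox-boot-tool format /dev/disk/by-id/ata-NEW400GB-part2
proxmox-boot-tool init /dev/disk/by-id/ata-NEW400GB-part2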

I just noticed that the official PVE docs include a [grub] option that I missed when executing mine. It still worked, but is there any remote chance this is the cause of the pool not expanding?

Code:
# proxmox-boot-tool init <new disk's ESP> [grub]
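
If I read the docs right, the [grub] argument only selects GRUB instead of systemd-boot (needed for legacy BIOS boot), so it shouldn't affect pool sizing either way. proxmox-boot-tool status shows how each ESP was initialized:

Code:
# proxmox-boot-tool status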
 
Enabling autoexpand and a reboot should usually do the trick. Maybe the partitions are not resized yet: what is the output of gdisk -l /dev/sda and gdisk -l /dev/sdb?
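
A quick lsblk also shows at a glance whether each -part3 partition is still smaller than its disk (assuming the disks really are sda and sdb, as in your status output):

Code:
lsblk -o NAME,SIZE /dev/sda /dev/sdb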
 
Below is the procedure I used to expand my ZFS root pool; hopefully it helps someone else.

Code:
Expanding a ZFS mirrored Root pool on Proxmox

Replace each drive in the ZFS root pool one at a time until all disks have been upgraded to larger disks. Next we will expand the partition size to fill the free space on each drive.

# print out zpool status and list disks
zpool status

  pool: rpool
 state: ONLINE
  scan: resilvered 2.56G in 00:00:07 with 0 errors on Wed Nov  6 18:18:56 2024
config:

        NAME                                       STATE     READ WRITE CKSUM
        rpool                                      ONLINE       0     0     0
          mirror-0                                 ONLINE       0     0     0
            ata-CT240BX500SSD1_2405E893AE3C-part3  ONLINE       0     0     0
            ata-CT240BX500SSD1_2409E89ABD4E-part3  ONLINE       0     0     0


# print out pool size
zpool list rpool

NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool    55G  2.42G  52.6G        -         -     3%     4%  1.00x    ONLINE  -

# ensure autoexpand is enabled for the root pool
zpool set autoexpand=on rpool
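# verify the setting took effect
zpool get autoexpand rpool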

# get disk and partition information
# example: "ls -l /dev/disk/by-id/<by-id-name>"
# using the information from "zpool status" we see the disk "ata-CT240BX500SSD1_2405E893AE3C-part3" as part of "rpool";
# in the next command we will drop the "-part3" suffix to refer to the whole disk
ls -l /dev/disk/by-id/ata-CT240BX500SSD1_2405E893AE3C
   (/dev/disk/by-id/ata-CT240BX500SSD1_2405E893AE3C -> ../../sda)

# we see that "/dev/disk/by-id/ata-CT240BX500SSD1_2405E893AE3C" is a symlink to "/dev/sda"

# use fdisk to print out the partition table of "/dev/sda"
fdisk -l /dev/sda

/dev/sda1       34      2047      2014 1007K BIOS boot
/dev/sda2     2048   1050623   1048576  512M EFI System
/dev/sda3  1050624 117231374 116180751 55.4G Solaris /usr & Apple ZFS

# we see that partition "/dev/sda3" is marked as "Solaris /usr & Apple ZFS" and is the last partition on the disk.
# Note: if there are additional partitions beyond "/dev/sda3" you will be unable to resize the partition
# and should not proceed further.

# next we will install "growpart" which is part of the "cloud-guest-utils" package
apt update
apt install cloud-guest-utils

# using the command "growpart" we will expand partition number 3 on "/dev/sda" to use all available space.
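# optional: preview the resize first; growpart's --dry-run reports the planned change without applying it
growpart --dry-run /dev/sda 3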
growpart /dev/sda 3

# after resizing the partition, update the kernel's partition table
partprobe /dev/sda

* repeat for each disk in the zpool *
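
# alternatively, instead of a reboot, ZFS can be told to grow onto the resized partitions online;
# "zpool online -e" expands an already-online device, and in a mirror all members must be
# expanded before the new space becomes available to the pool
zpool online -e rpool ata-CT240BX500SSD1_2405E893AE3C-part3
zpool online -e rpool ata-CT240BX500SSD1_2409E89ABD4E-part3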

# reboot proxmox and verify the root pool has expanded
zpool list rpool

NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool   223G  2.43G   221G        -         -     0%     1%  1.00x    ONLINE  -