PVE 6.3-4 rpool will not autoexpand

Craig Tosi

I have a PVE instance with a mirrored rpool. I recently replaced the 2 x 512 GB SSD members with 1024 GB drives. The replacement completed without incident and the pool reports a normal status, as below:

Code:
root@pve5:/mnt# lsblk /dev/sda
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 931.5G  0 disk
├─sda1   8:1    0  1007K  0 part
├─sda2   8:2    0 465.8G  0 part
└─sda9   8:9    0     8M  0 part

root@pve5:/mnt# lsblk /dev/sdb
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdb      8:16   0 931.5G  0 disk
├─sdb1   8:17   0  1007K  0 part
├─sdb2   8:18   0 465.8G  0 part
└─sdb9   8:25   0     8M  0 part

root@pve5:/mnt# zpool status rpool
  pool: rpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: resilvered 884K in 00:00:00 with 0 errors on Tue May 11 07:59:33 2021
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0

errors: No known data errors

I have set the autoexpand property to on, but the pool will not expand & in fact shows an EXPANDSZ value of "-":

Code:
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool   464G   129G   335G        -         -    51%    27%  1.00x    ONLINE  -
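
For reference, this is roughly how I enabled and checked it:

Code:
# enable automatic expansion on the pool
zpool set autoexpand=on rpool

# check the property and the expandable space the pool reports
zpool get autoexpand,expandsize rpool
zpool list rpool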

I have done a good deal of searching on this forum & others, which indicates I may need to offline/online the pool members, or online them with the -e switch, as below:

Code:
zpool online -e rpool sda2

None of this has resulted in expanding the pool. It feels like I'm missing something silly & small here. Can anyone put me out of my misery?
 
Hi, your partition layout is still the same as on the old drives, so sda2 and sdb2 are still 465.8G in size.
So that's why your rpool can't expand beyond that.

Best is to take one of the disks out of the pool and repartition it with a larger sda2/sdb2. The 8M sda9/sdb9 partition will then probably no longer be at the end of the disk, so you should either move it or recreate the complete layout from scratch (I would do the latter, with gdisk and sgdisk); a rough sketch follows below.
If you are not that familiar with repartitioning, I recommend testing the steps in a VM first to make sure they work.
Afterwards you can use the zpool online -e command to expand the pool.
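
For the repartitioning part, a very rough sketch of what that could look like on one disk. It is untested and assumes /dev/sdb is the disk being redone, the same layout as shown above (1007K BIOS boot partition, ZFS partition, 8M reserved partition at the end) and a legacy GRUB/BIOS boot setup; start sectors, type codes and the bootloader step may need adjusting to your system, which is exactly what the VM test is for.

Code:
# take one side of the mirror out of the pool first
zpool detach rpool sdb2

# wipe the old GPT and rebuild the layout across the whole 1T disk
sgdisk --zap-all /dev/sdb
sgdisk -a1 -n1:34:2047 -t1:EF02 /dev/sdb   # 1007K BIOS boot partition
sgdisk -n9:-8M:0 -t9:BF07 /dev/sdb         # 8M Solaris reserved partition at the end
sgdisk -n2:2048:0 -t2:BF01 /dev/sdb        # ZFS partition filling the space in between

# reinstall the bootloader on the rebuilt disk (GRUB/BIOS example)
grub-install /dev/sdb

Re-adding the enlarged sdb2 to the mirror then works like your original disk replacement: zpool attach, wait for the resilver, and only then redo the other disk.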
 
Ahhh, so autoexpand will only work if there is free space immediately after the end of the sda2/sdb2 partitions, I take it? I guess that makes sense, given that ZFS isn't concerned with any partitions other than the sdX2 ones it sits on. I'll see if I can move sda9 & sdb9 to the end of the disks.

Thank you for taking the time to respond. Very kind & helpful of you :)
 
"free space immediately after sda2/sdb2 end of partition boundary I take it?"
Not exactly. The sda2/sdb2 partitions themselves have to be enlarged; ZFS doesn't take unpartitioned free space.
After you have enlarged a partition, you add it back just as in your initial replacement steps, i.e. you attach the enlarged partition to the mirror pool again. When both larger partitions have resilvered, you can tell ZFS with the "online" command that it can use the larger space inside those partitions.
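
In commands, for one disk (device names as in the output above; the second disk is done the same way afterwards), roughly:

Code:
# add the enlarged partition back into the mirror, as in the original replacement
zpool attach rpool sda2 sdb2

# wait until the resilver has finished
zpool status rpool

# then let ZFS use the extra space inside the enlarged partition
zpool online -e rpool sdb2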

By the way, sda9 and sdb9 are reserved ZFS partitions (you can look at that with gdisk); as far as I know they are mainly there in case you need a bit of extra space when replacing drives of slightly different sizes. Trying to move that partition won't work with most partitioning tools, so I guess it's easier to delete it and recreate it at the end of the drive (sized by your preference); see the example below.
So to get the layout you want, you need to take multiple (possibly destructive) steps with gdisk. That's why I recommend testing the steps in a VM.
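
For example, to inspect the reserved partition and to recreate it at the end of the (now larger) disk; this is only one piece of the full repartitioning shown in isolation, so treat it as a sketch:

Code:
# show the partition table; partition 9 has GPT type BF07 ("Solaris reserved 1")
sgdisk -p /dev/sda
sgdisk -i 9 /dev/sda

# delete it and recreate an 8M reserved partition at the very end of the disk
sgdisk -d 9 /dev/sda
sgdisk -n9:-8M:0 -t9:BF07 /dev/sda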
 
Oh, ok. Thanks again for the explanation. I'm kind of thinking I might just back up all my VMs, CTs & configs and do a fresh install/restore. If I'm going to have downtime, at least I don't have to remove drives that way, and downtime isn't a big issue for me. Really appreciate your help. Your Karma bank balance just edged upwards. :)
 
