[SOLVED] Grow individual mirrored vdev in ZPOOL

apoc

Famous Member
Oct 13, 2017
Hello all,

maybe someone with a little more ZFS-experience can give me a hint.

I am running out of space on my zpool "HDD-POOL".
Due to configuration and other reasons this pool consists of only one mirrored vdev (and a SLOG, but that shouldn't matter).
We are all victims of our own experience, so I chose an approach I am familiar with from the "good old RAID days".

Due to physical limitations in the system (hot-swap bays are full, naming convention of vdevs) I went through this procedure (a rough sketch of the commands follows below):
  1. detached the first drive from the mirror (zpool detach ...)
  2. removed the drive from the system
  3. inserted the new drive into the system
  4. adjusted /etc/zfs/vdev_id.conf -> issued a udevadm trigger
  5. attached the new drive to the existing vdev (zpool attach ...)
  6. after the resilver finished I ran a scrub to confirm all is fine.
The second drive was replaced exactly the same way.
Finally, I set the zpool to "autoexpand=on" according to the following doc: https://docs.oracle.com/cd/E19253-01/819-5461/gazgd/index.html
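For reference, one drive-swap cycle looked roughly like this (sketch only; the by-id path is a placeholder for the actual new disk):

Code:
# 1. drop the old drive from the mirror
zpool detach HDD-POOL C1-S5
# 2./3. physically swap the drive, then point the bay alias at the
#       new disk in /etc/zfs/vdev_id.conf, e.g. (placeholder path):
#       alias C1-S5  /dev/disk/by-id/ata-NEWDISK_SERIAL
# 4. re-run the udev rules so the alias shows up
udevadm trigger
# 5. mirror the new drive onto the remaining one
zpool attach HDD-POOL C1-S4 C1-S5
# 6. once the resilver has finished, verify
zpool scrub HDD-POOL
zpool status HDD-POOL
# finally, allow the pool to grow into the new space
zpool set autoexpand=on HDD-POOL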

When I now issue a zpool list command, it still shows the old size of 464 GB (500 GB drives); it should be something around 690 GB (750 GB drives). Interestingly, the EXPANDSZ column already shows 232G of expandable space:
Code:
NAME              SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
HDD-POOL          464G   413G  50.5G        -      232G    43%    89%  1.00x    ONLINE  -

The pool itself looks fine (the scrub has not finished yet; I just started it):
Code:
sudo zpool status HDD-POOL
  pool: HDD-POOL
 state: ONLINE
  scan: scrub in progress since Fri Feb 25 09:05:47 2022
    8.73G scanned at 1.09G/s, 684K issued at 85.5K/s, 413G total
    0B repaired, 0.00% done, no estimated completion time
config:

    NAME             STATE     READ WRITE CKSUM
    HDD-POOL         ONLINE       0     0     0
      mirror-0       ONLINE       0     0     0
        C1-S5        ONLINE       0     0     0
        C1-S4        ONLINE       0     0     0
    logs    
      RMS-200-part3  ONLINE       0     0     0

errors: No known data errors

What am I missing? Is there another step I have forgotten?
Thanks for your insights
All the best
 
I just created a test environment where it works:

Code:
root@dyn-043 ~ > zpool create testpool mirror /dev/sdb /dev/sdc

root@dyn-043 ~ > zpool list -v testpool
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
testpool   3.75G   100K  3.75G        -         -     0%     0%  1.00x    ONLINE  -
  mirror   3.75G   100K  3.75G        -         -     0%  0.00%      -  ONLINE
    sdb        -      -      -        -         -      -      -      -  ONLINE
    sdc        -      -      -        -         -      -      -      -  ONLINE

root@dyn-043 ~ > zpool set autoexpand=on testpool
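
A quick sanity check that the property is active (expected output; not captured from this session):

Code:
zpool get autoexpand testpool
NAME      PROPERTY    VALUE   SOURCE
testpool  autoexpand  on      local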

I now resized the virtual disks by 1 GB each:

Code:
root@dyn-043 ~ > dmesg | tail -6
[  494.407516] sd 2:0:0:1: Capacity data has changed
[  494.407962] sd 2:0:0:1: [sdb] 10485760 512-byte logical blocks: (5.37 GB/5.00 GiB)
[  494.408269] sdb: detected capacity change from 4294967296 to 5368709120
[  501.400259] sd 2:0:0:2: Capacity data has changed
[  501.400553] sd 2:0:0:2: [sdc] 10485760 512-byte logical blocks: (5.37 GB/5.00 GiB)
[  501.400675] sdc: detected capacity change from 4294967296 to 5368709120
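
Side note for physical disks: if the kernel does not pick up a capacity change on its own, a SCSI rescan can be forced manually (sdX is a placeholder for the affected device):

Code:
echo 1 > /sys/class/block/sdX/device/rescan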

Still the old numbers:

Code:
root@dyn-043 ~ > zpool list -v testpool
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
testpool   3.75G   114K  3.75G        -         -     0%     0%  1.00x    ONLINE  -
  mirror   3.75G   114K  3.75G        -         -     0%  0.00%      -  ONLINE
    sdb        -      -      -        -         -      -      -      -  ONLINE
    sdc        -      -      -        -         -      -      -      -  ONLINE

Force the change:

Code:
root@dyn-043 ~ > zpool online -e testpool /dev/sdb /dev/sdc

Looks good:

Code:
root@dyn-043 ~ > zpool list -v testpool
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
testpool   4.75G   182K  4.75G        -         -     0%     0%  1.00x    ONLINE  -
  mirror   4.75G   182K  4.75G        -         -     0%  0.00%      -  ONLINE
    sdb        -      -      -        -         -      -      -      -  ONLINE
    sdc        -      -      -        -         -      -      -      -  ONLINE

Maybe that also works for you.
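
Applied to your pool, the equivalent should be (vdev names taken from your zpool status output; untested on my side):

Code:
zpool online -e HDD-POOL C1-S5 C1-S4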
 