Upgrade ZFS pool to larger drives. Increase available capacity.

kamiller42

Had a mirrored ZFS pool on 2x 4TB drives and upgraded to 2x 8TB drives. Created a new pool on the new drives using all available space, then performed a zfs send/recv to clone the data over. Content is intact, but the pool & datasets show 3.56T available, the same amount that was available on the old pool.

The old pool was removed from the system with a zpool export and the drives pulled. The new pool was renamed to the old pool's name via export & import.
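
Roughly the steps I followed (the new pool name and snapshot name below are from memory, so treat them as approximate):
Bash:
zfs snapshot -r hs-pool@migrate
zfs send -R hs-pool@migrate | zfs recv -F hs-pool-new
zpool export hs-pool               # old pool, drives then removed
zpool export hs-pool-new
zpool import hs-pool-new hs-pool   # rename the new pool to the old name
zfs destroy -r hs-pool@migrate     # snapshot deleted after the transfer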

I tried turning autoexpand on and off and running "zpool online -e ..." on the disks. No dice.
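
Roughly what I ran, reconstructed from memory (device names taken from the zpool status output below):
Bash:
zpool set autoexpand=on hs-pool
zpool online -e hs-pool ata-WDC_WD80EFAX-68KNBN0_VAJAM9ML
zpool online -e hs-pool ata-WDC_WD80EFAX-68KNBN0_VAJBD2ML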

How do I get the pool to show (allocate?) all available space, which should be close to 7T, not 3.56T?

Output of various commands:
Bash:
root@vs1:/# zpool get autoexpand
NAME     PROPERTY    VALUE   SOURCE
hs-pool  autoexpand  on      local
root@vs1:/# zpool status
  pool: hs-pool
 state: ONLINE
  scan: none requested
config:

        NAME                                   STATE     READ WRITE CKSUM
        hs-pool                                ONLINE       0     0     0
          mirror-0                             ONLINE       0     0     0
            ata-WDC_WD80EFAX-68KNBN0_VAJAM9ML  ONLINE       0     0     0
            ata-WDC_WD80EFAX-68KNBN0_VAJBD2ML  ONLINE       0     0     0

errors: No known data errors
root@vs1:/# zpool list
NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
hs-pool  7.25T  3.46T  3.79T         -    29%    47%  1.00x  ONLINE  -
root@vs1:/# zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
hs-pool             3.46T  3.56T    96K  /hs-pool
hs-pool/backups     85.4G  3.56T  64.7G  /hs-pool/backups
hs-pool/downloads    158G  3.56T   158G  /hs-pool/downloads
hs-pool/media       2.75T  3.56T  2.75T  /hs-pool/media
hs-pool/users        488G  3.56T   488G  /hs-pool/users
 
Size says 7.25 TB with roughly 3.5 TB used and roughly 3.5 TB available.
Are you sure that the former pool was in fact a mirror? It should have been almost full with 2.75 TB of media ...
 
Check if you have any unexpected snapshots. IIRC, ZFS send/receive creates snapshots (depending on your mode of operation).
Also, have you used compression and/or dedup on the other pool?
That could also explain the differences seen.
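Something like this should show any leftover snapshots and the compression/dedup settings:
Bash:
zfs list -t snapshot -r hs-pool
zfs get -r compression,compressratio,dedup hs-pool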
 
Size says 7.25 TB with roughly 3.5 TB used and roughly 3.5 TB available.
Are you sure that the former pool was in fact a mirror? It should have been almost full with 2.75 TB of media ...
It was a mirror.
Check if you have any unexpected snapshots. IIRC, ZFS send/receive creates snapshots (depending on your mode of operation).
Also, have you used compression and/or dedup on the other pool?
That could also explain the differences seen.
I used a snapshot for the transfer and then deleted it post-transfer.

Guys,

Thank you for responding. I think I was reading the tool incorrectly, specifically "zfs list". I read AVAIL as the total SIZE. When I look at the space using df, I see this.
Bash:
root@vs1:/# df -h
Filesystem               Size  Used Avail Use% Mounted on
hs-pool                  3.6T     0  3.6T   0% /hs-pool
hs-pool/backups          3.7T   65G  3.6T   2% /hs-pool/backups
hs-pool/downloads        3.8T  158G  3.6T   5% /hs-pool/downloads
hs-pool/media            6.4T  2.8T  3.6T  44% /hs-pool/media
hs-pool/users            4.1T  489G  3.6T  12% /hs-pool/users
I see the new sizes & 3.6TB available across the board. I don't understand why the Size varies per dataset (maybe based on the allotment scheme I made 6 years ago), but I'm happy to see 3.6TB.
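
If I'm reading it right, df seems to report each dataset's Size as roughly the space that dataset references plus the pool's free space (e.g. media: 2.8T + 3.6T ≈ 6.4T), which would explain why it varies. A quick way to sanity-check that, I think:
Bash:
# compare referenced + available (in bytes) against the df Size column
zfs get -Hp -o name,property,value referenced,available hs-pool/media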
 
