[SOLVED] Have I broken my ZFS pool? Inconsistent capacities listed

lukyjay

Active Member
Aug 18, 2020
Hi

I have searched the forums, used Google, and asked AI, but I can't seem to figure this out. Forums and Google point toward issues with VM storage, but I am not storing VMs on this pool. AI thinks the metadata on my pool may be corrupt (which I doubt...)

My total physical drive capacity would be ~161T, but given I have 2x raidz2 vdevs I expected ~127T usable.

zpool list -v reports 127T, which seems to confirm that:
Code:
zpool list -v
NAME                                     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pve1storage                              127T   110T  17.3T        -         -    19%    86%  1.00x    ONLINE  -
  raidz2-0                              76.4T  64.2T  12.2T        -         -    17%  84.1%      -    ONLINE
    ata-WDC_WD180EDGZ-11B2DA0_3MGUHTJU  16.4T      -      -        -         -      -      -      -    ONLINE
    ata-WDC_WD180EDGZ-11B2DA0_2NGSKKWH  16.4T      -      -        -         -      -      -      -    ONLINE
    ata-WDC_WD180EDGZ-11B2DA0_2NGEZ7EB  16.4T      -      -        -         -      -      -      -    ONLINE
    ata-WDC_WD120EFBX-68B0EN0_D7HTX5DN  10.9T      -      -        -         -      -      -      -    ONLINE
    ata-ST12000VN0008-2YS101_ZV7066MA   10.9T      -      -        -         -      -      -      -    ONLINE
    ata-ST12000VN0008-2YS101_ZV705QWJ   10.9T      -      -        -         -      -      -      -    ONLINE
    ata-ST16000NT001-3LV101_ZRS20XEV    14.6T      -      -        -         -      -      -      -    ONLINE
  raidz2-1                              50.9T  45.8T  5.15T        -         -    23%  89.9%      -    ONLINE
    ata-ST12000VN0008-2YS101_ZV705Z26   10.9T      -      -        -         -      -      -      -    ONLINE
    ata-ST8000VN004-2M2101_WSD3M6PS     7.28T      -      -        -         -      -      -      -    ONLINE
    ata-ST8000VN004-2M2101_WSD3M6W1     7.28T      -      -        -         -      -      -      -    ONLINE
    ata-ST8000VN004-2M2101_WSD3M63V     7.28T      -      -        -         -      -      -      -    ONLINE
    ata-ST12000VN0008-2YS101_WV70A5K7   10.9T      -      -        -         -      -      -      -    ONLINE
    ata-ST12000NM000J-2TY103_WV701G5B   10.9T      -      -        -         -      -      -      -    ONLINE
    ata-ST12000NM000J-2TY103_WV701G19   10.9T      -      -        -         -      -      -      -    ONLINE

Yet df -h shows only 85T usable/mounted:
Code:
df -h /pve1storage
Filesystem      Size  Used Avail Use% Mounted on
pve1storage      85T   74T   12T  87% /pve1storage

I have confirmed (with AI's help) that the pool is not holding snapshots and isn't being used for anything else. Exporting and re-importing made no difference.
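For anyone wanting to double-check, something along these lines should be enough to rule out snapshots and see where the space is attributed (commands only, output omitted):
Code:
# list any snapshots on the pool (prints nothing if there are none)
zfs list -t snapshot -r pve1storage
# break down what the used space is attributed to at the dataset level
zfs get usedbysnapshots,usedbydataset,usedbychildren,usedbyrefreservation pve1storage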

It's used for my own personal media collection (TV shows, movies, etc) and a Proxmox Backup Server also uses it to store backups of another Proxmox VE host.

I have expanded my pool in the past (autoexpand=on). Is it possible that the mountpoint doesn't reflect the new size? Or could it be something else?
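For completeness, this is roughly how I understand the expansion state can be checked; the online -e step would only matter if EXPANDSZ showed unclaimed space, which it doesn't in the output above:
Code:
# check whether automatic expansion is on and whether any vdev has unclaimed space
zpool get autoexpand,expandsize pve1storage
# only if EXPANDSZ is non-zero: tell ZFS to grow onto the extra space of a replaced disk
zpool online -e pve1storage ata-WDC_WD180EDGZ-11B2DA0_3MGUHTJU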

Thank you so much
J
 
On your ZFS pool you are mixing several drives of different sizes; that doesn't work well.

What does zpool status -v show?
Mixed sizes like that can really only run as mirrors.
 
Here is the output
Code:
zpool status -v
  pool: pve1storage
 state: ONLINE
  scan: scrub repaired 0B in 2 days 07:29:00 with 0 errors on Tue Jul 15 07:53:01 2025
config:

    NAME                                    STATE     READ WRITE CKSUM
    pve1storage                             ONLINE       0     0     0
      raidz2-0                              ONLINE       0     0     0
        ata-WDC_WD180EDGZ-11B2DA0_3MGUHTJU  ONLINE       0     0     0
        ata-WDC_WD180EDGZ-11B2DA0_2NGSKKWH  ONLINE       0     0     0
        ata-WDC_WD180EDGZ-11B2DA0_2NGEZ7EB  ONLINE       0     0     0
        ata-WDC_WD120EFBX-68B0EN0_D7HTX5DN  ONLINE       0     0     0
        ata-ST12000VN0008-2YS101_ZV7066MA   ONLINE       0     0     0
        ata-ST12000VN0008-2YS101_ZV705QWJ   ONLINE       0     0     0
        ata-ST16000NT001-3LV101_ZRS20XEV    ONLINE       0     0     0
      raidz2-1                              ONLINE       0     0     0
        ata-ST12000VN0008-2YS101_ZV705Z26   ONLINE       0     0     0
        ata-ST8000VN004-2M2101_WSD3M6PS     ONLINE       0     0     0
        ata-ST8000VN004-2M2101_WSD3M6W1     ONLINE       0     0     0
        ata-ST8000VN004-2M2101_WSD3M63V     ONLINE       0     0     0
        ata-ST12000VN0008-2YS101_WV70A5K7   ONLINE       0     0     0
        ata-ST12000NM000J-2TY103_WV701G5B   ONLINE       0     0     0
        ata-ST12000NM000J-2TY103_WV701G19   ONLINE       0     0     0

errors: No known data errors

What issue do mismatched drive sizes cause? Wouldn't ZFS just use the smallest drive capacity within each vdev? In my case every drive in raidz2-0 would count as 10.9T and every drive in raidz2-1 as 7.28T.
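As far as I can tell that matches the SIZE column from zpool list above:
Code:
raidz2-0: 7 x 10.9T = 76.3T   (zpool list shows 76.4T)
raidz2-1: 7 x 7.28T = 50.96T  (zpool list shows 50.9T)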
 
Hmm, I see your HDD-only ZFS pool is also over 80% full (87%), so you should stop storing more data on it.
A triple-mirror ZFS special device on SSDs (like the Kingston DC600M) would in future speed up your ZFS metadata a lot; see the sketch below.

I wouldn't set up a mixed-size ZFS pool.
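Something like this, assuming three SSDs (the device names are only placeholders); a three-way mirror keeps the metadata as redundant as the raidz2 data:
Code:
# add a three-way mirrored special vdev for metadata (placeholder SSD ids)
zpool add pve1storage special mirror ata-SSD_1 ata-SSD_2 ata-SSD_3
Note that only metadata written after the special vdev is added will land on it.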
 
The 127 from zpool list is the raw capacity; you need to subtract the parity from it.
So, it looks more like this:
Code:
raidz2-0:  (7*10.9T)-(2*10.9T)=54.5T
raidz2-1:  (7*7.28T)-(2*7.28T)=36.4T
raidz2-0+raidz2-1: 54.5T+36.4T=90.9T

The space usage properties report actual physical space available to the storage pool. The physical space can be different from the total amount of space that any contained datasets can actually use. The amount of space used in a raidz configuration depends on the characteristics of the data being written. In addition, ZFS reserves some space for internal accounting that the zfs(8) command takes into account, but the zpoolprops command does not. For non-full pools of a reasonable size, these effects should be invisible. For small pools, or pools that are close to being completely full, these discrepancies may become more noticeable.
man zpoolprops

So, have a look at: zfs list
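For example (the column selection is just one option):
Code:
# dataset-level view of the space accounting, which is what df is based on
zfs list -o space -r pve1storage
# or simply
zfs list pve1storage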
 
Ah, sorry, I do know this but my math was wrong. We have a newborn in the house and my brain is fried.

What's scary is that Google's paid Gemini Pro model couldn't figure this out and told me definitively that the metadata is corrupt and that I need to rebuild the pool.

Thank you