[SOLVED] After upgrading from Proxmox 8 to 9 my zfs pool seems smaller

After upgrading from Proxmox 8 to 9, my zfs pool looks smaller in the "df -h" output, and the used space is shown as only 128K:

Code:
df -h
Filesystem        Size  Used Avail Use% Mounted on
zmedia            4.4T  128K  4.4T   1% /zmedia
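
In case it helps with diagnosing: I assume "df" only counts the space referenced by the pool's root dataset itself and not by its children, but I'm not sure that explains it. Comparing the mountpoint with the dataset properties directly would be something like this (just my untested guess at the right check):

Code:
# compare what df reports for the mountpoint with what ZFS reports for the dataset
df -h /zmedia
zfs get used,available,referenced zmedia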

The web interface, however, shows the correct used storage.
(Screenshot From 2025-08-05 19-10-28.png: web interface showing the used storage)


The same happens with my other zpool, the one my system is installed on: 128K used.
Code:
Filesystem        Size  Used Avail Use% Mounted on
udev               31G     0   31G   0% /dev
tmpfs             6.2G  2.5M  6.2G   1% /run
rpool/ROOT/pve-1  3.6T  232G  3.3T   7% /
tmpfs              31G   46M   31G   1% /dev/shm
efivarfs          128K   56K   68K  46% /sys/firmware/efi/efivars
tmpfs             5.0M     0  5.0M   0% /run/lock
tmpfs             1.0M     0  1.0M   0% /run/credentials/systemd-journald.service
tmpfs              31G     0   31G   0% /tmp
rpool             3.3T  128K  3.3T   1% /rpool
rpool/var-lib-vz  3.3T  128K  3.3T   1% /var/lib/vz
rpool/ROOT        3.3T  128K  3.3T   1% /rpool/ROOT
rpool/data        3.3T  128K  3.3T   1% /rpool/data
zmedia            4.4T  128K  4.4T   1% /zmedia
/dev/fuse         128M   48K  128M   1% /etc/pve
tmpfs             1.0M     0  1.0M   0% /run/credentials/getty@tty1.service
tmpfs             6.2G  8.0K  6.2G   1% /run/user/0
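
To rule out that some datasets simply did not get mounted after the upgrade, I guess listing only the ZFS mounts together with the sizes the kernel reports (which should be the same numbers "df" sees) would look like this; the findmnt columns are just my assumption of what is useful here:

Code:
# list only ZFS mounts with the sizes the kernel reports to df
findmnt -t zfs -o TARGET,SOURCE,SIZE,USED,AVAIL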

However, when I run "zfs list", the information looks correct:
Code:
root@pve:~# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
rpool                  231G  3.29T   104K  /rpool
rpool/ROOT             231G  3.29T    96K  /rpool/ROOT
rpool/ROOT/pve-1       231G  3.29T   231G  /
rpool/data              96K  3.29T    96K  /rpool/data
rpool/var-lib-vz       104K  3.29T   104K  /var/lib/vz
zmedia                17.4T  4.33T    96K  /zmedia
zmedia/vm-100-disk-0  17.4T  13.3T  8.42T  -
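
If I read this output correctly, nearly all of the space on zmedia sits in the zvol zmedia/vm-100-disk-0 rather than in the mounted root dataset (REFER 96K). A per-dataset breakdown of where the used space comes from should be possible with something like this, though I'm not sure it is the relevant check:

Code:
# break down used space into dataset, children, refreservation and snapshots
zfs list -r -o space zmedia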

Code:
root@pve:~# zpool list -v
NAME                                   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool                                 3.62T   231G  3.40T        -         -     4%     6%  1.00x    ONLINE  -
  mirror-0                            3.62T   231G  3.40T        -         -     4%  6.23%      -    ONLINE
    nvme-eui.0025384141400676-part3   3.64T      -      -        -         -      -      -      -    ONLINE
    nvme-eui.0025384151b3f8db-part3   3.64T      -      -        -         -      -      -      -    ONLINE
zmedia                                21.8T  8.42T  13.4T        -         -     4%    38%  1.00x    ONLINE  -
  mirror-0                            21.8T  8.42T  13.4T        -         -     4%  38.6%      -    ONLINE
    ata-WDC_WUH722424ALE6L4_65G1207L  21.8T      -      -        -         -      -      -      -    ONLINE
    ata-WDC_WUH722424ALE6L4_65G1GSDL  21.8T      -      -        -         -      -      -      -    ONLINE
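
The gap between the 17.4T USED in "zfs list" and the 8.42T ALLOC in "zpool list" makes me think the VM disk is thick-provisioned (refreservation), but that is only a guess on my part; the zvol properties should show it:

Code:
# check whether the zvol reserves its full volsize up front
zfs get volsize,refreservation,used,referenced zmedia/vm-100-disk-0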

I'm not sure how to correct this. How can I make the "df -h" output match the "zfs list" output, since the latter is showing the correct information for my zfs pools? As far as I can remember, the two matched before the upgrade to Proxmox 9.