[SOLVED] df shows wrong Size/Used/Avail for zfs pool

klauskurz

Renowned Member
Mar 8, 2013
Hi!
I am using ZFS as the boot and storage system. All packages are at the newest version.

df -h shows wrong Size/Used/Avail values; in fact, Size and Avail are identical.
Code:
zpool list -v
NAME            SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool           476G  76.5G   400G        -         -     4%    16%  1.00x    ONLINE  -
  mirror        476G  76.5G   400G        -         -     4%  16.1%      -  ONLINE
    nvme0n1p2      -      -      -        -         -      -      -      -  ONLINE
    nvme1n1p2      -      -      -        -         -      -      -      -  ONLINE
Code:
zfs list
NAME                              USED  AVAIL     REFER  MOUNTPOINT
rpool                            80.2G   381G       96K  /rpool
rpool/ROOT                       2.33G   381G       96K  /rpool/ROOT
rpool/ROOT/pve-1                 2.33G   381G     2.33G  /
rpool/backup                     73.3G   381G      120K  /rpool/backup
rpool/backup/v01                 10.7G   381G       96K  /rpool/backup/v01
rpool/backup/v01/vm-1001-disk-0  10.7G   381G     3.68G  -
rpool/backup/v02                 4.18G   381G       96K  /rpool/backup/v02
rpool/backup/v02/vm-1002-disk-0  4.18G   381G     3.87G  -
rpool/backup/v51                 51.8G   381G       96K  /rpool/backup/v51
rpool/backup/v51/vm-8051-disk-0  51.8G   381G     40.9G  -
rpool/backup/v81                 6.61G   381G       96K  /rpool/backup/v81
rpool/backup/v81/vm-3081-disk-0  6.61G   381G     5.52G  -
rpool/data                         96K   381G       96K  /rpool/data
rpool/swap                       4.25G   385G     14.6M  -
rpool/thin                        205M   381G       96K  /rpool/thin
rpool/thin/vm-100-disk-0          158M   381G     40.7G  -
rpool/thin/vm-100-disk-1         2.52M   381G     2.52M  -
rpool/thin/vm-101-disk-0         44.2M   381G     5.51G  -
Code:
df -h /rpool/*
Filesystem      Size  Used Avail Use% Mounted on
rpool/backup    381G  128K  381G   1% /rpool/backup
rpool/data      381G  128K  381G   1% /rpool/data
rpool/ROOT      381G  128K  381G   1% /rpool/ROOT
rpool/thin      381G  128K  381G   1% /rpool/thin

So the Proxmox interface also shows wrong values:

Usage 0.05% (204.53 MiB of 381.14 GiB)

Any help is very welcome.
Code:
pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.41-1-pve)
pve-manager: 6.2-4 (running version: 6.2-4/9824574a)
pve-kernel-5.4: 6.2-2
pve-kernel-helper: 6.2-2
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.41-1-pve: 5.4.41-1
pve-kernel-4.15: 5.4-9
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-2-pve: 5.0.21-7
pve-kernel-4.15.18-21-pve: 4.15.18-48
ceph: 12.2.13-pve1
ceph-fuse: 12.2.13-pve1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 2.0.1-1+pve8
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-1
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-2
libpve-guest-common-perl: 3.0-10
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.2-1
lxcfs: 4.0.3-pve2
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-1
pve-cluster: 6.1-8
pve-container: 3.1-6
pve-docs: 6.2-4
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.1-2
pve-firmware: 3.1-1
pve-ha-manager: 3.0-9
pve-i18n: 2.1-2
pve-qemu-kvm: 5.0.0-2
pve-xtermjs: 4.3.0-1
pve-zsync: 2.0-3
qemu-server: 6.2-2
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.4-pve1
Code:
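# snapshot listing; presumably produced by something like:
#   zfs list -t snapshot -o name,used,refer,lrefer,ratio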
NAME                                                                   USED     REFER  LREFER  RATIO
rpool/backup/v01/vm-1001-disk-0@autosnap_2020-06-04_20:00:00_yearly      0B     3.68G   4.50G  1.23x
rpool/backup/v01/vm-1001-disk-0@autosnap_2020-06-04_20:00:00_monthly     0B     3.68G   4.50G  1.23x
rpool/backup/v01/vm-1001-disk-0@autosnap_2020-06-04_20:00:00_daily       0B     3.68G   4.50G  1.23x
rpool/backup/v01/vm-1001-disk-0@autosnap_2020-06-05_00:00:00_daily     291M     3.68G   4.50G  1.23x
rpool/backup/v01/vm-1001-disk-0@autosnap_2020-06-06_00:00:00_daily     354M     3.68G   4.50G  1.22x
rpool/backup/v01/vm-1001-disk-0@autosnap_2020-06-06_23:00:00_hourly    218M     3.68G   4.50G  1.22x
rpool/backup/v01/vm-1001-disk-0@autosnap_2020-06-07_00:00:00_daily       0B     3.68G   4.50G  1.22x
rpool/backup/v01/vm-1001-disk-0@autosnap_2020-06-07_00:00:00_hourly      0B     3.68G   4.50G  1.22x
rpool/backup/v01/vm-1001-disk-0@autosnap_2020-06-07_01:00:00_hourly    200M     3.68G   4.50G  1.22x
rpool/backup/v01/vm-1001-disk-0@autosnap_2020-06-07_02:00:00_hourly    201M     3.68G   4.50G  1.22x
rpool/backup/v01/vm-1001-disk-0@autosnap_2020-06-07_03:00:00_hourly    200M     3.68G   4.50G  1.22x
rpool/backup/v01/vm-1001-disk-0@autosnap_2020-06-07_04:00:00_hourly    208M     3.68G   4.50G  1.22x
rpool/backup/v01/vm-1001-disk-0@autosnap_2020-06-07_22:00:00_hourly      0B     3.68G   4.50G  1.23x
rpool/backup/v02/vm-1002-disk-0@autosnap_2020-06-04_15:45:00_yearly    392K     3.82G   4.75G  1.25x
rpool/backup/v02/vm-1002-disk-0@autosnap_2020-06-04_15:45:00_monthly     0B     3.82G   4.75G  1.25x
rpool/backup/v02/vm-1002-disk-0@autosnap_2020-06-04_15:45:00_daily       0B     3.82G   4.75G  1.25x
rpool/backup/v02/vm-1002-disk-0@autosnap_2020-06-05_00:00:00_daily    34.8M     3.83G   4.76G  1.25x
rpool/backup/v02/vm-1002-disk-0@autosnap_2020-06-06_00:00:00_daily    35.6M     3.84G   4.77G  1.25x
rpool/backup/v02/vm-1002-disk-0@autosnap_2020-06-06_23:00:00_hourly   8.29M     3.86G   4.79G  1.25x
rpool/backup/v02/vm-1002-disk-0@autosnap_2020-06-07_00:00:00_daily       0B     3.86G   4.79G  1.25x
rpool/backup/v02/vm-1002-disk-0@autosnap_2020-06-07_00:00:00_hourly      0B     3.86G   4.79G  1.25x
rpool/backup/v02/vm-1002-disk-0@autosnap_2020-06-07_01:00:00_hourly   1.09M     3.86G   4.79G  1.24x
rpool/backup/v02/vm-1002-disk-0@autosnap_2020-06-07_02:00:00_hourly   1.09M     3.86G   4.79G  1.24x
rpool/backup/v02/vm-1002-disk-0@autosnap_2020-06-07_03:00:00_hourly   1.09M     3.86G   4.79G  1.24x
rpool/backup/v02/vm-1002-disk-0@autosnap_2020-06-07_04:00:00_hourly   1.09M     3.86G   4.79G  1.24x
rpool/backup/v02/vm-1002-disk-0@autosnap_2020-06-07_05:00:00_hourly   1.10M     3.86G   4.79G  1.24x
rpool/backup/v02/vm-1002-disk-0@autosnap_2020-06-07_22:00:00_hourly      0B     3.87G   4.80G  1.24x
rpool/backup/v51/vm-8051-disk-0@autosnap_2020-06-05_07:45:00_yearly      0B     40.4G   50.0G  1.24x
rpool/backup/v51/vm-8051-disk-0@autosnap_2020-06-05_07:45:00_monthly     0B     40.4G   50.0G  1.24x
rpool/backup/v51/vm-8051-disk-0@autosnap_2020-06-05_07:45:00_daily       0B     40.4G   50.0G  1.24x
rpool/backup/v51/vm-8051-disk-0@autosnap_2020-06-06_00:00:00_daily     792M     40.6G   50.1G  1.24x
rpool/backup/v51/vm-8051-disk-0@autosnap_2020-06-06_13:00:00_hourly    653M     40.7G   50.3G  1.24x
rpool/backup/v51/vm-8051-disk-0@autosnap_2020-06-06_23:00:00_hourly    367M     40.7G   50.3G  1.24x
rpool/backup/v51/vm-8051-disk-0@autosnap_2020-06-07_00:00:00_daily       0B     40.7G   50.3G  1.24x
rpool/backup/v51/vm-8051-disk-0@autosnap_2020-06-07_00:00:00_hourly      0B     40.7G   50.3G  1.24x
rpool/backup/v51/vm-8051-disk-0@autosnap_2020-06-07_01:00:00_hourly    218M     40.7G   50.3G  1.24x
rpool/backup/v51/vm-8051-disk-0@autosnap_2020-06-07_02:00:00_hourly    204M     40.7G   50.3G  1.24x
rpool/backup/v51/vm-8051-disk-0@autosnap_2020-06-07_03:00:00_hourly    193M     40.7G   50.3G  1.24x
rpool/backup/v51/vm-8051-disk-0@autosnap_2020-06-07_04:00:00_hourly    166M     40.7G   50.3G  1.24x
rpool/backup/v51/vm-8051-disk-0@autosnap_2020-06-07_05:00:00_hourly    156M     40.7G   50.3G  1.24x
rpool/backup/v51/vm-8051-disk-0@autosnap_2020-06-07_06:00:00_hourly    154M     40.7G   50.3G  1.24x
rpool/backup/v51/vm-8051-disk-0@autosnap_2020-06-07_07:00:00_hourly    166M     40.7G   50.3G  1.24x
rpool/backup/v51/vm-8051-disk-0@autosnap_2020-06-07_08:00:00_hourly    179M     40.8G   50.3G  1.24x
rpool/backup/v51/vm-8051-disk-0@autosnap_2020-06-07_22:00:00_hourly      0B     40.9G   50.6G  1.24x
rpool/backup/v81/vm-3081-disk-0@autosnap_2020-06-07_07:00:00_yearly      0B     5.52G   6.64G  1.20x
rpool/backup/v81/vm-3081-disk-0@autosnap_2020-06-07_07:00:00_monthly     0B     5.52G   6.64G  1.20x
rpool/backup/v81/vm-3081-disk-0@autosnap_2020-06-07_07:00:00_daily       0B     5.52G   6.64G  1.20x
rpool/backup/v81/vm-3081-disk-0@autosnap_2020-06-07_07:00:00_hourly      0B     5.52G   6.64G  1.20x
rpool/backup/v81/vm-3081-disk-0@autosnap_2020-06-07_08:00:00_hourly   25.3M     5.52G   6.64G  1.20x
rpool/backup/v81/vm-3081-disk-0@autosnap_2020-06-07_09:00:00_hourly   18.0M     5.52G   6.64G  1.20x
rpool/backup/v81/vm-3081-disk-0@autosnap_2020-06-07_10:00:00_hourly   18.2M     5.52G   6.64G  1.20x
rpool/backup/v81/vm-3081-disk-0@autosnap_2020-06-07_11:00:00_hourly   34.5M     5.52G   6.64G  1.21x
 
Hi,
could you elaborate on where the wrong values are shown in Proxmox and which values are wrong?

I think the reason is that df only gives the USEDDS value (use zfs list -o space to see what I'm talking about), so the size of child datasets (e.g. rpool/backup/v81/vm-3081-disk-0) is not included.
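For example, to see that breakdown for one of your datasets (using rpool/backup from your output; the columns are the documented shorthand of zfs list -o space):
Code:
# USEDDS is the data of the dataset itself (roughly what df shows as Used),
# USEDCHILD the space consumed by child datasets, USEDSNAP by snapshots
zfs list -o space rpool/backup
# columns: NAME  AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD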

Code:
zfs list
NAME                              USED  AVAIL     REFER  MOUNTPOINT
---snip---
rpool/thin                        205M   381G       96K  /rpool/thin
---snip---
So the Proxmox interface also shows wrong values:

Usage 0.05% (204.53 MiB of 381.14 GiB)

Isn't this the same value as ZFS reports here?
 
Hi Fabian,
thank you for the quick answer.

Isn't this the same value as ZFS reports here?

Yes, the "AVAIL" in zfs list is correct and matches "Avail" in df: 381G.
The "Size" in df is wrong (381G); it should be something like 476 GB.
The size in the web interface is also wrong: (204.53 MiB of 381.14 GiB), which leads to 0.05% usage.
zpool reports Size "476G" and zfs list reports Avail "381G", which works out to roughly 95 GB used. The display should therefore be approximately
~ 20% (95 GiB of 476 GiB).
Instead, the display is
0.05% (204.53 MiB of 381.14 GiB).

PS: Just checked on a different system: the same problem with the ZFS size exists there as well.
 

I see what you mean now. One could display the pool size minus the used space, but knowing the available space for the dataset is much more useful, no? Note also that a ZFS storage in PVE is a ZFS file system, not a pool, so the used value does not include space used by other datasets. The total size shown in the web GUI is calculated as used + available; no df is involved there. Also relevant, from the zpool man page:
Code:
free    The amount of free space available in the pool.  By contrast, the zfs(8) available property describes how much new data can be written to
        ZFS filesystems/volumes.  The zpool free property is not generally useful for this purpose, and can be substantially more than the zfs
        available space.  This discrepancy is due to several factors, including raidz parity; zfs reservation, quota, refreservation, and refquota
        properties; and space set aside by spa_slop_shift (see zfs-module-parameters(5) for more information).


That said, when you are interested in the usage for your pools, you can go to [NODE] > Disks > ZFS in the GUI.
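For illustration, this is roughly how the figure for the rpool/thin storage comes about, plugging in the values from your zfs list output above:
Code:
# total shown in the GUI = used + available of the dataset
#   rpool/thin: USED = 205M (204.53 MiB), AVAIL = 381G
#   total = 204.53 MiB + ~380.9 GiB = ~381.14 GiB
#   usage = 204.53 MiB / 381.14 GiB = ~0.05%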
 
OK, that makes perfect sense.

Still, I think the display is misleading on ZFS:

Usage-ZFS-01.png

while:

Usage-ZFS-02.png

Usage-ZFS-03.png

...and if it is a directory storage, also on ZFS (rpool/ROOT/pve-1), the display seems to be right:

Usage-ZFS-04.png Usage-ZFS-05.png

Maybe a topic for an enhancement?
 
Hi there!
Thank you all for the answers, which led me to the right interpretation of the numbers for ZFS filesystems.
It was my mistake. Everything is OK.

Explanation:

All the zvols in rpool/thin are clones from rpool/backup, so they take up hardly any space of their own, which is correctly reported by ZFS and the web interface.

Code:
zfs get origin rpool/thin/vm-100-disk-0
rpool/thin/vm-100-disk-0  origin    rpool/backup/v51/vm-8051-disk-0@autosnap_2020-06-06_13:00:00_hourly
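Just for reference, a clone like this is created from a snapshot roughly as follows (not necessarily the exact command that was used here):
Code:
# create a writable, space-efficient clone of the backup snapshot;
# it only starts using space as it diverges from its origin
zfs clone rpool/backup/v51/vm-8051-disk-0@autosnap_2020-06-06_13:00:00_hourly rpool/thin/vm-100-disk-0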

To see the size of rpool/backup in the web interface, I added the following to /etc/pve/storage.cfg:
Code:
zfspool: zpool-pve-backup
        pool rpool/backup
        content images
        sparse 1
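The same storage can also be added on the command line instead of editing storage.cfg by hand, presumably with something like:
Code:
# register rpool/backup as a sparse zfspool storage for disk images
pvesm add zfspool zpool-pve-backup --pool rpool/backup --content images --sparse 1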

So for rpool/backup it shows:

1591708664341.png

Best regards
Klaus
 
