ZFS - used space - normal disk growth?

vbx89ps

New Member
Jan 17, 2025
Hi everyone,

I recently installed Proxmox VE 8.3.2 on a ZFS mirror and set up some containers and VMs.
I've read many threads about how ZFS reports used space, but I'm still not sure whether I set everything up correctly and whether the ZFS volumes will keep growing until the physical disks (SSDs) are full, even though the free space inside the VMs and containers looks fine.

Here is some data for one VM to illustrate the situation:

- VM Disk size: 450G (no discard)
- I use sanoid to regularly take snapshots
- Output of df -Th in the VM:
Bash:
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda1      ext4      442G   85G  335G  21% /

- Output on the host for this VM's volume:
Bash:
root@pve:~# zfs get all dpool/DATA/ctvmvols/vm-2401-disk-0 | grep used
dpool/DATA/ctvmvols/vm-2401-disk-0  used                  628G                      -
dpool/DATA/ctvmvols/vm-2401-disk-0  usedbysnapshots       90.7G                     -
dpool/DATA/ctvmvols/vm-2401-disk-0  usedbydataset         80.4G                     -
dpool/DATA/ctvmvols/vm-2401-disk-0  usedbychildren        0B                        -
dpool/DATA/ctvmvols/vm-2401-disk-0  usedbyrefreservation  457G                      -
dpool/DATA/ctvmvols/vm-2401-disk-0  logicalused           198G                      -
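
In case the relation between the volume size and the reservation matters for the answers, I can also post the output of this (standard properties queried with zfs get):
Bash:
zfs get volsize,volblocksize,refreservation,compression dpool/DATA/ctvmvols/vm-2401-disk-0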


- List of the VM snapshots:
Bash:
root@pve:~# zfs list -rt snapshot dpool/DATA/ctvmvols/vm-2401-disk-0
NAME                                                                     USED  AVAIL  REFER  MOUNTPOINT
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-11_00:00:43_daily   3.82M      -  4.24G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-12_00:00:43_daily   2.75M      -  4.30G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-13_00:00:43_daily   4.92M      -  4.49G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-14_00:00:43_daily    483M      -  5.22G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-15_00:00:43_daily   4.37G      -  85.7G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-16_00:00:43_daily   12.9M      -   163G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-16_09:00:43_hourly  3.41M      -   163G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-16_10:00:39_hourly  1.97M      -   163G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-16_11:00:43_hourly  2.11M      -   163G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-16_12:00:43_hourly  2.14M      -   163G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-16_13:00:28_hourly  14.6M      -   163G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-16_14:00:13_hourly  2.11M      -  79.5G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-16_15:00:11_hourly  1.73M      -  79.5G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-16_16:00:11_hourly  2.00M      -  80.4G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-16_17:00:11_hourly  1.87M      -  80.4G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-16_18:00:11_hourly  1.87M      -  80.4G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-16_19:00:11_hourly  1.86M      -  80.4G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-16_20:00:11_hourly  1.85M      -  80.4G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-16_21:00:11_hourly  1.85M      -  80.4G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-16_22:00:11_hourly  1.85M      -  80.4G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-16_23:00:11_hourly  2.00M      -  80.4G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-17_00:00:11_daily      0B      -  80.4G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-17_00:00:11_hourly     0B      -  80.4G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-17_01:00:11_hourly  1.67M      -  80.4G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-17_02:00:11_hourly  1.81M      -  80.4G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-17_03:00:11_hourly  1.93M      -  80.4G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-17_04:00:11_hourly  2.05M      -  80.4G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-17_05:00:11_hourly  1.80M      -  80.4G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-17_06:00:11_hourly  1.83M      -  80.4G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-17_07:00:11_hourly  2.00M      -  80.4G  -
dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-17_08:00:11_hourly  1.88M      -  80.4G  -

The reported used space is 628G, which is the sum of usedbysnapshots + usedbydataset + usedbyrefreservation (90.7G + 80.4G + 457G ≈ 628G).

Finally, my doubts and questions:

- The refreservation is 457G and, as far as I know, it guarantees that the VM can always use its full assigned disk size. That's fine, but why is the dataset size (80G) then added on top when calculating the used space? Shouldn't that 80G already be part of the 457G?

- I once copied roughly 80G of data into the VM and deleted it a day later. Is that why usedbysnapshots is 90.7G? And if so, once those snapshots reach their retention limit and are deleted, will that space be freed again? (See the dry run I pasted after the questions, which I hope illustrates what I mean.)

- Since I'm not using the "discard" option on the disk, will the physical disks eventually fill up as I create and delete files (normal operation), even though the VM itself always has enough free space?

- Can I use the "discard" option together with "refreservation", or is it mandatory to set "refreservation" to none?

- I tried enabling the "discard" option (followed by a full shutdown and start) and then ran fstrim -av in the VM and afterwards on the host. I didn't notice any change, so maybe I did something wrong (and I don't know whether I actually need that option at all). The exact commands are below, after the questions.

- What are your preferred settings (for disks and storage) when running Proxmox on ZFS?
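
Regarding the 80G copy/delete question above: would a dry run like the following be the right way to see how much space those snapshots are still pinning? (Range syntax from zfs destroy; the snapshot names are taken from the list above.)
Bash:
# -n = dry run, -v = report how much space would be reclaimed
zfs destroy -nv dpool/DATA/ctvmvols/vm-2401-disk-0@autosnap_2025-01-14_00:00:43_daily%autosnap_2025-01-16_13:00:28_hourly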

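To be precise about the discard test mentioned above, this is roughly the sequence I used (the host-side command is just how I checked for freed space; I'm not sure it's the right check):
Bash:
# inside the VM, after ticking "Discard" on the virtual disk and doing a full shutdown + start
fstrim -av

# back on the host, checking whether the zvol actually released space
zfs get used,usedbydataset,logicalused dpool/DATA/ctvmvols/vm-2401-disk-0
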
Thank you for reading this far and for helping to clear up my doubts :)

vbx89ps

P.S.: Let me know if you need more data.