ZFS Pool Usage Reporting Higher than Actual VM Disk Usage

omerta747

New Member
Aug 29, 2025
Hello everyone,
I am experiencing an issue with my ZFS pool usage reporting.
My ZFS pool "Crucial" is showing 71.69% used (663.64 GB of 925.73 GB). However, this pool only hosts 3 virtual machine disks:
  • VM100: 274.88 GB
  • VM101: 90.19 GB
  • VM102: 90.19 GB
This gives a total of 455.26 GB, which is significantly less than the 663 GB reported as used by ZFS.
I would like to understand why ZFS is reporting this discrepancy and how I can free up space.

Questions:

  1. Could this be related to snapshots, ZFS metadata, or some other overhead?
  2. How can I properly check where the space is going?
  3. What would be the correct way to reclaim/optimize usage without risking data loss?

Current setup:

  • Proxmox VE version: pve-manager/8.2.2
  • ZFS version: zfs-2.2.3-pve2, pool created on SSDs as a single raidz1 vdev (raidz1-0)
  • Output of zpool list, zfs list -o space, and zpool status: see the attached screenshots
 

Attachments

  • Captura de pantalla 2025-08-29 a la(s) 9.03.59 a. m..png (11.4 KB)
  • Captura de pantalla 2025-08-29 a la(s) 9.04.35 a. m..png (30 KB)
  • Captura de pantalla 2025-08-29 a la(s) 9.04.48 a. m..png (30 KB)
VMs are stored in datasets of type volume (zvol), which provide block devices. In any raidz pool they need to store parity blocks as well. That is most likely what eats up the additional space. How much depends on the raidzX level, the volblocksize of the zvol, and the ashift.

See https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_zfs_raid_considerations
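
If you want to check those three factors quickly, something along these lines should show them (just a sketch, using the pool and zvol names from your screenshots):

Code:
# raidz level and vdev layout of the pool
zpool status ZFS-SSD
# block size used on the physical disks (ashift=12 means 4K sectors)
zpool get ashift ZFS-SSD
# block size of the zvol itself
zfs get volblocksize ZFS-SSD/vm-100-disk-0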

If you run zfs get all ZFS-SSD/vm-100-disk-0 you will get quite a lot of details. If you post the output, please copy & paste it into [CODE][/CODE] blocks or use the format buttons of the editor (</>)
 
Hi Aaron,

Thank you for pointing me in the right direction. I ran the command as suggested; here are the results for the three zvols:

Code:
root@zeus:~# zfs get all ZFS-SSD/vm-100-disk-0
NAME                   PROPERTY              VALUE                     SOURCE
ZFS-SSD/vm-100-disk-0  type                  volume                    -
ZFS-SSD/vm-100-disk-0  creation              Wed Nov 20 14:15 2024     -
ZFS-SSD/vm-100-disk-0  used                  407G                      -
ZFS-SSD/vm-100-disk-0  available             496G                      -
ZFS-SSD/vm-100-disk-0  referenced            147G                      -
ZFS-SSD/vm-100-disk-0  compressratio         1.62x                     -
ZFS-SSD/vm-100-disk-0  reservation           none                      default
ZFS-SSD/vm-100-disk-0  volsize               256G                      local
ZFS-SSD/vm-100-disk-0  volblocksize          16K                       default
ZFS-SSD/vm-100-disk-0  checksum              on                        default
ZFS-SSD/vm-100-disk-0  compression           on                        inherited from ZFS-SSD
ZFS-SSD/vm-100-disk-0  readonly              off                       default
ZFS-SSD/vm-100-disk-0  createtxg             783                       -
ZFS-SSD/vm-100-disk-0  copies                1                         default
ZFS-SSD/vm-100-disk-0  refreservation        260G                      received
ZFS-SSD/vm-100-disk-0  guid                  2875474721549488941       -
ZFS-SSD/vm-100-disk-0  primarycache          all                       default
ZFS-SSD/vm-100-disk-0  secondarycache        all                       default
ZFS-SSD/vm-100-disk-0  usedbysnapshots       54.3M                     -
ZFS-SSD/vm-100-disk-0  usedbydataset         147G                      -
ZFS-SSD/vm-100-disk-0  usedbychildren        0B                        -
ZFS-SSD/vm-100-disk-0  usedbyrefreservation  260G                      -
ZFS-SSD/vm-100-disk-0  logbias               latency                   default
ZFS-SSD/vm-100-disk-0  objsetid              5133                      -
ZFS-SSD/vm-100-disk-0  dedup                 off                       default
ZFS-SSD/vm-100-disk-0  mlslabel              none                      default
ZFS-SSD/vm-100-disk-0  sync                  standard                  default
ZFS-SSD/vm-100-disk-0  refcompressratio      1.62x                     -
ZFS-SSD/vm-100-disk-0  written               22.1M                     -
ZFS-SSD/vm-100-disk-0  logicalused           192G                      -
ZFS-SSD/vm-100-disk-0  logicalreferenced     192G                      -
ZFS-SSD/vm-100-disk-0  volmode               default                   default
ZFS-SSD/vm-100-disk-0  snapshot_limit        none                      default
ZFS-SSD/vm-100-disk-0  snapshot_count        none                      default
ZFS-SSD/vm-100-disk-0  snapdev               hidden                    default
ZFS-SSD/vm-100-disk-0  context               none                      default
ZFS-SSD/vm-100-disk-0  fscontext             none                      default
ZFS-SSD/vm-100-disk-0  defcontext            none                      default
ZFS-SSD/vm-100-disk-0  rootcontext           none                      default
ZFS-SSD/vm-100-disk-0  redundant_metadata    all                       default
ZFS-SSD/vm-100-disk-0  encryption            off                       default
ZFS-SSD/vm-100-disk-0  keylocation           none                      default
ZFS-SSD/vm-100-disk-0  keyformat             none                      default
ZFS-SSD/vm-100-disk-0  pbkdf2iters           0                         default
ZFS-SSD/vm-100-disk-0  snapshots_changed     Fri Aug 29  8:12:23 2025  -

Code:
root@zeus:~# zfs get all ZFS-SSD/vm-101-disk-0
NAME                   PROPERTY              VALUE                     SOURCE
ZFS-SSD/vm-101-disk-0  type                  volume                    -
ZFS-SSD/vm-101-disk-0  creation              Wed Nov 20 14:56 2024     -
ZFS-SSD/vm-101-disk-0  used                  100G                      -
ZFS-SSD/vm-101-disk-0  available             321G                      -
ZFS-SSD/vm-101-disk-0  referenced            14.9G                     -
ZFS-SSD/vm-101-disk-0  compressratio         1.45x                     -
ZFS-SSD/vm-101-disk-0  reservation           none                      default
ZFS-SSD/vm-101-disk-0  volsize               84G                       local
ZFS-SSD/vm-101-disk-0  volblocksize          16K                       default
ZFS-SSD/vm-101-disk-0  checksum              on                        default
ZFS-SSD/vm-101-disk-0  compression           on                        inherited from ZFS-SSD
ZFS-SSD/vm-101-disk-0  readonly              off                       default
ZFS-SSD/vm-101-disk-0  createtxg             1269                      -
ZFS-SSD/vm-101-disk-0  copies                1                         default
ZFS-SSD/vm-101-disk-0  refreservation        85.3G                     received
ZFS-SSD/vm-101-disk-0  guid                  2274887858380771296       -
ZFS-SSD/vm-101-disk-0  primarycache          all                       default
ZFS-SSD/vm-101-disk-0  secondarycache        all                       default
ZFS-SSD/vm-101-disk-0  usedbysnapshots       12.4M                     -
ZFS-SSD/vm-101-disk-0  usedbydataset         14.9G                     -
ZFS-SSD/vm-101-disk-0  usedbychildren        0B                        -
ZFS-SSD/vm-101-disk-0  usedbyrefreservation  85.3G                     -
ZFS-SSD/vm-101-disk-0  logbias               latency                   default
ZFS-SSD/vm-101-disk-0  objsetid              6407                      -
ZFS-SSD/vm-101-disk-0  dedup                 off                       default
ZFS-SSD/vm-101-disk-0  mlslabel              none                      default
ZFS-SSD/vm-101-disk-0  sync                  standard                  default
ZFS-SSD/vm-101-disk-0  refcompressratio      1.45x                     -
ZFS-SSD/vm-101-disk-0  written               8.01M                     -
ZFS-SSD/vm-101-disk-0  logicalused           18.8G                     -
ZFS-SSD/vm-101-disk-0  logicalreferenced     18.7G                     -
ZFS-SSD/vm-101-disk-0  volmode               default                   default
ZFS-SSD/vm-101-disk-0  snapshot_limit        none                      default
ZFS-SSD/vm-101-disk-0  snapshot_count        none                      default
ZFS-SSD/vm-101-disk-0  snapdev               hidden                    default
ZFS-SSD/vm-101-disk-0  context               none                      default
ZFS-SSD/vm-101-disk-0  fscontext             none                      default
ZFS-SSD/vm-101-disk-0  defcontext            none                      default
ZFS-SSD/vm-101-disk-0  rootcontext           none                      default
ZFS-SSD/vm-101-disk-0  redundant_metadata    all                       default
ZFS-SSD/vm-101-disk-0  encryption            off                       default
ZFS-SSD/vm-101-disk-0  keylocation           none                      default
ZFS-SSD/vm-101-disk-0  keyformat             none                      default
ZFS-SSD/vm-101-disk-0  pbkdf2iters           0                         default
ZFS-SSD/vm-101-disk-0  snapshots_changed     Fri Aug 29  8:12:38 2025  -


Code:
root@zeus:~# zfs get all ZFS-SSD/vm-102-disk-0
NAME                   PROPERTY              VALUE                     SOURCE
ZFS-SSD/vm-102-disk-0  type                  volume                    -
ZFS-SSD/vm-102-disk-0  creation              Wed Nov 20 15:24 2024     -
ZFS-SSD/vm-102-disk-0  used                  119G                      -
ZFS-SSD/vm-102-disk-0  available             321G                      -
ZFS-SSD/vm-102-disk-0  referenced            33.8G                     -
ZFS-SSD/vm-102-disk-0  compressratio         1.75x                     -
ZFS-SSD/vm-102-disk-0  reservation           none                      default
ZFS-SSD/vm-102-disk-0  volsize               84G                       local
ZFS-SSD/vm-102-disk-0  volblocksize          16K                       default
ZFS-SSD/vm-102-disk-0  checksum              on                        default
ZFS-SSD/vm-102-disk-0  compression           on                        inherited from ZFS-SSD
ZFS-SSD/vm-102-disk-0  readonly              off                       default
ZFS-SSD/vm-102-disk-0  createtxg             1618                      -
ZFS-SSD/vm-102-disk-0  copies                1                         default
ZFS-SSD/vm-102-disk-0  refreservation        85.3G                     received
ZFS-SSD/vm-102-disk-0  guid                  15789919175917899947      -
ZFS-SSD/vm-102-disk-0  primarycache          all                       default
ZFS-SSD/vm-102-disk-0  secondarycache        all                       default
ZFS-SSD/vm-102-disk-0  usedbysnapshots       13.6M                     -
ZFS-SSD/vm-102-disk-0  usedbydataset         33.8G                     -
ZFS-SSD/vm-102-disk-0  usedbychildren        0B                        -
ZFS-SSD/vm-102-disk-0  usedbyrefreservation  85.3G                     -
ZFS-SSD/vm-102-disk-0  logbias               latency                   default
ZFS-SSD/vm-102-disk-0  objsetid              1029                      -
ZFS-SSD/vm-102-disk-0  dedup                 off                       default
ZFS-SSD/vm-102-disk-0  mlslabel              none                      default
ZFS-SSD/vm-102-disk-0  sync                  standard                  default
ZFS-SSD/vm-102-disk-0  refcompressratio      1.75x                     -
ZFS-SSD/vm-102-disk-0  written               6.51M                     -
ZFS-SSD/vm-102-disk-0  logicalused           47.8G                     -
ZFS-SSD/vm-102-disk-0  logicalreferenced     47.8G                     -
ZFS-SSD/vm-102-disk-0  volmode               default                   default
ZFS-SSD/vm-102-disk-0  snapshot_limit        none                      default
ZFS-SSD/vm-102-disk-0  snapshot_count        none                      default
ZFS-SSD/vm-102-disk-0  snapdev               hidden                    default
ZFS-SSD/vm-102-disk-0  context               none                      default
ZFS-SSD/vm-102-disk-0  fscontext             none                      default
ZFS-SSD/vm-102-disk-0  defcontext            none                      default
ZFS-SSD/vm-102-disk-0  rootcontext           none                      default
ZFS-SSD/vm-102-disk-0  redundant_metadata    all                       default
ZFS-SSD/vm-102-disk-0  encryption            off                       default
ZFS-SSD/vm-102-disk-0  keylocation           none                      default
ZFS-SSD/vm-102-disk-0  keyformat             none                      default
ZFS-SSD/vm-102-disk-0  pbkdf2iters           0                         default
ZFS-SSD/vm-102-disk-0  snapshots_changed     Fri Aug 29  8:13:15 2025  -
 
Doesn't even look too bad. One more thing you need to be aware of is that `zpool` will show you raw storage and `zfs` the usable space. As in, IIUC, you have 3x 480G SSDs in that raidz1 pool.
The overall Used + AVAIL in the `zfs list` output for the pool itself is around ~860G.
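
To see that raw vs. usable difference side by side, something like this works (just a sketch with the pool name from your output):

Code:
# raw capacity across all member disks, parity included
zpool list ZFS-SSD
# usable space as the datasets see it, after parity
zfs list -o space ZFS-SSD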

With one disk of parity in mind, that looks okay. It would also be possible to set the number of copies within the pool itself to a higher value than the default of 1, which would of course lead to less usable space.
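
Not something you need here, but just for illustration, raising the copy count would look like this; note that it only affects data written after the property is set:

Code:
# keep two copies of every block on top of the raidz parity
# (applies only to newly written data)
zfs set copies=2 ZFS-SSD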

I am curious, what is the ashift of the pool? It defines the block size used on the physical disks:
Code:
zpool get ashift ZFS-SSD
 
That makes sense. My main concern is that I’ve observed the usage percentage on this pool slowly creeping up over time, and I’m worried about hitting the 80% threshold where performance can start to degrade.


Here is the ashift value as requested:

Code:
root@zeus:~# zpool get ashift ZFS-SSD
NAME     PROPERTY  VALUE   SOURCE
ZFS-SSD  ashift    12      local
 

Attachments

  • Captura de pantalla 2025-08-29 a la(s) 9.32.39 a. m..png (61.8 KB)
and I’m worried about hitting the 80% threshold where performance can start to degrade.
Keep in mind that this stems from a time when all we had were HDDs. Given that ZFS is copy-on-write, the data will fragment over time, and if the HDD is full, it will need more time to find unused space on the disk. With SSDs, where the seek time is practically zero, I do not think the 80% rule of thumb is as important anymore. You still want to keep some free space and not fill up the pool completely!
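
If you want to keep an eye on it over time, a quick way (just a suggestion) is:

Code:
# pool capacity and fragmentation at a glance
zpool list -o name,size,allocated,free,capacity,fragmentation ZFS-SSD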
 
Thanks, that clarification helps a lot. I wasn’t aware the 80% guideline was mostly relevant for spinning disks. Good to know that with SSDs the impact is smaller, but I’ll still make sure to keep enough free space and avoid filling the pool completely.