ZFS pool high allocated space

kevindd992002

I have a single-disk ZFS RAID0 pool for my PVE host. I only have one VM on it, with these disks:

[Screenshot: the VM's Hardware view showing its virtual disks]

The vDisk is configured to be only 64GB and I have a couple of snapshots that I expect are lightweight.

When I check the pve's zfs pool size, I get this:

[Screenshot: the ZFS pool usage shown in the Proxmox GUI]
Why am I seeing 241.52GB of allocated space? How do I trace what's causing this?
 
Code:
root@pve:~# zfs list -t snapshot
NAME                                                 USED  AVAIL  REFER  MOUNTPOINT
rpool/data/vm-101-disk-0@Snapshot_08292024_1031PM    128K      -   176K  -
rpool/data/vm-101-disk-0@Config_restored             128K      -   200K  -
rpool/data/vm-101-disk-0@Snapshot_10292024_850PM     152K      -   224K  -
rpool/data/vm-101-disk-0@Snapshot_12212024_1116AM    128K      -   136K  -
rpool/data/vm-101-disk-0@Snapshot_12212024_1126AM    128K      -   144K  -
rpool/data/vm-101-disk-1@Snapshot_08292024_1031PM   11.6M      -  22.0G  -
rpool/data/vm-101-disk-1@Config_restored            1.77G      -  24.6G  -
rpool/data/vm-101-disk-1@Snapshot_10292024_850PM    54.5G      -  97.3G  -
rpool/data/vm-101-disk-1@Snapshot_12212024_1116AM   3.74G      -  91.5G  -
rpool/data/vm-101-disk-1@Snapshot_12212024_1126AM   1.47G      -  91.6G  -
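
Worth noting: the USED column of an individual snapshot only counts blocks unique to that snapshot, so space shared between several snapshots does not appear in any single row. A dry-run destroy over the whole range shows how much would actually be reclaimed; a sketch using the snapshot names from the listing above:
Code:
# -n = dry run, -v = report the space that would be reclaimed, without deleting anything.
# The % syntax selects every snapshot from the first name through the last.
zfs destroy -nv rpool/data/vm-101-disk-1@Snapshot_08292024_1031PM%Snapshot_12212024_1126AM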
 
So there are snapshots; most are indeed lightweight, but one is using 54.5G.

Let us look at everything with:
Code:
zfs list -o space,lused,refer,lrefer,compressratio

## PLEASE POST OUTPUT IN CODE-EDITOR ONLY - LIKE I HAVE DONE ##
## CHOOSE THE CODE-EDITOR IN THE REPLY FORMATTING BAR MARKED "</>" ##
## THEN POST THE OUTPUT THERE & PRESS CONTINUE ##
 

Code:
NAME                      AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  LUSED  REFER  LREFER  RATIO
rpool                     3.85G   225G        0B    124K             0B       225G   139G   124K     42K  1.24x
rpool/ROOT                3.85G  7.26G        0B    128K             0B      7.26G  6.20G   128K     42K  1.75x
rpool/ROOT/pve-1          3.85G  7.26G        0B   7.26G             0B         0B  6.20G  7.26G   6.20G  1.75x
rpool/data                3.85G   216G        0B    124K             0B       216G   131G   124K     42K  1.22x
rpool/data/vm-101-disk-0  3.85G   904K      736K    168K             0B         0B  1.56M   168K    560K  4.49x
rpool/data/vm-101-disk-1  3.85G   216G      121G   94.2G             0B         0B   131G  94.2G   56.3G  1.22x
rpool/var-lib-vz          3.85G  1.72G        0B   1.72G             0B         0B  2.02G  1.72G   2.02G  2.35x
 
So it appears that:
Code:
NAME                        AVAIL    USED    USEDSNAP    USEDDS
rpool/data/vm-101-disk-1    3.85G   216G    121G        94.2G

This means that the USEDDS (the actual size of the data in the dataset) is 94.2G & the USEDSNAP (the space used by the snapshots) is 121G, giving an approximate total of 216G.
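
For reference, per the ZFS dataset property documentation, USED is the sum of the USED* breakdown columns, which is exactly where that total comes from:
Code:
# USED = USEDSNAP + USEDDS + USEDREFRESERV + USEDCHILD
#      = 121G     + 94.2G  + 0B            + 0B
#      ≈ 215G  (displayed as 216G because each column is rounded independently)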

I don't use ZFS myself, but I'm guessing that if you want to free up space, you will probably have to trim that disk and also remove those snapshots.
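
As a rough sketch of that suggestion (assuming the virtual disk has the Discard option enabled in the VM's Hardware settings so TRIM reaches the zvol; the guest pool name "zroot" below is only the usual FreeBSD default and may differ):
Code:
# Inside a Linux guest:
fstrim -av

# Inside a FreeBSD/ZFS guest - "zroot" is an assumed pool name:
zpool trim zroot

# Back on the Proxmox host, delete snapshots that are no longer needed, e.g.:
qm delsnapshot 101 Snapshot_10292024_850PM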
 
Trim the disk from inside the guest VM OS, correct?

Also, what makes up the approx. total of 216G?
 
Just tried to give you info so you could update yourself on ZFS's own trimming/scrubbing mechanism. If you don't appreciate the help I won't get offended.
No, no, no. Please don't get me wrong. I appreciate the help. I guess I should've used "for what purpose". This is the issue with text-based conversations, lol
 
Hello,

What filesystem is used by the virtual machine/container's guest OS? Some filesystems, e.g. ReFS or btrfs, are copy-on-write themselves and require extra space when the disk image lives on a ZFS filesystem.
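
If it helps, a quick way to check which filesystem the guest root sits on (common examples only; the exact command depends on the guest OS):
Code:
# Linux guest:
df -T /          # filesystem type of the root mount
lsblk -f         # filesystem type per block device

# FreeBSD-based guest (e.g. OPNsense):
mount -p         # lists mounts in fstab format, including the filesystem type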
 
In general, using a CoW filesystem (like ZFS) inside a disk image that is itself stored on a CoW filesystem (also known as CoW on top of CoW) can lead to write amplification. See [1] for an example.

[1] https://forum.proxmox.com/threads/understanding-and-minimizing-zfs-write-amplification.83096/
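
One related thing to compare from both sides, since mismatched block sizes are a common contributor to ZFS-on-ZFS overhead, is the zvol's volblocksize on the host versus the block sizes in use inside the guest (the guest pool name "zroot" is assumed):
Code:
# On the Proxmox host - properties of the zvol backing the vDisk:
zfs get volblocksize,compression,compressratio rpool/data/vm-101-disk-1

# Inside the guest, if it also runs ZFS ("zroot" is the usual default pool name):
zpool get ashift zroot
zfs get recordsize,compression zroot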
I expected some write amplification, but is it really that big? I started with a UFS OPNsense installation but that didn't go well. I had to switch to ZFS after consulting a handful of people about this same concern (CoW on top of CoW), and the conclusion was to go for it.
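
If you want to put a number on it rather than guess, one rough approach is to compare the write rate the guest reports with the write rate the host actually sees on the pool over the same interval (a sketch; the guest pool name "zroot" is assumed):
Code:
# Inside the OPNsense guest - logical writes as the guest sees them:
zpool iostat -v zroot 10

# On the Proxmox host, over the same window - writes actually hitting rpool:
zpool iostat -v rpool 10

# The ratio of host write bandwidth to guest write bandwidth gives a rough
# estimate of the amplification factor for that workload.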