[SOLVED] Strange ZFS space utilization after PBS restore of a VM

Hi,

I must be going crazy. After setting up a new system and restoring all my VMs onto it from PBS, I noticed the following utilization disparity:
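For context, each VM was restored roughly like this (a sketch from memory; the PBS storage ID, snapshot timestamp, and target storage are placeholders, not the exact values I used):

Code:
# Restore a guest from a Proxmox Backup Server snapshot onto the new pool.
# <pbs> is the PBS storage ID, <timestamp> the snapshot to restore from,
# and <target-storage> the ZFS-backed storage on the new host.
qmrestore <pbs>:backup/vm/200/<timestamp> 200 --storage <target-storage>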

New system:

Code:
zfs list
NAME                                    USED  AVAIL     REFER  MOUNTPOINT
rpool                                   615G   307G      104K  /rpool
rpool/ROOT                             2.29G   307G       96K  /rpool/ROOT
rpool/ROOT/pve-1                       2.29G   307G     2.29G  /
rpool/data                               96K   307G       96K  /rpool/data
rpool/data-encrypted                    613G   307G      204K  /rpool/data-encrypted
rpool/data-encrypted/base-2000-disk-0  43.1G   348G     1.87G  -
rpool/data-encrypted/base-2001-disk-0  30.6G   324G     14.1G  -
rpool/data-encrypted/base-2001-disk-1  8.19G   311G     4.06G  -
rpool/data-encrypted/vm-200-disk-0     5.16G   312G     41.9M  -
rpool/data-encrypted/vm-201-disk-0     41.3G   342G     5.99G  -
rpool/data-encrypted/vm-202-disk-0     41.3G   342G     6.56G  -
rpool/data-encrypted/vm-202-disk-1     51.6G   338G     21.1G  -
rpool/data-encrypted/vm-203-disk-0     41.3G   335G     13.3G  -
rpool/data-encrypted/vm-203-disk-1      103G   397G     13.1G  -
rpool/data-encrypted/vm-204-disk-0     51.6G   329G     29.9G  -
rpool/data-encrypted/vm-204-disk-1     72.2G   370G     8.96G  -
rpool/data-encrypted/vm-205-disk-0     72.2G   340G     39.7G  -
rpool/data-encrypted/vm-205-disk-1     51.6G   359G     13.3M  -

zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool   952G   161G   791G        -         -     0%    16%  1.00x    ONLINE  -

zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:02 with 0 errors on Sun Oct  8 00:24:03 2023
config:

    NAME           STATE     READ WRITE CKSUM
    rpool          ONLINE       0     0     0
      mirror-0     ONLINE       0     0     0
        nvme0n1p3  ONLINE       0     0     0
        nvme1n1p3  ONLINE       0     0     0

errors: No known data errors

So zpool list reports 791G free, but the PVE UI and zfs list report only 307G available. This is a simple mirror, all of the VMs have discard=on, and I just ran fstrim -av in all of them.
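In case it helps, this is what I can run to dig into the per-volume space accounting (just standard ZFS properties; the dataset name is from the listing above):

Code:
# USED on a zvol includes any refreservation, while REFERENCED is only the
# written data, so comparing the two shows where the "missing" space sits.
zfs get -r -t volume volsize,used,referenced,refreservation rpool/data-encrypted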

Here is the output from the old system:

Code:
zfs list
NAME                                    USED  AVAIL     REFER  MOUNTPOINT
rpool                                   153G  1.61T       96K  /rpool
rpool/ROOT                             5.65G  1.61T       96K  /rpool/ROOT
rpool/ROOT/pve-1                       5.65G  1.61T     5.65G  /
rpool/data                               96K  1.61T       96K  /rpool/data
rpool/data-encrypted                    147G  1.61T      200K  /rpool/data-encrypted
rpool/data-encrypted/backup-keys        196K  1.61T      196K  /rpool/data-encrypted/backup-keys
rpool/data-encrypted/base-2000-disk-0  1.57G  1.61T     1.57G  -
rpool/data-encrypted/base-2001-disk-0  12.9G  1.61T     12.9G  -
rpool/data-encrypted/base-2001-disk-1  4.03G  1.61T     4.03G  -
rpool/data-encrypted/vm-200-disk-0     37.7G  1.61T     37.7G  -
rpool/data-encrypted/vm-200-disk-2     7.08M  1.61T     7.08M  -
rpool/data-encrypted/vm-201-disk-0     28.3G  1.61T     28.3G  -
rpool/data-encrypted/vm-201-disk-1     5.91G  1.61T     5.91G  -
rpool/data-encrypted/vm-202-disk-0     12.9G  1.61T     12.9G  -
rpool/data-encrypted/vm-202-disk-1     12.9G  1.61T     12.9G  -
rpool/data-encrypted/vm-204-disk-0     6.29G  1.61T     6.29G  -
rpool/data-encrypted/vm-204-disk-1     18.4G  1.61T     18.4G  -
rpool/data-encrypted/vm-205-disk-0     5.76G  1.61T     5.76G  -

zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  1.81T   153G  1.66T        -         -    19%     8%  1.00x    ONLINE  -

Any idea how to reclaim the space?

Thanks!