Reclaim VM ext4 disk space on thin zvol

mathx

Renowned Member
Jan 15, 2014
Looked around for an answer to this but didn't quite find one (apart from fstrim).

I'm thinking that the VM's ext4 filesystem ate a lot of disk, the files were subsequently deleted, and the zvol remains large despite being thin provisioned.

I didn't have discard on due to an oversight. I'd like to get the space back.
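A quick check from inside the guest shows whether the virtual disk advertises discard at all (DISC-GRAN and DISC-MAX show as zero when it doesn't):

Code:
lsblk --discard /dev/vda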

Inside the VM:

Code:
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda2       984G  375G  559G  41% /

375G used (384,000 MiB). A du on the filesystem shows 413,750 MiB used (a slight discrepancy from df's 375G/384,000 MiB that I'm not sure about, but that's not the issue).
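(Those figures came from the usual commands, roughly:)

Code:
df -h /
du -sm -x /    # summary in MiB, staying on this one filesystem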

Outside the VM:

Code:
rpool/data/vm-547-disk-0  logicalused           1.11T  -
rpool/data/vm-547-disk-0  used                  769G   -
rpool/data/vm-547-disk-0  usedbydataset         664G   -
rpool/data/vm-547-disk-0  usedbyrefreservation  0B     -
rpool/data/vm-547-disk-0  usedbysnapshots       105G   -
rpool/data/vm-547-disk-0  compression           on     inherited from rpool
rpool/data/vm-547-disk-0  compressratio         1.49x  -
rpool/data/vm-547-disk-0  refcompressratio      1.44x  -
rpool/data/vm-547-disk-0  refreservation        none   default
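(The output above is trimmed from something along these lines:)

Code:
zfs get logicalused,used,usedbydataset,usedbyrefreservation,usedbysnapshots,compression,compressratio,refcompressratio,refreservation rpool/data/vm-547-disk-0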

Why is the usedbydataset 664G?

Because of this creation-then-deletion of many files? How can I reclaim that free space in the zvol? fstrim on the host against the filesystem returns immediately, of course.
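For reference, this is the sort of invocation I mean; -v makes fstrim report how much it trimmed (the host-side mountpoint is just illustrative):

Code:
# inside the guest:
fstrim -v /
# or on the host, with the zvol's root partition mounted somewhere:
mount /dev/zvol/rpool/data/vm-547-disk-0-part2 /mnt/trimtest
fstrim -v /mnt/trimtest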

I am guessing I should have had discard on for the VM, though I read that the virtio-scsi driver does not support it (and then later read that it does...).

Made a clone of the zvol to run tests on before messing with the data. fstrim completes immediately and has no effect.
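In case it matters, the clone was made roughly like this (snapshot and clone names are illustrative):

Code:
zfs snapshot rpool/data/vm-547-disk-0@trimtest
zfs clone rpool/data/vm-547-disk-0@trimtest rpool/data/vm-547-trimtest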

My relevant VM config:

Code:
ostype: l26
scsihw: virtio-scsi-pci
virtio0: local-zfs:vm-547-disk-0,cache=writethrough,size=1000G

Yes, I can turn on discard, but I figure that only helps me going forward, not with already-deleted data.
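If I do flip it on, I assume the disk line just gains the flag (or the disk gets moved to scsi0 so the virtio-scsi controller handles the discards), something like:

Code:
virtio0: local-zfs:vm-547-disk-0,cache=writethrough,discard=on,size=1000G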

I think ultimately my problem is that zvols can't be shrunk and I must just copy the zvol, which I've done, but what's curious is this:

If I copy the zvol with dd if=/dev/zvol/rpool/data/vm-547-disk-0 of=/dev/zvol/rpool/data/code-new-disk, then technically it should be the exact same size, with the same used blocks marked.
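(To rule out a botched copy, both device nodes can be checksummed; this assumes the two zvols have the same volsize:)

Code:
md5sum /dev/zvol/rpool/data/vm-547-disk-0 /dev/zvol/rpool/data/code-new-disk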

But this doesn't jibe (even accounting for the compression differences):

Code:
rpool/data/code-new-disk  usedbydataset    250G
rpool/data/code-new-disk  usedbysnapshots  0B
rpool/data/code-new-disk  compressratio    1.22x
rpool/data/code-new-disk  logicalused      304G

Why would a dd from vm-547-disk-0 to code-new-disk result in less space being used if the blocks are all marked used just the same? And if they're not specifically marked as used, does dd not see them? If not, then why can't I reclaim that disk space?
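For comparison's sake, the accounting for both volumes can be pulled side by side:

Code:
zfs get used,usedbydataset,usedbysnapshots,logicalused,compressratio rpool/data/vm-547-disk-0 rpool/data/code-new-disk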

Code:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-4-pve)
pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe)
pve-kernel-helper: 7.1-12
pve-kernel-5.13: 7.1-7
pve-kernel-5.13.19-4-pve: 5.13.19-9
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 15.2.15-pve1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-6
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-3
libpve-guest-common-perl: 4.1-1
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.1-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-2
proxmox-backup-client: 2.1.5-1
proxmox-backup-file-restore: 2.1.5-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-6
pve-cluster: 7.1-3
pve-container: 4.1-4
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-5
pve-ha-manager: 3.3-3
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.1-2
pve-xtermjs: 4.16.0-1
qemu-server: 7.1-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.2-pve1