Volume with ZPOOL CAP at 96% that won't reduce, even though space was freed inside the VM

Blackpuck86
Jun 30, 2025
I would greatly appreciate some help. I've been searching for many hours and attempting many different solutions, but no luck so far.

I have an OpenMediaVault virtual machine on PVE with an 8TB drive allocated to storage, and I can't seem to free up its space for Proxmox.

The first time I ran into this, the disk also filled up to 100% (even though I deleted files inside the VM) and prevented the VM from starting until it was disconnected. I'm afraid the same thing will happen to this disk too. The virtual machine itself is on a separate disk and has plenty of space.
Here are the relevant numbers:
zpool list

NAME       SIZE   ALLOC  FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
WD-80-8TB  7.27T  7.03T  240G  -        -          12%  96%  1.00x  ONLINE  -
zfs list

NAME                     USED   AVAIL  REFER  MOUNTPOINT
WD-80-8TB                7.14T   299M    96K  /WD-80-8TB
WD-80-8TB/vm-102-disk-0  7.14T   112G  7.03T  -

From inside the PVE GUI:


Enabled: Yes
Active: Yes
Content: Disk image, Container
Type: ZFS
Usage: 100.00% (7.85 TB of 7.85 TB)


scan: scrub repaired 0B in 18:19:17 with 0 errors on Mon Jun 30 06:59:56 2025
config:


NAME                                    STATE   READ WRITE CKSUM
WD-80-8TB                               ONLINE     0     0     0
  ata-WDC_WD80xxxx-xxxxxxx_WD-xxxxxxxx  ONLINE     0     0     0
Inside the VM:

df -h
/dev/sdc1 7.0T 5.8T 1.3T 83% /srv/dev-disk-by-uuid-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxc73d

Inside the VM, usage was showing 95%; now it's down to 83%, but the zfs/zpool numbers haven't changed.

There are no snapshots to delete or backups on the drive, and the VM and PVE have been restarted a number of times without any allocation numbers changing.

It's a conventional (spinning) drive. I tried TRIM and autotrim to no avail.

The volume is only being used for OpenMediaVault storage.

I'm a long-time Windows technician, but these are my first six months of being immersed in Linux.
Help would be greatly appreciated. This is the first time I've turned to a forum for help, but I've spent so many hours searching for answers.
TIA
 
Are you taking any snapshots? What filesystem are you using, and is it 'trimmable'? (Just removing files doesn't mean the freed blocks actually get released back to the underlying storage.) Is your VM disk image larger than the underlying capacity? Perhaps you can shrink it to a 'safer' size (90-95% of the pool, if it's the only thing on it).
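Something like this would check all three at once (just a sketch, using the pool and volume names from your first post):
Bash:
# List any snapshots on the pool (they keep holding space after files are deleted)
zfs list -t snapshot -r WD-80-8TB
# Compare the zvol's advertised size and reservation against the pool's capacity
zfs get volsize,refreservation WD-80-8TB/vm-102-disk-0
zpool list WD-80-8TB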
 
Thanks for your reply...
Nope, not taking any snapshots.
I created an ext4 volume under OpenMediaVault (the PVE volume created is ZFS).
I tried using TRIM, but PVE said it wasn't trimmable. As I understand it, only SSDs are trimmable.
As for the VM disk image, as I understand it, it is smaller. I cannot find any place to make it smaller, via GUI or terminal, though I've looked.
thx
 
Please share this from the node side:
Bash:
cat /etc/pve/storage.cfg
zfs list -rt all -o name,used,avail,refer,mountpoint,refquota,refreservation
qm config 102 --current
Trim is also important for thin-allocated storage, no matter what the physical storage is. How/where did you execute the trim?
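For context on the "where": the two trim commands operate at different layers. A sketch of the distinction:
Bash:
# On the node: trims the pool's physical devices; this fails on spinning
# disks, which don't support TRIM at the device level
zpool trim WD-80-8TB
# Inside the guest: trims the guest filesystems; freed blocks only propagate
# down to the host zvol if the virtual disk is attached with discard enabled
fstrim -av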
 
Thank you for your reply. Here are the commands and their output.
cat /etc/pve/storage.cfg
dir: local
path /var/lib/vz
content vztmpl,iso,backup

lvmthin: local-lvm
thinpool data
vgname pve
content images,rootdir

zfspool: WD-80-8TB
pool WD-80-8TB
content images,rootdir
mountpoint /WD-80-8TB
nodes proxmox


zfs list -rt all -o name,used,avail,refer,mountpoint,refquota,refreservation


NAME                     USED   AVAIL  REFER  MOUNTPOINT  REFQUOTA  REFRESERV
WD-80-8TB                7.14T   299M    96K  /WD-80-8TB      none       none
WD-80-8TB/vm-102-disk-0  7.14T   112G  7.03T  -                  -      7.14T

root@proxmox:~# qm config 102 --current
balloon: 1000
boot: order=scsi0;net0
cores: 2
cpu: x86-64-v2-AES
description: Tosh
memory: 3500
meta: creation-qemu=9.0.2,ctime=1732463477
name: OMV
net0: virtio=B2:23:10:16:BA:3C,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-102-disk-0,iothread=1,size=10G
scsi2: WD-80-8TB:vm-102-disk-0,backup=0,iothread=1,size=7199G
scsihw: virtio-scsi-single
smbios1: uuid=f25c3ee1-448A-416c-a2d4-8Ebd6651552
sockets: 1
usb0: host=1058:25e2
vmgenid: 9a654c5c-b03e-4125-8a61-ca5648afdbfb

On the node I executed: zpool trim WD-80-8TB
and it replied "cannot trim: no devices in pool support trim operations"

I also did

zpool get autotrim WD-80-8TB and got

NAME PROPERTY VALUE SOURCE
WD-80-8TB autotrim on local

On the node I did
fstrim -v / and got
/: 52.8 GiB (56700821504 bytes) trimmed

In the VM I ran fstrim -v / twice and got
/: 5.5 GiB (5880098816 bytes) trimmed and
/: 177.6 MiB (186187776 bytes) trimmed

I'm not sure if this is helpful so far

TIA
 
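That output explains it: zpool trim talks to the pool's physical devices, and a spinning disk doesn't support TRIM at the device level, while fstrim inside the guest can only hand freed blocks back to the zvol if the virtual disk is attached with discard enabled, which your scsi2 line doesn't have. One way to add it from the CLI (a sketch, reusing the option string from your current disk line plus discard=on; the same checkbox also exists in the GUI under the disk's options):
Bash:
# Re-add the existing data disk with discard=on so guest-side fstrim
# can release freed ext4 blocks back to the zvol
qm set 102 --scsi2 WD-80-8TB:vm-102-disk-0,backup=0,iothread=1,size=7199G,discard=on
# Then, inside the VM, after a full stop/start of the VM:
fstrim -av
 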
Thanks for that info. Much progress!

Turning on "Discard" for the VM's disk and then running "fstrim -av" inside the VM adjusted everything properly, down to 80%.

I did the same thing on a 4TB disk that was full and inaccessible, but I had to run

"echo 6 > /sys/module/zfs/parameters/spa_slop_shift" on the node to give it a bit of space.

I found that trick here <https://forum.proxmox.com/threads/p...ot-become-completely-full.166768/#post-774198>
Now that disk went from 96% full to 57%, and I was able to move it back to the other VM and access the files!
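For anyone else who hits this: as I understand it, spa_slop_shift controls how much space ZFS holds back for internal use, roughly 1/2^N of the pool, and the OpenZFS default is 5 (1/32). A sketch of the temporary bump, assuming you restore the default afterwards:
Bash:
# Show the current slop reserve exponent (default 5 = 1/32 of the pool)
cat /sys/module/zfs/parameters/spa_slop_shift
# Temporarily halve the reserve to 1/64 so a full pool gets some headroom
echo 6 > /sys/module/zfs/parameters/spa_slop_shift
# ...free/trim space, then restore the default:
echo 5 > /sys/module/zfs/parameters/spa_slop_shift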

My thinking on not using thin provisioning is that it is faster for access. Is that true?

(Btw, I didn't find anything about sparse or any references to it in storage.cfg, but everything is back in order.)
I'll look for "best practices" to see whether it's a good idea to have "discard" turned on for most volumes.
Thanks again!
 
it is faster for access. Is that true?
I don't think it is in this case, but I haven't tested it.

I didn't find anything about sparse
The "sparse" text above links to docs about it. Leaving it unset basically just sets refreservation to the size of the disk rather than none. Thin provisioning comes with risks, so not using it with ZFS can be a valid choice. Doing the same in LVM land, by using LVM instead of LVM-Thin, would lose you snapshot support, for example.
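A sketch of what that looks like on your pool, using the names from this thread:
Bash:
# Check the current reservation on the zvol
zfs get refreservation WD-80-8TB/vm-102-disk-0
# Thick (no "sparse" on the zfspool entry in storage.cfg): refreservation = disk size
# Thin ("sparse 1" on the zfspool entry): refreservation = none
# An existing zvol can be switched by hand, if you accept the thin-provisioning risk:
zfs set refreservation=none WD-80-8TB/vm-102-disk-0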
 