Thin pool running out of space, but I only used 17%

jcd420 (New Member), Jul 7, 2025
Sorry, I don't really know how to use Proxmox. One day I created a VM, and when I deleted the VM I did not delete its drive. Now I get this error.

Code:
Sum of all thin volume sizes (<599.97 GiB) exceeds the size of thin pool pve/data and the size of whole volume group (<475.94 GiB).
snapshotting 'drive-efidisk0' (local-lvm:vm-100-disk-0)


I only have 1 VM of 102 GB and my HD is 500 GB. How can I find and delete this lost space? Can you tell me the commands to run? I have very little experience with Proxmox.
 
Please share the output of:
Bash:
qm rescan
lvs -a
qm config 100 --current
 
Code:
  LV                      VG  Attr       LSize    Pool Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  data                    pve twi-aotz-- <348.82g                    21.68  1.54
  [data_tdata]            pve Twi-ao---- <348.82g
  [data_tmeta]            pve ewi-ao----   <3.56g
  [lvol0_pmspare]         pve ewi-------   <3.56g
  root                    pve -wi-ao----   96.00g
  snap_vm-100-disk-0_up25 pve Vri---tz-k    4.00m data vm-100-disk-0
  snap_vm-100-disk-0_up7  pve Vri---tz-k    4.00m data vm-100-disk-0
  snap_vm-100-disk-1_up25 pve Vri---tz-k  102.00g data vm-100-disk-1
  snap_vm-100-disk-1_up7  pve Vri---tz-k  102.00g data vm-100-disk-1
  swap                    pve -wi-ao----    8.00g
  vm-100-disk-0           pve Vwi-aotz--    4.00m data               14.06
  vm-100-disk-1           pve Vwi-aotz--  102.00g data               42.41
  vm-100-state-up25       pve Vwi-a-tz--  <24.49g data               33.32
  vm-100-state-up7        pve Vwi-a-tz--  <24.49g data               48.20

Code:
root@promox:~# qm config 100 --current
agent: 1
bios: ovmf
boot: order=scsi0
cores: 4
cpu: host
description: # Home Assistant OS%0A### https%3A//github.com/tteck/Proxmox%0A[![ko-fi](https%3A//ko-fi.com/img/githubbutton_sm.svg)](https%3A//ko-fi.com/D1D7EP4GF)
efidisk0: local-lvm:vm-100-disk-0,efitype=4m,size=4M
localtime: 1
memory: 12288
meta: creation-qemu=8.0.2,ctime=1696341914
name: ha
net0: virtio=e4:5f:01:90:03:af,bridge=vmbr0
onboot: 1
ostype: l26
parent: up7
scsi0: local-lvm:vm-100-disk-1,cache=writethrough,discard=on,size=102G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=935dae02-982b-477a-8981-021e9baaa45c
tablet: 0
tags: proxmox-helper-scripts
usb0: host=1a86:7523
vmgenid: ed7813d9-8d87-46c3-91ef-a5c9bb778131
 
Please use code blocks so the formatting is preserved. Try to delete some of your snapshots; you should not keep them around for too long.
Note that this is just a warning and you are not actually running out of space. Check the Data% column.
The space is thin-provisioned. It just means that if all the disks were completely full, they would take more space than you have. Also see this and this.
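For example, snapshots can be deleted from the GUI or with qm; the snapshot names up7 and up25 below are the ones visible in your lvs output:
Bash:
# list the snapshots Proxmox knows about for VM 100
qm listsnapshot 100

# delete one by name, e.g. the older one
qm delsnapshot 100 up7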
 
I only have 2 good snapshots, and 2 bad ones that I can't delete. How do I delete them?

(screenshot attached)
 
Code:
TASK ERROR: lvremove snapshot 'pve/snap_vm-100-disk-1_upBad' error:   Failed to find logical volume "pve/snap_vm-100-disk-1_upBad"
 
Not sure how that happened. You can remove that snapshot entry manually in the VM's config file, /etc/pve/qemu-server/100.conf.
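A minimal sketch of that manual edit, assuming the orphaned snapshot is named upBad as in the error above:
Bash:
# VM configs live on the pmxcfs cluster filesystem
nano /etc/pve/qemu-server/100.conf

# delete the whole snapshot section: the [upBad] header line and
# everything below it up to the next [snapname] header (or end of
# file), then save; also remove any "parent: upBad" line elsewhere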
 
Code:
root@promox:~# lvs -a
  LV                      VG  Attr       LSize    Pool Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  data                    pve twi-aotz-- <348.82g                    21.90  1.57                           
  [data_tdata]            pve Twi-ao---- <348.82g                                                           
  [data_tmeta]            pve ewi-ao----   <3.56g                                                           
  [lvol0_pmspare]         pve ewi-------   <3.56g                                                           
  root                    pve -wi-ao----   96.00g                                                           
  snap_vm-100-disk-0_up25 pve Vri---tz-k    4.00m data vm-100-disk-0                                       
  snap_vm-100-disk-0_up7  pve Vri---tz-k    4.00m data vm-100-disk-0                                       
  snap_vm-100-disk-1_up25 pve Vri---tz-k  102.00g data vm-100-disk-1                                       
  snap_vm-100-disk-1_up7  pve Vri---tz-k  102.00g data vm-100-disk-1                                       
  swap                    pve -wi-ao----    8.00g                                                           
  vm-100-disk-0           pve Vwi-aotz--    4.00m data               14.06                                 
  vm-100-disk-1           pve Vwi-aotz--  102.00g data               42.51                                 
  vm-100-state-up25       pve Vwi-a-tz--  <24.49g data               33.32                                 
  vm-100-state-up7        pve Vwi-a-tz--  <24.49g data               48.20

Code:
root@promox:~# qm config 100 --current
agent: 1
bios: ovmf
boot: order=scsi0
cores: 4
cpu: host
description: # Home Assistant OS%0A### https%3A//github.com/tteck/Proxmox%0A[![ko-fi](https%3A//ko-fi.com/img/githubbutton_sm.svg)](https%3A//ko-fi.com/D1D7EP4GF)
efidisk0: local-lvm:vm-100-disk-0,efitype=4m,size=4M
localtime: 1
memory: 12288
meta: creation-qemu=8.0.2,ctime=1696341914
name: ha
net0: virtio=e4:5f:01:90:03:af,bridge=vmbr0
onboot: 1
ostype: l26
parent: up7
scsi0: local-lvm:vm-100-disk-1,cache=writethrough,discard=on,size=102G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=935dae02-982b-477a-8981-021e9baaa45c
tablet: 0
tags: proxmox-helper-scripts
usb0: host=1a86:7523
vmgenid: ed7813d9-8d87-46c3-91ef-a5c9bb778131
 
Maybe this info helps?
Why does it say 350 GB when I have 500 GB total?

(screenshot attached)

But I have only 109 GB reserved.

(screenshot attached)
 
Darn, I think that's the issue. I have a 100 GB VM and created a 250 GB VM, and when I deleted the 250 GB VM I did not delete the drive. So now 350 GB is reserved, but I'm only using 100 GB.
 
No. The size of data is fixed. See your lvs output. I encourage you to read the LVM section in the PVE docs and to read about LVM in general. The Arch Wiki is a good place for the latter.
 
I'm sorry, I don't understand the docs; they're too technical for me. So my best option is to do a backup and format the hard drive, so I can recover the lost 250 GB? I did not want all of it assigned, just 100 GB, as it was before I created the second VM, 101. I deleted that VM, but its HD is still reserved.

(screenshot attached)
 
You are looking at the wrong thing. Look at LVM-Thin below it. Most of the space is allocated to data, but data itself does not currently use all of it; see the Data% column. What you see is normal. There is nothing wrong or to fix here. It just means the space in the volume group is 97% assigned to volumes. Think of it like a partition: just because it's 350G large does not mean it's full.

I can't really explain it better than the docs, and terms like volume group above will mean nothing without reading them. If you have a specific question I can answer it, but you need to learn the fundamentals yourself.
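A quick way to see both numbers side by side: vgs shows how much of the volume group is assigned to volumes (the ~97% here), while Data% on the thin pool shows what is actually written:
Bash:
# space assigned out of the volume group pve
vgs pve

# actual usage of the thin pool: Data% is what is really written
lvs pve/data -o lv_name,lv_size,data_percent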
 