Error when creating snapshot

mn124700

While trying to take a snapshot of a Windows 10 VM, I get the following error...

lvcreate snapshot 'satassd/snap_vm-100-disk-0_Win10_initial' error: Cannot create new thin volume, free space in thin pool satassd/data reached threshold.

I'm not very experienced with Proxmox and am not quite sure what to do about this error. Am I running out of memory somewhere? In the web interface, I seem to have sufficient memory.

What do I do to fix this?
Thanks
Eric
 
Not memory. Disk storage probably.

Could you please post the output of the following commands?
Code:
lvs
vgs
pvs
pvesm status
That should give you and us some hints.
You can also see (a part of) that information in the GUI:
  • node -> Disks -> LVM and LVM-Thin and
  • in the summary of your storage in the resource tree on the left side
 
Thanks for the reply. Here are the results...

Code:
root@pve:~# lvs
  LV                                  VG      Attr       LSize    Pool Origin        Data%  Meta%
  data                                pve     twi-aotz-- <338.36g                    54.04  3.59
  root                                pve     -wi-ao----   96.00g
  snap_vm-102-disk-0_OMV5_c           pve     Vri---tz-k   50.00g data vm-102-disk-0
  snap_vm-102-disk-0_OMV5_initial     pve     Vri---tz-k   50.00g data vm-102-disk-0
  snap_vm-102-disk-0_OMV_3_24_21      pve     Vri---tz-k   50.00g data vm-102-disk-0
  snap_vm-102-disk-0_OMV_d            pve     Vri---tz-k   50.00g data vm-102-disk-0
  snap_vm-107-disk-0_PlayOn           pve     Vri---tz-k   50.00g data vm-107-disk-0
  snap_vm-107-disk-0_PlayOn_1         pve     Vri---tz-k   50.00g data vm-107-disk-0
  snap_vm-107-disk-0_PlayOn_3_24_21   pve     Vri---tz-k   50.00g data vm-107-disk-0
  snap_vm-107-disk-0_PlayOn_b         pve     Vri---tz-k   50.00g data vm-107-disk-0
  snap_vm-109-disk-0_Plex3Deb         pve     Vri---tz-k   50.00g data
  snap_vm-109-disk-0_Plex3Deb_b       pve     Vri---tz-k   50.00g data
  snap_vm-109-disk-0_Plex_01          pve     Vri---tz-k   50.00g data vm-109-disk-0
  snap_vm-109-disk-0_Plex_02          pve     Vri---tz-k   50.00g data vm-109-disk-0
  snap_vm-109-disk-0_Plex_03          pve     Vri---tz-k   50.00g data vm-109-disk-0
  snap_vm-109-disk-0_Plex_3_24_21     pve     Vri---tz-k   50.00g data vm-109-disk-0
  snap_vm-109-disk-0_Plex_p           pve     Vri---tz-k   50.00g data vm-109-disk-0
  swap                                pve     -wi-ao----    8.00g
  vm-102-disk-0                       pve     Vwi-aotz--   50.00g data               12.84
  vm-107-disk-0                       pve     Vwi-a-tz--   50.00g data               72.67
  vm-109-disk-0                       pve     Vwi-a-tz--   50.00g data               90.49
  base-111-disk-0                     satassd Vri---tz-k   50.00g data
  base-112-disk-0                     satassd Vri---tz-k   50.00g data
  data                                satassd twi-aotz--   <1.82t                     8.70  100.00
  snap_vm-101-disk-0_Mint3_29_21      satassd Vri---tz-k   50.00g data vm-101-disk-0
  snap_vm-104-disk-0_Anaconda3_21_21  satassd Vri---tz-k   50.00g data vm-104-disk-0
  snap_vm-106-disk-0_Ubuntu3_21_21    satassd Vri---tz-k   50.00g data vm-106-disk-0
  snap_vm-113-disk-0_ZM3_21_21        satassd Vri---tz-k   50.00g data vm-113-disk-0
  vm-100-disk-0                       satassd Vwi-a-tz--   50.00g data               21.96
  vm-101-disk-0                       satassd Vwi-a-tz--   50.00g data               43.96
  vm-104-disk-0                       satassd Vwi-a-tz--   50.00g data               44.77
  vm-106-disk-0                       satassd Vwi-a-tz--   50.00g data               20.60
  vm-113-disk-0                       satassd Vwi-a-tz--   50.00g data               98.47

Code:
root@pve:~# vgs
  VG      #PV #LV #SN Attr   VSize    VFree
  pve       1  21   0 wz--n- <465.26g <16.00g
  satassd   1  12   0 wz--n-   <1.82t       0

Code:
root@pve:~# pvs
  PV             VG      Fmt  Attr PSize    PFree
  /dev/nvme0n1p3 pve     lvm2 a--  <465.26g <16.00g
  /dev/sda1      satassd lvm2 a--    <1.82t       0

Code:
root@pve:~# pvesm status
  Name       Type     Status  Total       Used        Available   %
  SATASSD    lvmthin  active  1953304576  169937498   1783367077   8.70%
  local      dir      active    98559220   26574340     66935332  26.96%
  local-lvm  lvmthin  active   354791424  191729285    163062138  54.04%
  vmbackups  cifs     active  5813268592  1203562140  4609706452  20.70%

I see that the "meta" part of data is at 100%. Is that the issue? Do I need to expand this somehow?

Thanks
Eric
 
That might well be.
Generally you should do something like
Code:
lvextend --poolmetadatasize +1G satassd/data
However, it looks like your volume group satassd is already full.
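
In case it helps, here is a rough sequence for checking the pool's metadata usage and the volume group's free space before attempting that extension (assuming the pool is satassd/data, as in your output):
Code:
# Show the thin pool's data and metadata usage (-a also lists hidden LVs)
lvs -a -o lv_name,lv_size,data_percent,metadata_percent satassd
# Show how many free extents the volume group still has
vgs satassd
# Only if VFree is large enough, grow the pool's metadata LV
lvextend --poolmetadatasize +1G satassd/data
If vgs reports VFree 0, the lvextend call will fail with "Insufficient free space", because there are no unallocated extents left in the volume group to hold the extra metadata.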
 
Thanks for your help. I'm not sure why the volume group says it's full. I have plenty of space there and can create new VMs in it. Perhaps I didn't set it up right? When I initialized that SSD, I created it with one large partition that filled the available space. Should I not have partitioned it, or was it supposed to have multiple partitions?

When I run "lvextend" as suggested, I get the error "Insufficient free space: 256 extents needed, but only 0 available". I'm not sure why I seem to be running out of space; the total size of all the VMs stored on satassd is not nearly enough to fill it.

Also, why does vgs show SATASSD having no free space, while pvesm status shows only 8.7% used?

-Eric
 
Okay, I've discovered the importance of setting that "discard" flag on VMs when using an SSD, which for some reason is off by default. Apparently, I've used up all my space without realizing it, because nothing was ever deleted.

How do I clean up the previous "deleted" files so I can get my space back?

Thanks,
Eric
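
For what it's worth, here is a rough sketch of how the discard option can be turned on from the CLI. The "scsi0" slot and the exact volume name below are assumptions; check qm config first and use whatever bus/slot the disk is actually attached to:
Code:
# See how the disk is currently attached (bus/slot and options)
qm config 100
# Re-assign the same volume with discard enabled
# (adjust "scsi0" and the volume name to match the qm config output)
qm set 100 --scsi0 satassd:vm-100-disk-0,discard=on
The same option is also available in the GUI when editing the disk under the VM's Hardware tab.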
 
Do you mean the fstrim command?
 
Thanks for the reply. I tried running fstrim -av, but it still doesn't free up any space in satassd. Am I doing something wrong?

- Eric

Code:
root@pve:~# fstrim -av
/: 195.7 MiB (205225984 bytes) trimmed on /dev/mapper/pve-root
root@pve:~# lvextend --poolmetadatasize +1G satassd/data
  Insufficient free space: 256 extents needed, but only 0 available
root@pve:~# vgs
  VG      #PV #LV #SN Attr   VSize    VFree
  pve       1  21   0 wz--n- <465.26g <16.00g
  satassd   1  11   0 wz--n-   <1.82t       0
root@pve:~# pvs
  PV             VG      Fmt  Attr PSize    PFree
  /dev/nvme0n1p3 pve     lvm2 a--  <465.26g <16.00g
  /dev/sdc1      satassd lvm2 a--    <1.82t       0
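
One thing that output shows: fstrim on the host only trims the host's own filesystems (here /dev/mapper/pve-root), so it can't give back blocks that the guests wrote into the thin pool. The trim has to happen inside each VM, and only disks with discard enabled will actually return their blocks to the pool. A rough sketch, assuming Linux guests with the qemu-guest-agent installed (the VM ID 100 is just taken from the lvs output above):
Code:
# Inside a Linux guest:
fstrim -av
# Or, from the Proxmox host, for a guest running the qemu-guest-agent:
qm guest cmd 100 fstrim
# Afterwards, check whether the pool's Data% has dropped:
lvs satassd/data
For a Windows guest, the built-in "Optimize Drives" retrim does the same job.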
 
