Proxmox local-lvm is almost full even though the VM disk is not yet full

Yehezkiel

New Member
Jul 10, 2024
Hi everyone, I have a problem and a question about local-lvm on my Proxmox machine. The machine runs only one VM (ID 100), and the disk capacity I allocated to that VM is 1.5TB.

Currently the VM's disk usage has reached 215GB, and the VM has 2 snapshots (taken without RAM).

So my question is: why does local-lvm show 1.27TB used out of 1.37TB (92.11%), even though the VM only uses 215GB? Counting the 2 snapshots as well, my logic says it should be at most 215GB (VM) + 215GB (snapshot) + 215GB (snapshot) = 645GB.

Yet local-lvm has hit 1.27TB of 1.37TB (92.11%).

I would appreciate your help, thank you.


Screenshot: disk usage inside VM 100 (shell).

=================================================================================

Screenshots: disk usage on the Proxmox host (shell and GUI).

===============================================================
root@xxx:~# lvdisplay
  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                eyWK1K-EOpu-Zf83-ezqV-esW9-Rviz-MUSoTS
  LV Write Access        read/write (activated read only)
  LV Creation host, time proxmox, 2024-02-12 11:59:05 +0700
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                <1.25 TiB
  Allocated pool data    92.11%
  Allocated metadata     2.70%
  Current LE             327617
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:5

  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                jX1eBe-epHv-b0B4-m2u2-u6eL-KRjG-pXDVuS
  LV Write Access        read/write
  LV Creation host, time proxmox, 2024-02-12 11:58:49 +0700
  LV Status              available
  # open                 2
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                af7NHe-UTKH-zUi2-P138-4HzG-RNfI-fx2TwV
  LV Write Access        read/write
  LV Creation host, time proxmox, 2024-02-12 11:58:49 +0700
  LV Status              available
  # open                 1
  LV Size                425.62 GiB
  Current LE             108959
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1

  --- Logical volume ---
  LV Path                /dev/pve/snap_vm-100-disk-0_OS_Ubuntu_ready_for_xxxxxxx
  LV Name                snap_vm-100-disk-0_OS_Ubuntu_ready_for_xxxxxxxx
  VG Name                pve
  LV UUID                eiOw7k-4Z0p-9omf-qf1t-AjNg-4rBM-ZQsgND
  LV Write Access        read only
  LV Creation host, time proxmox-hpe-ap, 2024-02-25 22:38:46 +0700
  LV Pool name           data
  LV Status              NOT available
  LV Size                1.36 TiB
  Current LE             357628
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Path                /dev/pve/vm-100-disk-0
  LV Name                vm-100-disk-0
  VG Name                pve
  LV UUID                6Ddri7-DSat-8Mxy-5SeA-V1pj-NYs7-KYHIYi
  LV Write Access        read/write
  LV Creation host, time proxmox-hpe-ap, 2024-04-22 02:23:31 +0700
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                1.47 TiB
  Mapped size            56.37%
  Current LE             385788
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:6

  --- Logical volume ---
  LV Path                /dev/pve/snap_vm-100-disk-0_Check1
  LV Name                snap_vm-100-disk-0_Check1
  VG Name                pve
  LV UUID                PEl5qB-p40m-Vn2c-9nIE-DPFZ-al3B-VDhi5J
  LV Write Access        read only
  LV Creation host, time proxmox-hpe-ap, 2024-06-19 12:42:46 +0700
  LV Pool name           data
  LV Thin origin name    vm-100-disk-0
  LV Status              NOT available
  LV Size                1.46 TiB
  Current LE             383228
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
============================================================
root@xxx:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               pve
  PV Size               1.74 TiB / not usable 2.98 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              457343
  Free PE               12033
  Allocated PE          445310
  PV UUID               XH6K9q-OdnV-JO0p-Q4aJ-yDrQ-8UGT-edmyLS
==========================================================
 

Yehezkiel said:
So my question is: why does local-lvm show 1.27TB used out of 1.37TB (92.11%), even though the VM only uses 215GB? Counting the 2 snapshots as well, my logic says it should be at most 215GB (VM) + 215GB (snapshot) + 215GB (snapshot) = 645GB.
It does not matter what the actual disk usage is at the moment you look. If you already filled the disk up once, that data is still allocated in the thin pool; inside the guest it is merely marked as deleted, not zeroed. This is the norm for thin-provisioned storage. You need to trim the free space so that deleted blocks are actually passed down as discards and the underlying storage backend can deallocate them. Snapshots are immutable, so you cannot free up space referenced by a snapshot, only space belonging to the current state of your machine.
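
A minimal sketch of what that trim looks like in practice, assuming the VM's disk is attached as scsi0 and the storage is named local-lvm (adjust the bus/device name and storage name to your actual configuration):

# On the Proxmox host: let the guest's discards reach the thin pool
# (needs a disk type that supports discard, e.g. SCSI; takes effect after the VM is restarted)
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on

# Inside the guest: release the blocks of already-deleted files
fstrim -av

# Back on the host: check how full the thin pool and each LV really are
lvs pve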
 
Two things to note: one of them is specific to your situation and one is general.

Specific to your situation:

You have 3 LVs for the virtual disk of VM 100:
1. vm-100-disk-0 (actual current disk)
2. snap_vm-100-disk-0_OS_Ubuntu_ready_for_xxxxxxxx (snapshot 1)
3. snap_vm-100-disk-0_Check1 (snapshot 2)

However, looking at the LV details of the two snapshots (1 & 2 above), you will notice the following: snapshot 2 contains the line "LV Thin origin name vm-100-disk-0", which is consistent with correct snapshot / storage allocation, but snapshot 1 contains no such line. Furthermore, look at the creation dates of all three LVs. The actual current disk was created at 2024-04-22 02:23:31 +0700 and snapshot 2 at 2024-06-19 12:42:46 +0700, perfectly consistent with snapshot 2 being created after the current disk. Snapshot 1, however, was created at 2024-02-25 22:38:46 +0700, which is actually prior to the creation date of the current disk. All of this leads me to believe that either a restore from a snapshot was committed (on 2024-04-22 ?) or you somehow used an (old) snapshot to create the VM's original virtual disk.
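
One quick way to see these relationships at a glance is an lvs report; the column names below are standard lvm2 report fields, but double-check them against your lvm2 version:

# Show each LV's origin, creation time and how much data it has written into the pool
lvs -o lv_name,origin,lv_time,data_percent,lv_size pve

A thin snapshot that really branches off the current disk should list vm-100-disk-0 in its Origin column; the Data% column shows how much of its virtual size each LV has actually written into the pool.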

Another point (general, not specific to you): AFAIK, as long as there is a snapshot in place for a VM, data deleted within the VM will not actually be released as free space, because that "freed up" space is held back by the snapshot. This is required so that the snapshot can correctly track data changes in the volume.

Both of the above explain why your logic of "215GB (VM) + 215GB (snapshot) + 215GB (snapshot) = 645GB" does not hold.
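
If the goal is to reclaim the space, a possible sequence, assuming you no longer need the snapshots (the snapshot names passed to qm are the ones shown in the GUI under VM 100 > Snapshots, e.g. Check1, not the LV names):

# On the Proxmox host: delete a snapshot you no longer need (repeat for the other one)
qm delsnapshot 100 Check1

# Then run the trim from the earlier sketch inside the guest and verify the pool usage dropped
lvs pve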
 
