local-lvm Disk Usage does not match actual usage

anson

Hi,

I am facing an issue with the disk usage of local-lvm (LVM-thin). The usage Proxmox reports for the LVM thin pool (800GB+) is roughly double the actual disk usage inside the VM (400GB+).

There is only one VM on this Proxmox node. I have tried running fstrim inside the VM, but the disk usage in LVM does not go down.

Fact:
The VM disk is configured as 1000GB, while the physical disk is only 960GB.

May I know what the cause is? Is this related to the "Meta%" value?

Code:
root@pve:~# lvs
  LV            VG   Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve  twi-aotz--  861.96g             97.24  40.38
  root          pve  -wi-ao----   96.00g
  swap          pve  -wi-ao----    4.00g
  vm-123-disk-1 pve  Vwi-aotz-- 1000.00g data        83.82
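
If I read the thin-pool accounting correctly, the two Data% values describe the same allocated space, just relative to different sizes (a rough calculation, in GiB):

Code:
  vm-123-disk-1: 1000.00 GiB * 83.82% ≈ 838 GiB   (blocks ever written by the guest)
  data (pool)  :  861.96 GiB * 97.24% ≈ 838 GiB   (the same blocks, as a share of the pool)

So the pool really is holding ~838GB of allocated blocks, even though the files inside the VM only add up to 400GB+.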

Code:
root@pve:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                BTGmoU-B4W6-mFpL-MWty-5d2H-sjPt-wQGOns
  LV Write Access        read/write
  LV Creation host, time proxmox, 2016-11-02 01:17:53 +0800
  LV Status              available
  # open                 2
  LV Size                4.00 GiB
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:1

  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                5JCJuy-FIC0-fq1h-6zEV-Z5U2-Iyb2-HHQqXp
  LV Write Access        read/write
  LV Creation host, time proxmox, 2016-11-02 01:17:53 +0800
  LV Status              available
  # open                 1
  LV Size                96.00 GiB
  Current LE             24576
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:0

  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                uivrxj-QRr1-2qTF-sC0C-eGfG-w2ux-DgxEaP
  LV Write Access        read/write
  LV Creation host, time proxmox, 2016-11-02 01:17:53 +0800
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 2
  LV Size                861.96 GiB
  Allocated pool data    97.24%
  Allocated metadata     40.38%
  Current LE             220663
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:4

  --- Logical volume ---
  LV Path                /dev/pve/vm-123-disk-1
  LV Name                vm-123-disk-1
  VG Name                pve
  LV UUID                w7rJjG-dhXY-Vbck-AQsf-qskA-RR8f-YcauvU
  LV Write Access        read/write
  LV Creation host, time pve, 2017-01-30 07:53:05 +0800
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                1000.00 GiB
  Mapped size            83.82%
  Current LE             256000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:6
 
Hi,
it's dangerous to give guests more disk space than you actually have...

If you have 960GB and create a 1000GB VM disk, you will run into trouble as soon as more than 959GB is used inside the VM.
You need to expand your Data-LV first.

Udo
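
Expanding the thin pool would look roughly like this (a sketch only; the sizes are placeholders and it assumes the pve volume group still has, or is given, free extents, e.g. by adding another physical volume):

Code:
vgs pve                                    # check for free extents in the VG first
lvextend -L +100G pve/data                 # grow the thin pool's data LV (size is a placeholder)
lvextend --poolmetadatasize +1G pve/data   # grow the pool metadata as well if Meta% gets high
lvs pve                                    # verify the new size and the Data%/Meta% values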
 

Hi Udo, spot on. This is a temporary Proxmox node for disaster recovery, and I am trying to move the VM to local ZFS-based storage first, then use pve-zsync to complete the "move out" to the production node with minimal downtime. However, I am now having issues with the 'Move Disk' function, and I am not sure whether they are caused by the 'large' LVM volume, since the actual disk usage inside the VM (400GB+) is not that high.
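
The intended workflow, roughly (the destination host and pool names are placeholders, and the exact pve-zsync options should be checked against its manpage):

Code:
# one-time setup of the sync job for VM 123 towards the production node
pve-zsync create --source 123 --dest prod-node:rpool/data --maxsnap 2 --verbose
# subsequent incremental syncs (normally run from cron)
pve-zsync sync --source 123 --dest prod-node:rpool/data --maxsnap 2 --verbose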
 
Have you considered running fstrim -av inside the VM? This should mark deleted blocks as free again, shrinking the VM disk image back to what is really used. I run fstrim every night before backing up the VM to remote storage, so the disk images stay only as large as necessary.
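
One thing worth checking first: the guest can only hand freed blocks back to the thin pool if the virtual disk is attached with the discard option enabled. A rough sketch, assuming the disk hangs off scsi0 on VM 123 (adjust to whatever bus it really uses; the VM may need a restart for the change to take effect):

Code:
qm config 123 | grep -i disk                            # see how vm-123-disk-1 is attached
qm set 123 --scsi0 local-lvm:vm-123-disk-1,discard=on   # set the disk option with discard enabled
# then, inside the guest:
fstrim -av
# and back on the host, check whether the pool's Data% has dropped:
lvs pve/data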

Give it a try.

Edit: sorry, I had missed the line where you said you already tried this!