Thinpool usage VS actual disk usage inside VM

tchjntr

New Member
Oct 1, 2024
Hi there, I'm new to the Proxmox forums.

First of all, thanks for developing Proxmox and letting us experiment with this incredible piece of software. I am still in the "trial and error" phase but I am enjoying every second of it. I have searched the forums, but maybe I was not using the correct keywords, so I am posting this new thread.

I have a Debian 12 VM set up with two disks:

Disk 1: 32GB for the root filesystem created in "local-lvm" storage (LVM-Thin) on the OS drive which is a 2TB WD SA500 M.2 SATA SSD.
Disk 2: 2TB for the /home partition created in "timechain" storage (LVM-Thin) on a second drive which is a 2TB WD SA500 2.5" SATA SSD.

Both disks have SSD Emulation and Discard enabled.

I have noticed that there is a substantial discrepancy between the usage reported in the "timechain" thinpool and the actual usage of the /home folder in the VM.

If I run df -h inside the VM, I get this:

Code:
/dev/sda1        32G  2.5G   28G   9% /
/dev/sdb1       1.8T  633G  1.1T  37% /home

In the Proxmox web UI, however, I see that the usage for the "local-lvm" thinpool is 3.49GB (screenshot below for reference).

[Attachment: Screenshot 2024-10-01 at 21.52.22.png]

And the usage for the "timechain" thinpool is almost 720GB (screenshot below for reference).

[Attachment: Screenshot 2024-10-01 at 21.47.14.png]

There is most probably something I have misconfigured but I can't figure out what it is. I appreciate any input.
 
LVM-thin uses, AFAIK, chunks of 2 MB, which is the smallest unit that can be allocated or reclaimed in thin provisioning. If the filesystem writes only a single 4K block, the pool still allocates a full chunk and the rest of that chunk is wasted, so your LVM-thin pool reports more usage than the actual filesystem inside the VM.
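The rounding effect described above can be sketched with a few lines of arithmetic (illustrative only, not Proxmox code; the 2 MiB chunk size is the value assumed in this post, so check your pool's actual value):

```python
# Worst-case allocation when a thin pool rounds every write up to its
# chunk size.  2 MiB is the chunk size assumed in this post.
CHUNK = 2 * 1024 * 1024  # assumed thin-pool chunk size in bytes

def allocated(bytes_written: int, chunk: int = CHUNK) -> int:
    """Space the pool must allocate for a write of the given size."""
    return -(-bytes_written // chunk) * chunk  # ceiling division

# A single 4 KiB filesystem block still pins one full 2 MiB chunk:
print(allocated(4096))  # 2097152 bytes, i.e. 2 MiB for 4 KiB of data
```

With many small, scattered writes the pool-level usage can therefore sit well above what `df` reports inside the guest.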

If you want to reclaim as much space as possible, you need to compact the data on your disk regularly. On Linux this can be done by shrinking the ext4 filesystem to its minimum: run resize2fs /dev/sdb1 1, read the error message, and use the minimum size it reports as the new size. Afterwards run resize2fs without a size argument so the filesystem grows back to fill the whole disk. Then fstrim the filesystem and you will have more free space in the pool (probably still not matching exactly the usage you see inside the VM).
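The shrink/grow/trim cycle described above might look roughly like this inside the VM (a sketch, not a tested recipe: the device name /dev/sdb1 is taken from the df output earlier in the thread, and ext4 can only be shrunk while unmounted, so plan for /home being offline):

```shell
# Sketch of the compaction cycle; adapt device and mount point to your setup.
umount /home

e2fsck -f /dev/sdb1    # resize2fs requires a freshly checked filesystem

resize2fs /dev/sdb1 1  # intentionally too small: fails, but reports the minimum size
# Re-run with the minimum size it printed, e.g.:
# resize2fs /dev/sdb1 <minimum-from-error-message>
# (resize2fs -M /dev/sdb1 shrinks straight to the minimum in one step.)

resize2fs /dev/sdb1    # no size argument: grow back to fill the whole device

mount /home
fstrim -v /home        # tell the thin pool which blocks are actually free
```

With Discard enabled on the virtual disk (as in this setup), the fstrim at the end is what lets the freed chunks propagate back to the thin pool on the host.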

This is high-level space optimization and a general problem with hypervisors, not specific to PVE. However, you can reduce the effect by using other storage systems as your backing storage:
  • ZFS has an 8K or 16K default blocksize (also configurable)
  • LVM-thin has, AFAIK, 2 MB chunks
  • Ceph has 4 MB objects
Other storage systems do not offer thin provisioning without another compacting layer (e.g. qcow2).
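Rather than relying on the AFAIK values above, the actual chunk size of each thin pool can be read on the PVE host (a sketch; the pool names shown in `lvs` output will depend on your VG/LV naming, e.g. pve/data and the "timechain" pool from this thread):

```shell
# List thin pools and volumes with their chunk size and fill level.
# Field names are standard lvs output columns; no changes are made.
lvs -a -o lv_name,vg_name,chunk_size,data_percent,lv_size
```

Comparing chunk_size and data_percent here against df inside the guest makes the rounding overhead directly visible.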
 
