[SOLVED] local-lvm usage much higher than LXC bootdisk size

aleksa

Hi,

I'm unsure where to go from here, what info I need to gather, or what to debug, as I'm not entirely sure what's going on yet.
Here's all I know so far:

I have an LXC container that has 100GB of assigned storage, and a volume that's slightly under that size.

Taking a look at the LXC container, this is what I see:
Bootdisk size 39.59% (38.75 GiB of 97.87 GiB)

However, when I look at the volume, local-lvm:
Usage 96.48% (82.39 GB of 85.39 GB)

I'm not sure where that extra usage is coming from. I do have another tiny container on that volume as well, but it's limited to 20GB and currently using only 7.52 GiB. I did try backing it up to local and deleting it, to see how much space that would recoup, and it freed up ~15 GiB. So for whatever reason, the usage on the volume seems to be almost exactly double what the LXC containers are actually using.

Here's the output of `lvs -a`:
Code:
  LV              VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data            pve twi-aotz-- <79.53g             96.48  4.27                           
  [data_tdata]    pve Twi-ao---- <79.53g                                                   
  [data_tmeta]    pve ewi-ao----   1.00g                                                   
  [lvol0_pmspare] pve ewi-------   1.00g                                                   
  root            pve -wi-ao----  12.00g                                                   
  swap            pve -wi-ao----   8.00g                                                   
  vm-100-disk-0   pve Vwi-aotz-- 100.00g data        68.76                                 
  vm-101-disk-0   pve Vwi-aotz--  20.00g data        39.83

Any clues as to what might be going on?
I'm on Proxmox 7.2-4

What can I do/run to get additional information on this?
 
Hi,
it seems that the whole thin pool (i.e. data) itself only has ~80 GiB available. One shouldn't let the pool run full, so it's better not to create images larger than the whole pool.
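To see the overcommitment concretely: the two thin volumes are provisioned for more space than the pool physically has. A quick sketch of the arithmetic, using the sizes from the `lvs -a` output above (values specific to this setup):

```shell
# Compare the summed virtual sizes of the thin volumes against the
# physical size of the "data" thin pool (figures from this thread).
pool_size=79.53      # GiB, size of the "data" thin pool
vm100=100.00         # GiB, virtual size of vm-100-disk-0
vm101=20.00          # GiB, virtual size of vm-101-disk-0

total=$(awk -v a="$vm100" -v b="$vm101" 'BEGIN { printf "%.2f", a + b }')
echo "provisioned: ${total} GiB, pool: ${pool_size} GiB"
awk -v t="$total" -v p="$pool_size" 'BEGIN { exit (t > p) ? 0 : 1 }' \
  && echo "thin pool is overcommitted"
```

120 GiB is promised out of a ~79.5 GiB pool, which is fine for thin provisioning as long as actual usage stays well below the pool size.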
 
> so it's better not to create images larger than the whole pool.

I'm not entirely sure what you mean by this, the VM images?
Yeah, they are larger than the pool, though they aren't as full as they are reported to be there.

That vm-100-disk-0, which is 100GB, for some reason says that it's 68.76% full, however the actual use when I take a look at the container says this:
Bootdisk size
17.72% (17.35 GiB of 97.87 GiB)

- Even less than before: I deleted some stuff and the local-lvm usage still stayed up, like the files are still there, but they aren't inside of the VM.
That's my problem in a nutshell.
 
> That vm-100-disk-0, which is 100GB,

That space isn't actually there unless you extend the pool it's on.

> - Even less than before: I deleted some stuff and the local-lvm usage still stayed up, like the files are still there, but they aren't inside of the VM.
> That's my problem in a nutshell.

The usage reported by lvs is what's being used from LVM's perspective. The bootdisk size is the usage within the container. You can use `pct fstrim 100` on the host to trim the disk.
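The gap between those two numbers can be made concrete: Data% in lvs is how much of the volume's virtual size has been allocated from the pool, while the bootdisk figure is filesystem usage inside the container; fstrim tells the pool which allocated-but-unused blocks can be given back. A rough sketch of the arithmetic, using the figures from this thread:

```shell
# Data% from `lvs` is allocation at the LVM layer; the bootdisk figure
# is filesystem usage inside the container (values from this thread).
lv_size=100.00       # GiB, virtual size of vm-100-disk-0
data_pct=68.76       # Data% reported by lvs
fs_used=17.35        # GiB actually used inside the container

allocated=$(awk -v s="$lv_size" -v p="$data_pct" 'BEGIN { printf "%.2f", s * p / 100 }')
reclaimable=$(awk -v a="$allocated" -v u="$fs_used" 'BEGIN { printf "%.2f", a - u }')
echo "allocated in pool: ${allocated} GiB"
echo "roughly reclaimable by fstrim: ${reclaimable} GiB"
```

So roughly 51 GiB of pool space is allocated to blocks the container's filesystem no longer uses, which is what trimming should hand back.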
 
First time that I ran it, got this:
Code:
pct fstrim 100
/var/lib/lxc/100/rootfs/: 80.5 GiB (86397526016 bytes) trimmed

Though no space was cleared up on `local-lvm`. I tried running it again after a bit; it trims about 100MB-2GB depending on how long I wait, but the `local-lvm` usage still stays the same.

Running it on the other container, 101, also reports that it trimmed a bit, but the storage used on `local-lvm` does not clear up either.
 
Huh, hold on, lvs -a now shows this though:

Code:
lvs -a
  LV                        VG  Attr       LSize   Pool Origin        Data%  Meta%  Move Log Cpy%Sync Convert
  data                      pve twi-aotz-- <79.53g                    98.00  4.58                         
  [data_tdata]              pve Twi-ao---- <79.53g                                                         
  [data_tmeta]              pve ewi-ao----   1.00g                                                         
  [lvol0_pmspare]           pve ewi-------   1.00g                                                         
  root                      pve -wi-ao----  12.00g                                                         
  snap_vm-100-disk-0_vzdump pve Vri---tz-k 100.00g data vm-100-disk-0                                     
  swap                      pve -wi-ao----   8.00g                                                         
  vm-100-disk-0             pve Vwi-aotz-- 100.00g data               19.76                               
  vm-101-disk-0             pve Vwi-aotz--  20.00g data               39.83

Though why is data now still stuck at 98%? local-lvm in the UI also displays similar usage:
(attached screenshot: local-lvm usage in the UI)

Edit: I'm stupid, the snapshot from the dump that I tried not long ago was still there!
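For anyone hitting the same thing: a leftover vzdump snapshot shows up in the lvs listing with a snap_ prefix. A minimal sketch that filters such snapshots out of a saved copy of the output (text taken from this thread); the snapshot would then be removed on the host with lvremove, e.g. `lvremove pve/snap_vm-100-disk-0_vzdump`:

```shell
# Spot leftover vzdump snapshot LVs in an `lvs` listing by their
# "snap_" name prefix (listing text copied from this thread).
lvs_output='  LV                        VG  Attr       LSize   Pool Origin
  data                      pve twi-aotz-- <79.53g
  snap_vm-100-disk-0_vzdump pve Vri---tz-k 100.00g data vm-100-disk-0
  vm-100-disk-0             pve Vwi-aotz-- 100.00g data'

echo "$lvs_output" | awk '$1 ~ /^snap_/ { print $1 }'
```

After removing the snapshot LV, the pool's Data% should drop back to roughly what the containers themselves are using.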
 