[SOLVED] LVM-Thin (local-lvm) not reclaiming space despite TRIM/discard configuration

marfab

New Member
May 5, 2025

Problem Description

After deleting files from LXC containers, the LVM-Thin pool (local-lvm) does not reflect freed space. Key details:

  • Configuration:
    • Containers use LVM-Thin storage with discard enabled.
    • issue_discards = 1 in /etc/lvm/lvm.conf.
    • `pct fstrim` reports success, but almost no space is reclaimed (a little is freed sometimes, but nowhere near the size of the deleted files).
  • Symptoms:
    • lvs shows high data% (approximately 97%) even after deletions.

Relevant Logs/Outputs

Code:
root@host:~# lvs -o lv_name,data_percent,metadata_percent,discards
  LV            Data%  Meta%  Discards
  data          97.73  3.16   passdown
  root                               
  swap                               
  vm-100-disk-0 15.80         passdown
  vm-101-disk-0 12.10         passdown
  vm-102-disk-0 30.90         passdown
  vm-102-disk-1 85.13         passdown
  vm-102-disk-2 3.04          passdown
  vm-103-disk-0 79.64         passdown
  vm-104-disk-0 56.52         passdown
  vm-105-disk-0 47.76         passdown

Code:
root@host:~# lsblk -D /dev/sda 
NAME                         DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
sda                                 0      512B       2G         0
├─sda1                              0      512B       2G         0
├─sda2                              0      512B       2G         0
└─sda3                              0      512B       2G         0
  ├─pve-swap                        0      512B       2G         0
  ├─pve-root                        0      512B       2G         0
  ├─pve-data_tmeta                  0      512B       2G         0
  │ └─pve-data-tpool                0      512B       2G         0
  │   ├─pve-data                    0      512B       2G         0
  │   ├─pve-vm--102--disk--0        0       64K      64M         0
  │   ├─pve-vm--102--disk--1        0       64K      64M         0
  │   ├─pve-vm--102--disk--2        0       64K      64M         0
  │   ├─pve-vm--100--disk--0        0       64K      64M         0
  │   ├─pve-vm--101--disk--0        0       64K      64M         0
  │   ├─pve-vm--103--disk--0        0       64K      64M         0
  │   ├─pve-vm--104--disk--0        0       64K      64M         0
  │   └─pve-vm--105--disk--0        0       64K      64M         0
  └─pve-data_tdata                  0      512B       2G         0
    └─pve-data-tpool                0      512B       2G         0
      ├─pve-data                    0      512B       2G         0
      ├─pve-vm--102--disk--0        0       64K      64M         0
      ├─pve-vm--102--disk--1        0       64K      64M         0
      ├─pve-vm--102--disk--2        0       64K      64M         0
      ├─pve-vm--100--disk--0        0       64K      64M         0
      ├─pve-vm--101--disk--0        0       64K      64M         0
      ├─pve-vm--103--disk--0        0       64K      64M         0
      ├─pve-vm--104--disk--0        0       64K      64M         0
      └─pve-vm--105--disk--0        0       64K      64M         0

Code:
root@host:~# lvs -a -o lv_name,origin
  LV              Origin
  data                 
  [data_tdata]         
  [data_tmeta]         
  [lvol0_pmspare]       
  root                 
  swap                 
  vm-100-disk-0         
  vm-101-disk-0         
  vm-102-disk-0         
  vm-102-disk-1         
  vm-102-disk-2         
  vm-103-disk-0         
  vm-104-disk-0         
  vm-105-disk-0

I tried to fix this on my own using knowledge from this forum and AI help, but I cannot find a solution. My knowledge of Proxmox/Linux is very basic, so if there is any information I should add to my post, please tell me.
 
Code:
root@host:~# pct config 102
arch: amd64
cores: 2
description: <div align='center'><a href='https%3A//Helper-Scripts.com' target='_blank' rel='noopener noreferrer'><img src='https%3A//raw.githubusercontent.com/tteck/Proxmox/main/misc/images/logo-81x112.png'/></a>%0A%0A  # Cockpit LXC%0A%0A  <a href='https%3A//ko-fi.com/proxmoxhelperscripts'><img src='https%3A//img.shields.io/badge/&#x2615;-Buy me a coffee-blue' /></a>%0A  </div>%0A
dev0: /dev/net/tun
features: keyctl=1,nesting=1
hostname: cockpit
memory: 1024
mp0: local-lvm:vm-102-disk-1,mp=/shareddata,mountoptions=discard,size=900G
mp1: local-lvm:vm-102-disk-2,mp=/docker,mountoptions=discard,size=64G
net0: name=eth0,bridge=vmbr0,hwaddr=BC:24:11:FA:D1:28,ip=dhcp,type=veth
onboot: 1
ostype: debian
rootfs: local-lvm:vm-102-disk-0,size=4G,mountoptions=discard
swap: 512
tags: proxmox-helper-scripts
unprivileged: 1
 
That looks okay and shouldn't even need pct fstrim. What does pct exec 102 -- df -h say? Please also share plain lvs without any arguments; right now I can't even tell how large data is.
 
Also keep in mind that the extent (chunk) size in LVM-thin is AFAIK 2 MB, which is very large compared to a filesystem's 4K blocks. The whole 2 MB chunk has to be empty before it can be reclaimed. Ceph has a similar setup; on PVE only ZFS is able to unmap at a smaller volblocksize (AFAIK 16K nowadays).
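To illustrate the granularity point with a bit of arithmetic (the 2 MB figure is a recollection; on this system the lsblk -D output reports a 64K discard granularity for the thin volumes, and the real chunk size can be read with lvs -o lv_name,chunk_size):

```shell
# Number of 4 KiB filesystem blocks per thin-pool chunk: every single
# one of them must be free/discarded before the chunk is returned to
# the pool. Chunk sizes below are illustrative, not read from a system.
for chunk_kib in 64 2048; do
  echo "${chunk_kib} KiB chunk = $((chunk_kib * 1024 / 4096)) x 4 KiB blocks"
done
```

The larger the chunk, the more likely a single surviving 4K block pins the whole chunk as allocated.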
 
Code:
root@host:~# pct exec 102 -- df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/pve-vm--102--disk--0  3.9G  1.1G  2.6G  29% /
/dev/mapper/pve-vm--102--disk--1  885G  751G   91G  90% /shareddata
/dev/mapper/pve-vm--102--disk--2   63G  284M   59G   1% /docker
none                              492K  4.0K  488K   1% /dev
none                              8.0K  4.0K  4.0K  50% /dev/net/tun
udev                              7.8G     0  7.8G   0% /dev/tty
tmpfs                             7.8G     0  7.8G   0% /dev/shm
tmpfs                             3.1G  1.6M  3.1G   1% /run
tmpfs                             5.0M     0  5.0M   0% /run/lock


Code:
root@host:~# lvs
  LV            VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- <816.21g             97.73  3.16                           
  root          pve -wi-ao----   96.00g                                                   
  swap          pve -wi-ao----    8.00g                                                   
  vm-100-disk-0 pve Vwi-aotz--   64.00g data        15.80                                 
  vm-101-disk-0 pve Vwi-aotz--   32.00g data        12.10                                 
  vm-102-disk-0 pve Vwi-aotz--    4.00g data        30.90                                 
  vm-102-disk-1 pve Vwi-aotz--  900.00g data        85.13                                 
  vm-102-disk-2 pve Vwi-aotz--   64.00g data        3.04                                   
  vm-103-disk-0 pve Vwi-aotz--    2.00g data        79.79                                 
  vm-104-disk-0 pve Vwi-aotz--    4.00g data        56.52                                 
  vm-105-disk-0 pve Vwi-aotz--   22.00g data        47.76
 
Looking at vm-102-disk-1, I feel like it matches pretty closely: 85% of 900G is 765G, and df says 751G are used.
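For the record, the back-of-the-envelope number can be reproduced in the shell (900 and 85.13 are the LSize and Data% values from the lvs output above):

```shell
# Pool usage implied by the Data% column: 85.13% of a 900 GiB volume.
awk 'BEGIN { printf "%.0f GiB\n", 900 * 85.13 / 100 }'
```

That gives roughly 766 GiB allocated in the pool versus 751G used per df inside the CT; the small gap is plausibly filesystem overhead plus chunks not yet reclaimed.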
 
I understand, but the usage hasn’t changed at all after deleting files. I removed about 250 GB of media (some via the Cockpit GUI and some via the command line) and it didn’t free up any space.
 
Try again: run watch -n1 lvs on the node, then go inside the CT, run the commands below, and watch how Data% changes on the node.
Wait 10 seconds or so between the two commands. Note that fallocate will not work for this test, because it reserves space without actually writing data, so the thin pool never allocates the blocks.
Bash:
dd status=progress if=/dev/zero of=/shareddata/test.bin bs=1M count=4096

# Wait 10s or so
rm -f /shareddata/test.bin

Also make sure the discard option is actually applied and not still pending. Check pct config 102 --current or the color highlighting on the CT's Resources tab.
 
It appears that I had a .Trash folder that was holding all the deleted files. Nevertheless, thank you very much for your help, it was much appreciated!
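For anyone hitting the same thing: files deleted through a GUI often land in a hidden trash directory instead of being removed from disk. A sketch for finding and sizing such folders (the /shareddata default is the mount point from this thread; pass another path as the first argument):

```shell
#!/bin/sh
# Find hidden trash directories (e.g. .Trash-1000) under a mount point
# and report how much space they hold. Usage: ./find-trash.sh [path]
target="${1:-/shareddata}"
find "$target" -xdev -maxdepth 3 -type d -name '.Trash*' \
    -exec du -sh {} + 2>/dev/null
```

Emptying those directories (and then trimming) should let the pool's Data% actually drop.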
 
Glad I could be of help. I like gdu (apt install gdu; gdu /) to interactively check space usage. Might be helpful in the future.
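If an interactive tool is overkill, a similar overview is available with plain du (a sketch; -x keeps it from crossing mount points, and root may be needed to read everything):

```shell
# Summarize top-level directory sizes on the root filesystem,
# largest last. -x keeps du from descending into other mounts.
du -xh --max-depth=1 / 2>/dev/null | sort -h
```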
 