[SOLVED] fstrim not working for lvm-thin volume mounted in lxc

gsmitty

New Member
Jan 15, 2024
According to the Turnkey File Server I have running on one of my LXCs (ID=101), I am using 1.2T of the total 2.9T capacity of the lvm-thin volume I have mounted.

[Screenshot: file server disk usage showing 1.2T of 2.9T used]


The GUI, however, shows that I'm using 2.7TB of that volume.

[Screenshot: Proxmox GUI showing 2.7TB used on the lvm-thin volume]

After reading numerous other threads, I ran:

# pct fstrim 101

That command reports that 1.7TB was trimmed, which is the expected amount after I manually deleted files in the .recycle directory using the shell on the LXC.

After the fstrim, the disk usage shown in the GUI for that volume doesn't change (still showing 2.7TB used). A reboot doesn't change the GUI either, and after a reboot fstrim trims the same 1.7TB again (as if I hadn't run it to begin with).

Any thoughts on what's happening? Did I screw up by manually deleting files in the /mnt/mydata/.recycle directory (via the CLI) instead of using fstrim to begin with? If so, how do I fix it?

I've only been using PVE for about a month and the concept of LVM is pretty new to me. I didn't know until today that fstrim existed.

Any help is appreciated and I'm happy to give any outputs that might help diagnose.

Thanks
 
Hi,

I'd suggest reading up on what fstrim actually is and does, e.g. the Arch wiki has a concise explanation, or Red Hat.

But tl;dr: it just tells the SSD (and/or thin-volume provider) which blocks are unused - it does not and cannot magically free up space.
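
As a rough illustration (mount point and figure taken from this thread; in a container you would normally let pct fstrim do this from the host): fstrim only reports how much space it signalled as unused to the underlying storage, e.g.

# fstrim -v /mnt/mydata
/mnt/mydata: 1.7 TiB (...) trimmed

Nothing in the filesystem changes; the trimmed blocks only become reclaimable for the thin pool (or SSD) underneath.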

So LXC 101 uses ~1.2TiB of data, as you say correctly, but I assume all the other LXCs also have their disks on the same storage?
Since no other storage seems to be used much, I'd assume the other ~1.5TiB is coming from the other guests.

If you look under VM Disks and CT Volumes in the sidebar, you get an overview of all existing disks.
For a closer inspection, the command lvs -a can be used, which gives an overview of all provisioned thin volumes with their actual usage.
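
For example, a rough sketch (these are standard lvs output fields; the pool name is the one from this thread and <vg> is a placeholder for your volume group):

# lvs -a -o lv_name,pool_lv,lv_size,data_percent
# lvs <vg>/thpl-media

Data% of the thin pool is roughly what the storage usage in the GUI reflects, while Data% of each thin volume shows what that guest has actually allocated.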
 
Thanks for the reply!

Actually, all my other CTs/VMs have their root storage on a separate ZFS pool. This LXC is the file server where I mounted the disk storing my media collection. The collection itself is right at 1.2TB. The 1.7TB that is “free” according to fstrim is space that I freed up manually by deleting files from the lvm-thin volume (mydata) at /mnt/mydata/.recycle

The OS on the LXC shows that I'm only using about 40% of that drive's capacity. However, the GUI of my host (showing disk usage in my screenshot) says I'm at over 80%. I'd like the GUI to accurately show actual usage, since there should be 1.7TB more free space than it's currently showing.
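
For reference, the two numbers I'm comparing come from roughly these views (just a sketch, mount point as above):

# pct exec 101 -- df -h /mnt/mydata
# lvs -a

df inside the container shows filesystem usage (~40% here), while lvs on the host shows how much of the thin volume/pool is actually allocated.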
 
Output of lvs -a

[Screenshot: lvs -a output]

This shows that I'm only using about 41.75% of the capacity of the vm-disk, but the thinpool (thpl-media) itself is still at 82.64% usage. Is there a way to get these to match so that I can see the capacity of my thinpool in my host GUI?

My main reason for wanting this is backups. I use PBS and I don't want my server attempting to back up 2.7TB when the backup should only be about half that size. I've already attempted the backup, and the time difference between a 1.6TB and a 2.7TB backup is significant.

Thanks again
 
It looks like you still have a snapshot of the VM disk, snap_vm-101-disk-1_vzdump.

If this snapshot was created before you cleared the 1.7TiB of data (which I'd assume, based on the information given), then that data is of course still stored as part of the snapshot.
You can delete that snapshot, which should free the "missing" space.
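
As a hedged sketch of how that could look (the snapshot name vzdump is inferred from the LV name, and <vg> is a placeholder for your volume group, which isn't shown in this thread):

# pct listsnapshot 101
# pct delsnapshot 101 vzdump

If the snapshot no longer appears in the container's configuration (e.g. it is a leftover from an interrupted backup), the thin LV could instead be removed directly:

# lvremove <vg>/snap_vm-101-disk-1_vzdump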
 

That did it! Thank you so much for your help! I've been blown away with the timeliness of responses and quality of support on this forum. I really appreciate it.
 
Great to hear that did it!

Please just mark the thread as SOLVED by editing the first post, so that others with the same problem can find it more easily in the future! :)
 
