VM's disk doesn't seem to trim

RickyM

New Member
Dec 3, 2022
Hi,

My LVM-Thin pool is getting larger by the day, very fast, and I'm almost out of disk space.
I have only one VM (Home Assistant) and nothing else.
Disk space usage only goes up and never goes down.
The Discard option is checked and SSD emulation too. I use an SSD, not an HDD.
I found the following guides.

https://gist.github.com/hostberg/86bfaa81e50cc0666f1745e1897c0a56
https://opensource.com/article/20/2/trim-solid-state-storage-linux

But when I run the command fstrim -av, it seems that the VM's drive is not being trimmed.

root@NUC:~# fstrim -av
/boot/efi: 510.7 MiB (535465984 bytes) trimmed on /dev/sda2
/: 231.3 MiB (242548736 bytes) trimmed on /dev/mapper/pve-root

When I look in /etc/fstab I don't see the VM's drive; is this why it isn't trimmed?
Should I somehow add the VM's disk here?

# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=824B-3413 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

I found the following post. Does this refer to /etc/fstab? How can I mount the file system within the guest?

https://forum.proxmox.com/threads/trim-ssds.46398/post-220577
 
LVM thin uses a much larger extent size in comparison to your filesystem block size, so a simple trim in the guest does not yield a perfectly trimmed result. The internal extent size is IIRC 2M (512x 4K blocks), and an extent counts as "used" if even one single 4K block in it is allocated. So in the worst case, you will not see any trim activity at all. For the best "trimmable" filesystem, I would recommend using ZFS with a volblocksize that matches your internal filesystem block size. With this, you will have a 1:1 mapping and the maximum possible trim. This has other drawbacks (and features) but is a completely different beast than LVM-thin.

Besides that, you can also try to overwrite your free space with zeros to implicitly trim it. In my experience, this works even better than just trimming.
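To illustrate the mismatch described above (a sketch; the 2M extent and 4K block figures are the IIRC values from this post, not measured on your pool):

```shell
#!/bin/sh
# Worst-case allocation amplification in an LVM-thin pool:
# a single in-use 4K filesystem block keeps a whole 2M extent
# allocated, so trimming the surrounding free space reclaims nothing.
extent_kib=2048   # assumed thin-pool extent size (2 MiB)
block_kib=4       # typical filesystem block size (4 KiB)
echo "blocks per extent: $((extent_kib / block_kib))"
```

With a ZFS volblocksize equal to the filesystem block size the ratio is 1:1, so every freed block is reclaimable.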
 
Thank you for your suggestions!
I'll look into overwriting the free space with zero and hope this will fix the issue.
The other suggestion seems complicated, I was very happy that I had accomplished this setup :)
 
Did you actually trim inside the guest? "root@NUC:~# fstrim -av" sounds like you tried fstrim on your PVE host and not inside the Home Assistant VM. Your Home Assistant OS has to trim/discard too.
 
I tried the command in HA also, this is the result:

[core-ssh ~]$ fstrim -av
fstrim: unrecognized option: a
BusyBox v1.35.0 (2022-07-18 12:23:02 UTC) multi-call binary.

Usage: fstrim [OPTIONS] MOUNTPOINT

        -o OFFSET       Offset in bytes to discard from
        -l LEN          Bytes to discard
        -m MIN          Minimum extent length
        -v              Print number of discarded bytes
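Since the BusyBox fstrim in the HA shell has no -a option, each mountpoint would have to be trimmed one at a time, something like the sketch below (the mountpoints are examples; substitute the real filesystems from df, and run as root):

```shell
#!/bin/sh
# BusyBox fstrim has no -a/--all, so trim each mountpoint explicitly.
# The mountpoints below are examples; take the real list from `df`.
for mp in /data /config /backup; do
    if fstrim -v "$mp" 2>/dev/null; then
        echo "trimmed $mp"
    else
        echo "skipped $mp"
    fi
done
```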
 
It seems that I need to make the disk read-only to let zerofree overwrite the free space with zeros, because it can't work on a mounted read-write file system.
I'm not quite sure how to do this.
Is the last line in fstab correct?

# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=824B-3413 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
/dev/sda3 none LVM2_member ro 0 0

I got the type LVM2_member from the command lsblk.
 
I never use zerofree because of this. Just overwrite the free space inside of your guest with dd:

Code:
dd if=/dev/zero of=/zero bs=64k; sync; sync; sync; rm -f /zero
 
Thank you!
Noob question: can I just copy/paste this code or do I need to alter it somehow for my own setup?
And just to be sure, when you say "inside of your guest", do you mean inside my NUC's shell or the VM (home assistant) CLI?
I think the NUC is the host and the VM is the guest, just checking to be sure.
 
And just to be sure, when you say "inside of your guest", do you mean inside my NUC's shell or the VM (home assistant) CLI?
I think the NUC is the host and the VM is the guest, just checking to be sure.
Yes, guest = VM (home assistant) in your case.

can I just copy/paste this code or do I need to alter it somehow for my own setup?
The command assumes a single-filesystem layout. You may want to re-run it for every (real) filesystem that is mounted; please check with df.
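Re-running the dd zero-fill per mountpoint could look like this sketch (the mountpoint list is an example, take yours from df; count= caps the write here for safety, drop it to fill all free space as the one-liner above does):

```shell
#!/bin/sh
# Zero-fill free space on each real filesystem, then delete the file so
# the zeroed blocks become free again in the thin pool.
# /tmp is a stand-in; in the guest you would list the real mountpoints,
# e.g.: for mp in / /data /config; do
# count=16 caps the write for this sketch; remove it to fill all free
# space, as in the dd one-liner above.
for mp in /tmp; do
    dd if=/dev/zero of="$mp/zerofill" bs=64k count=16 2>/dev/null
    sync
    rm -f "$mp/zerofill"
    echo "zeroed free space on $mp"
done
```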
 
Thanks!

This is the result of df:

root@NUC:~# df
df: /mnt/pve/PC-Ricky: Host is down
Filesystem             1K-blocks      Used Available Use% Mounted on
udev                     3980044         0   3980044   0% /dev
tmpfs                     802748       996    801752   1% /run
/dev/mapper/pve-root    28465204   3171904  23822020  12% /
tmpfs                    4013728     46800   3966928   2% /dev/shm
tmpfs                       5120         0      5120   0% /run/lock
/dev/sda2                 523244       328    522916   1% /boot/efi
/dev/fuse                 131072        16    131056   1% /etc/pve
//192.168.1.1/Proxmox  500106236 178475548 321630688  36% /mnt/pve/extbackup
tmpfs                     802744         0    802744   0% /run/user/0

Is this more than one file system? If so, how can I alter your code to re-run it for every file system?
I only have one SSD with Proxmox installed and just one VM, Home Assistant.

With df, sda3 is not shown; sda3 is the VM, the LVM-Thin drive where I'd like to overwrite the free space with zeros.

root@NUC:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0 111.8G  0 disk
├─sda1                         8:1    0  1007K  0 part
├─sda2                         8:2    0   512M  0 part /boot/efi
└─sda3                         8:3    0 111.3G  0 part
  ├─pve-swap                 253:0    0     7G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0  27.8G  0 lvm  /
  ├─pve-data_tmeta           253:2    0     1G  0 lvm
  │ └─pve-data-tpool         253:4    0  60.7G  0 lvm
  │   ├─pve-data             253:5    0  60.7G  1 lvm
  │   ├─pve-vm--100--disk--0 253:6    0     4M  0 lvm
  │   └─pve-vm--100--disk--1 253:7    0    50G  0 lvm
  └─pve-data_tdata           253:3    0  60.7G  0 lvm
    └─pve-data-tpool         253:4    0  60.7G  0 lvm
      ├─pve-data             253:5    0  60.7G  1 lvm
      ├─pve-vm--100--disk--0 253:6    0     4M  0 lvm
      └─pve-vm--100--disk--1 253:7    0    50G  0 lvm
sdb                            8:16   1  15.3M  0 disk
 
This is the HOST, not the GUEST.
Oh yes, I'm sorry!

This is the result of df in the guest. Can I just run your code in the guest, or do I need to alter it because there are more file systems?


[core-ssh ~]$ df
Filesystem     1K-blocks     Used Available Use% Mounted on
overlay         50861792 34094940  14663280  70% /
devtmpfs         2005744        0   2005744   0% /dev
tmpfs            2008048        0   2008048   0% /dev/shm
/dev/sda8       50861792 34094940  14663280  70% /ssl
/dev/sda8       50861792 34094940  14663280  70% /backup
/dev/sda8       50861792 34094940  14663280  70% /share
/dev/sda8       50861792 34094940  14663280  70% /media
/dev/sda8       50861792 34094940  14663280  70% /data
/dev/sda8       50861792 34094940  14663280  70% /config
/dev/sda8       50861792 34094940  14663280  70% /addons
tmpfs             803220     1500    801720   0% /run/dbus
/dev/sda8       50861792 34094940  14663280  70% /etc/asound.conf
/dev/sda8       50861792 34094940  14663280  70% /run/audio
/dev/sda8       50861792 34094940  14663280  70% /etc/hosts
/dev/sda8       50861792 34094940  14663280  70% /etc/resolv.conf
/dev/sda8       50861792 34094940  14663280  70% /etc/hostname
tmpfs            2008048        0   2008048   0% /dev/shm
/dev/sda8       50861792 34094940  14663280  70% /etc/pulse/client.conf
tmpfs            2008048        0   2008048   0% /proc/asound
tmpfs            2008048        0   2008048   0% /proc/acpi
devtmpfs         2005744        0   2005744   0% /proc/kcore
devtmpfs         2005744        0   2005744   0% /proc/keys
devtmpfs         2005744        0   2005744   0% /proc/timer_list
tmpfs            2008048        0   2008048   0% /proc/scsi
tmpfs            2008048        0   2008048   0% /sys/firmware
 
Just an update:

For me, the solution was partly found in the second link.
The command fstrim -av worked better in my case, but the most important part is typing login in the Home Assistant CLI from within Proxmox. A # prompt appears, and there I typed fstrim -av.
I also found a folder in Home Assistant called Backup where a lot of backups were accumulating; deleting those cleared a lot of space for me as well. After deleting the backups I ran fstrim again.
To be safe, I stopped the core in Home Assistant before running fstrim.

Overwriting the free space with zeros didn't work for some reason. After a while it stopped with an error that there was no free space available, and no space was freed up afterwards.
 