fstrim and backup size

Miro_I

Apr 2, 2021
Hello, I am running Proxmox 6.2 with a few VMs stored on LVM storage backed by an HDD RAID.
I have one guest with a 1 TB SCSI disk and Discard disabled (I do not want a discard on every delete).
Inside the guest I have an LVM partition with 112 GB currently used out of 983 GB total.
In the last few days some data on this partition was removed, and the Proxmox backup shows:
INFO: backup is sparse: 537.12 GiB (53%) total zero data
INFO: transferred 1010.00 GiB in 6329 seconds (163.4 MiB/s)
Backup compression is set to ZSTD and the result is 297 GB.

Running fstrim -v / shows some data being trimmed, but the backup size does not shrink.

Could you tell me how to reduce the backup size?
 
You have to enable 'discard' on the disk. With it enabled, the discard commands are passed through to the underlying storage.
It does not automatically discard on every delete; the discard commands still have to be sent from within the VM.
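For example, assuming VMID 100 and a disk on scsi0 stored on 'local-lvm' (the names here are just placeholders for your setup), you could set the option on the host and then trim from inside the guest:

Code:
# on the Proxmox host: re-set the disk with discard enabled (volume name is an example)
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on

# inside the guest, after restarting the VM: pass the unused blocks down to the storage
fstrim -v /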
 
You have to restart the VM (shutdown + start) for it to take effect.
 
I set Discard to On, stopped the VM, started it again and ran fstrim -v /, but the backup is still very large.

When I ran fstrim it reported fewer bytes than expected (less than the difference between the uncompressed backup size and the used space).

Maybe the earlier fstrim runs, done while discard was still disabled, already marked the guest filesystem as trimmed.

How can I solve this? Write zeros over the whole disk?
 
So you ran an fstrim in the guest and it improved the situation, but the backup is still nowhere near your actual usage?
In this case you could try filling the free space of your filesystem with zeros.
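A minimal sketch of what that could look like inside the guest (the file name /zerofill is arbitrary); the zeroed blocks compress to almost nothing in the backup, and deleting and trimming the file afterwards releases them again:

Code:
# inside the guest: fill the free space with zeros (dd stops when the filesystem is full)
dd if=/dev/zero of=/zerofill bs=1M status=progress
sync
# remove the file and pass the now-free blocks down to the storage
rm /zerofill
fstrim -v /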
 
Right. It seems the earlier fstrim runs, done while discard was disabled, told the guest that the trim was done, so after enabling discard and rebooting the next fstrim did almost nothing (there were not many deletes after the reboot).
I filled the free space with zeros, deleted the file, trimmed, and now the backup size is OK.
 
Glad your issue is solved.
You can mark the thread 'Solved' by clicking on 'Edit thread' above your first post and choosing the prefix '[Solved]'.
 
I still have issues with trim and large backups. Fstrim runs almost instantly, in 1-2 seconds, and reports for example 90 GiB trimmed. Is it possible that such a large amount of data can be trimmed in 1-2 seconds?

Yesterday I did a zero-fill of the guest VM, as the zst backups had reached 580 GB while the guest disk had only 124 GB used.
Immediately after the zero-fill and cleanup I ran the backup manually and it was 86.02 GiB.
16 hours later, when the automatic backup started, the backup size was 305.71 GiB.


On the guest VM I have fstrim in an hourly cron job:

Code:
59 * * * * echo `date` >> /tmp/trim.log ; /usr/sbin/fstrim -a -v >> /tmp/trim.log

Here is the trim.log content covering the time between the 2 backups:

Code:
Tue 27 Apr 14:20:49 CEST 2021
/var/lib/mysql: 0 B (0 bytes) trimmed on /dev/mapper/ssd-mysql
/boot: 0 B (0 bytes) trimmed on /dev/sda2
/home/domain/maildir_indexes: 494.9 MiB (518897664 bytes) trimmed on /dev/mapper/ssd-dovecot
/: 747.5 MiB (783761408 bytes) trimmed on /dev/mapper/hdd-root

Tue 27 Apr 14:32:16 CEST 2021
/var/lib/mysql: 0 B (0 bytes) trimmed on /dev/mapper/ssd-mysql
/boot: 0 B (0 bytes) trimmed on /dev/sda2
/home/domain/maildir_indexes: 358.6 MiB (375992320 bytes) trimmed on /dev/mapper/ssd-dovecot
/: 831 GiB (892279078912 bytes) trimmed on /dev/mapper/hdd-root

Tue 27 Apr 14:59:01 CEST 2021
/var/lib/mysql: 0 B (0 bytes) trimmed on /dev/mapper/ssd-mysql
/boot: 0 B (0 bytes) trimmed on /dev/sda2
/home/domain/maildir_indexes: 1 GiB (1098334208 bytes) trimmed on /dev/mapper/ssd-dovecot
/: 1.5 GiB (1596915712 bytes) trimmed on /dev/mapper/hdd-root

Tue 27 Apr 15:59:01 CEST 2021
/var/lib/mysql: 0 B (0 bytes) trimmed on /dev/mapper/ssd-mysql
/boot: 0 B (0 bytes) trimmed on /dev/sda2
/home/domain/maildir_indexes: 1 GiB (1082802176 bytes) trimmed on /dev/mapper/ssd-dovecot
/: 1.4 GiB (1495396352 bytes) trimmed on /dev/mapper/hdd-root

Tue 27 Apr 16:59:01 CEST 2021
/var/lib/mysql: 0 B (0 bytes) trimmed on /dev/mapper/ssd-mysql
/boot: 0 B (0 bytes) trimmed on /dev/sda2
/home/domain/maildir_indexes: 717 MiB (751845376 bytes) trimmed on /dev/mapper/ssd-dovecot
/: 1.3 GiB (1343463424 bytes) trimmed on /dev/mapper/hdd-root

Tue 27 Apr 17:59:01 CEST 2021
/var/lib/mysql: 0 B (0 bytes) trimmed on /dev/mapper/ssd-mysql
/boot: 0 B (0 bytes) trimmed on /dev/sda2
/home/domain/maildir_indexes: 1004.6 MiB (1053392896 bytes) trimmed on /dev/mapper/ssd-dovecot
/: 1.6 GiB (1672511488 bytes) trimmed on /dev/mapper/hdd-root

Tue 27 Apr 18:59:01 CEST 2021
/var/lib/mysql: 0 B (0 bytes) trimmed on /dev/mapper/ssd-mysql
/boot: 0 B (0 bytes) trimmed on /dev/sda2
/home/domain/maildir_indexes: 915.9 MiB (960397312 bytes) trimmed on /dev/mapper/ssd-dovecot
/: 1.9 GiB (2001477632 bytes) trimmed on /dev/mapper/hdd-root

Tue 27 Apr 19:59:01 CEST 2021
/var/lib/mysql: 0 B (0 bytes) trimmed on /dev/mapper/ssd-mysql
/boot: 0 B (0 bytes) trimmed on /dev/sda2
/home/domain/maildir_indexes: 591.1 MiB (619794432 bytes) trimmed on /dev/mapper/ssd-dovecot
/: 872.7 MiB (915087360 bytes) trimmed on /dev/mapper/hdd-root

Tue 27 Apr 20:59:01 CEST 2021
/var/lib/mysql: 0 B (0 bytes) trimmed on /dev/mapper/ssd-mysql
/boot: 0 B (0 bytes) trimmed on /dev/sda2
/home/domain/maildir_indexes: 255.3 MiB (267661312 bytes) trimmed on /dev/mapper/ssd-dovecot
/: 820.9 MiB (860725248 bytes) trimmed on /dev/mapper/hdd-root

Tue 27 Apr 21:59:01 CEST 2021
/var/lib/mysql: 0 B (0 bytes) trimmed on /dev/mapper/ssd-mysql
/boot: 0 B (0 bytes) trimmed on /dev/sda2
/home/domain/maildir_indexes: 253.5 MiB (265764864 bytes) trimmed on /dev/mapper/ssd-dovecot
/: 1.1 GiB (1130905600 bytes) trimmed on /dev/mapper/hdd-root

Tue 27 Apr 22:59:01 CEST 2021
/var/lib/mysql: 0 B (0 bytes) trimmed on /dev/mapper/ssd-mysql
/boot: 0 B (0 bytes) trimmed on /dev/sda2
/home/domain/maildir_indexes: 193.3 MiB (202674176 bytes) trimmed on /dev/mapper/ssd-dovecot
/: 819.2 MiB (858972160 bytes) trimmed on /dev/mapper/hdd-root

Tue 27 Apr 23:59:01 CEST 2021
/var/lib/mysql: 0 B (0 bytes) trimmed on /dev/mapper/ssd-mysql
/boot: 0 B (0 bytes) trimmed on /dev/sda2
/home/domain/maildir_indexes: 60.2 MiB (63102976 bytes) trimmed on /dev/mapper/ssd-dovecot
/: 819.2 MiB (858968064 bytes) trimmed on /dev/mapper/hdd-root

Wed 28 Apr 00:59:01 CEST 2021
/var/lib/mysql: 0 B (0 bytes) trimmed on /dev/mapper/ssd-mysql
/boot: 0 B (0 bytes) trimmed on /dev/sda2
/home/domain/maildir_indexes: 380.8 MiB (399278080 bytes) trimmed on /dev/mapper/ssd-dovecot
/: 2.3 GiB (2430140416 bytes) trimmed on /dev/mapper/hdd-root

Wed 28 Apr 01:59:02 CEST 2021
/var/lib/mysql: 0 B (0 bytes) trimmed on /dev/mapper/ssd-mysql
/boot: 0 B (0 bytes) trimmed on /dev/sda2
/home/domain/maildir_indexes: 193.1 MiB (202514432 bytes) trimmed on /dev/mapper/ssd-dovecot
/: 92.4 GiB (99252297728 bytes) trimmed on /dev/mapper/hdd-root

Wed 28 Apr 02:59:01 CEST 2021
/var/lib/mysql: 0 B (0 bytes) trimmed on /dev/mapper/ssd-mysql
/boot: 0 B (0 bytes) trimmed on /dev/sda2
/home/domain/maildir_indexes: 193.1 MiB (202510336 bytes) trimmed on /dev/mapper/ssd-dovecot
/: 783.3 MiB (821387264 bytes) trimmed on /dev/mapper/hdd-root

Wed 28 Apr 03:59:01 CEST 2021
/var/lib/mysql: 0 B (0 bytes) trimmed on /dev/mapper/ssd-mysql
/boot: 0 B (0 bytes) trimmed on /dev/sda2
/home/domain/maildir_indexes: 193.2 MiB (202575872 bytes) trimmed on /dev/mapper/ssd-dovecot
/: 130.4 GiB (140042088448 bytes) trimmed on /dev/mapper/hdd-root

Wed 28 Apr 04:59:01 CEST 2021
/var/lib/mysql: 0 B (0 bytes) trimmed on /dev/mapper/ssd-mysql
/boot: 0 B (0 bytes) trimmed on /dev/sda2
/home/domain/maildir_indexes: 0 B (0 bytes) trimmed on /dev/mapper/ssd-dovecot
/: 709.7 MiB (744181760 bytes) trimmed on /dev/mapper/hdd-root

Wed 28 Apr 05:59:01 CEST 2021
/var/lib/mysql: 0 B (0 bytes) trimmed on /dev/mapper/ssd-mysql
/boot: 0 B (0 bytes) trimmed on /dev/sda2
/home/domain/maildir_indexes: 498.4 MiB (522559488 bytes) trimmed on /dev/mapper/ssd-dovecot
/: 983.5 MiB (1031225344 bytes) trimmed on /dev/mapper/hdd-root

Wed 28 Apr 06:59:01 CEST 2021
/var/lib/mysql: 0 B (0 bytes) trimmed on /dev/mapper/ssd-mysql
/boot: 0 B (0 bytes) trimmed on /dev/sda2
/home/domain/maildir_indexes: 121.2 MiB (127074304 bytes) trimmed on /dev/mapper/ssd-dovecot
/: 1.6 GiB (1747304448 bytes) trimmed on /dev/mapper/hdd-root

Wed 28 Apr 07:59:01 CEST 2021
/var/lib/mysql: 0 B (0 bytes) trimmed on /dev/mapper/ssd-mysql
/boot: 0 B (0 bytes) trimmed on /dev/sda2
/home/domain/maildir_indexes: 455.9 MiB (477990912 bytes) trimmed on /dev/mapper/ssd-dovecot
/: 1.1 GiB (1143615488 bytes) trimmed on /dev/mapper/hdd-root

The 831 GiB trim on 27 Apr 14:32:16 came right after removing the zero-fill files.

The large trims on 28 April at 01:59:02 and 03:59:01 came after two backup jobs that move local files to remote storage.


I feel that fstrim does not work. Is there any way to debug this? How can I check whether fstrim does its job?
 
Is LVM inside the VM configured to pass along discards? (/etc/lvm/lvm.conf)
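For example, inside the guest you could check both the LVM setting and whether the virtual disk itself advertises discard support (a quick sketch; the device name /dev/sda is just an example):

Code:
# inside the guest: LVM must be allowed to pass discards down to the disk
grep "issue_discards" /etc/lvm/lvm.conf

# non-zero DISC-GRAN/DISC-MAX means the virtual disk accepts discards
lsblk --discard /dev/sda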
 
Hello fabian,

issue_discards seems to be enabled by default for LVM in Ubuntu 20.04:

Code:
# ls -lah /etc/lvm/lvm.conf && grep "issue_discards" /etc/lvm/lvm.conf

-rw-r--r-- 1 root root 100K Feb 13  2020 /etc/lvm/lvm.conf
        # Configuration option devices/issue_discards.
        issue_discards = 1
 
I created a new test VM from an Ubuntu 20.04 LTS image and attached two additional 10 GB disks.

One is on SSD-backed storage and the other on HDD-backed storage.

On the guest I mounted both additional disks as ext4.

I filled the SSD mount with random bytes:
Code:
dd if=/dev/urandom of=/mnt/ssd/rand bs=1M
then removed the rand file and issued fstrim -av.
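Put together, the test sequence on the SSD mount was (and the same steps were repeated for the HDD mount):

Code:
# fill the mount with incompressible data until the filesystem is full
dd if=/dev/urandom of=/mnt/ssd/rand bs=1M
# delete the file and trim the freed blocks
rm /mnt/ssd/rand
fstrim -av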

I created a backup of the SSD-backed disk and it was very small: 7 MB.

I repeated the process with the HDD-backed disk and the backup is 9.8 GB.

Both mount points are empty.

Why does fstrim not work for the HDD-backed disk?

Is this an issue with Proxmox, with the guest OS, or with the physical drives on the host?
 
If you are asking about the HDD that holds the LVM storage: no, it does not support TRIM. But why should it need to, when fstrim is issued inside the guest? It makes no sense for Discard to work only with SSD backends; otherwise, on an HDD-only Proxmox node, the backups would always be larger than the actual data on the VMs.

I did a further experiment: on the Proxmox host I manually created a logical volume in the same volume group where the LVM block disk of VMID 100 resides, mounted it at /mnt/images, and moved the disk there in .raw format.
Fstrim worked perfectly and the backup size was as small as 8 MB.
After moving the disk back to the LVM block storage, fstrim reports 9.8 GB trimmed but the backup is still 9 GB.

So with the disk on the LVM block storage, fstrim does not reduce the backup size, while with the disk stored as a .raw file on a filesystem mounted on the host, fstrim reduces the backup size as expected.



I would prefer to keep the VM storage on block devices rather than in files, with fstrim working.
I just cannot understand why fstrim does not reduce the backup size on LVM block backends.
 
Hello,
I am joining this thread as I have had similar experiences. I recently discovered the discard option and the fstrim command, and I am wondering about the performance impact and the life span of the drive behind it.
For context, I am running PVE 6.4-13 on NVMe with different Ubuntu virtual machines, all of them using the raw disk file format.
One of the VMs syncs with a remote server that constantly writes to it, which makes its disk usage keep growing unless I run fstrim every hour.
Other than a cron job, how can I automatically free the space that is unused by the guest? Should I convert my disk to another format?

Thanks for the advice
 
The disk format doesn't change anything.
Typically, fstrim is configured to run once a week in the most common Linux distributions.
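For example, on most systemd-based distributions (assuming your guests use systemd) the weekly run comes from the fstrim.timer unit, which you can check and enable inside the guest:

Code:
# inside the guest: see whether the periodic trim timer is active, and enable it if not
systemctl status fstrim.timer
systemctl enable --now fstrim.timer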

Please provide the VM config (qm config <VMID>) and your storage config (cat /etc/pve/storage.cfg).
 
Below VM config:

Code:
boot: order=scsi0
cores: 2
memory: 4096
name: laura
net0: virtio=4E:63:E8:31:F4:5D,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-100-disk-0,size=10G
scsi1: local-lvm:vm-100-disk-1,backup=0,discard=on,size=650G
scsi2: local-lvm:vm-100-disk-2,discard=on,size=11G
scsi3: local-lvm:vm-100-disk-3,size=3G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=8aa2682b-6e4f-4461-ba2f-cba235c78151
sockets: 1
startup: order=1
vmgenid: 64ff41b1-5a4b-4177-94fd-e6800d6cc98c

Below storage.cfg
Code:
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir
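
With thin LVM as shown above, one rough way to see whether fstrim in the guest actually frees space in the pool (a sketch; 'pve' is the volume group from the config above) is to compare the data usage of the thin pool and the VM disks before and after a trim run:

Code:
# on the Proxmox host: Data% of the 'data' thin pool and of the vm-100-disk-* volumes
# should drop after running fstrim inside the guest (with discard enabled on the disk)
lvs -o lv_name,lv_size,data_percent pve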
 
