Discard Not Freeing Up Space

sandrockcstm

New Member
Jun 4, 2020
I have the "discard" option set for my OMV installation but it's not marking the drives as having free space after I delete large files. I'm using VirtIO SCSI as my SCSI controller.
 
You have to enable discard inside the VM as well. For Linux you can either run "fstrim" periodically (e.g. via a cron job) or mount the filesystem with the "discard" option.
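For example, either approach looks roughly like this (just a sketch; the schedule, device and mountpoint are placeholders, and some distributions instead ship an fstrim.timer you can simply enable):

Code:
# option 1: periodic trim from root's crontab (or: systemctl enable --now fstrim.timer, where available)
0 3 * * 0  /sbin/fstrim -av

# option 2: mount with the discard option so deletes are passed down immediately
# /etc/fstab (example entry)
/dev/sdb1  /srv/data  ext4  defaults,discard  0  2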

^This.

I actually wrote an article a while back, LVM, Thin Provisioning, and Monitoring Storage Use: A Case Study, where I had managed to miss setting the discard option on all of my VMs, so I was really chewing through storage. You'll still need to run fstrim inside Linux VMs (or just a regular TRIM inside Windows).
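If you missed it when the disk was created, discard can be switched on afterwards from the Proxmox host. Roughly like this (a sketch with a made-up VM ID, storage and volume name, not anyone's actual config), followed by a full shutdown and start of the VM so the changed drive options take effect:

Code:
# on the Proxmox host: enable discard on an existing virtio-scsi disk
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on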
 
Ah, looks like this might be a quirk of OMV then. Fstrim is not found when I try to run it on the command line. I'll move this over to the OMV forums. Thanks!
 
Hi, I have a similar issue. I just changed the controller to VirtIO SCSI, attached the disks as SCSI, and powered the VM off and on a few times, but fstrim keeps reporting that it is trimming and nothing changes.

1. Configuration of the VM
root@pve:~# cat /etc/pve/qemu-server/102.conf
agent: 1
boot: cdn
bootdisk: scsi0
cores: 4
ide2: none,media=cdrom
memory: 16000
name: ZIMBRA
net1: virtio=E2:46:9A:C3:3E:FC,bridge=vmbr3,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: raid-lvm:vm-102-disk-0,discard=on,size=64G
scsi1: raid-lvm:vm-102-disk-1,discard=on,size=64G
scsihw: virtio-scsi-pci
smbios1: uuid=59ae58f2-8715-4c92-bcfa-fd701356df49
sockets:

2. Disks vm-102-disk-0 and vm-102-disk-1 are in the thin LVM pool:
lsblk:
..
sdb 8:16 0 931.5G 0 disk
`-sdb1 8:17 0 931.5G 0 part
  `-md0 9:0 0 931.4G 0 raid1
    |-pve--lvm--raid-data--raid_tmeta 253:7 0 84M 0 lvm
    | `-pve--lvm--raid-data--raid-tpool 253:9 0 642.7G 0 lvm
    |   |-pve--lvm--raid-data--raid 253:10 0 642.7G 0 lvm
    |   |-pve--lvm--raid-vm--105--disk--1 253:11 0 32G 0 lvm
    |   |-pve--lvm--raid-vm--103--disk--1 253:12 0 32G 0 lvm
    |   |-pve--lvm--raid-vm--102--disk--0 253:13 0 64G 0 lvm
    |   |-pve--lvm--raid-vm--102--disk--1 253:14 0 64G 0 lvm
    |   `-pve--lvm--raid-vm--106--disk--1 253:15 0 40G 0 lvm
    `-pve--lvm--raid-data--raid_tdata 253:8 0 642.7G 0 lvm
      `-pve--lvm--raid-data--raid-tpool 253:9 0 642.7G 0 lvm
        |-pve--lvm--raid-data--raid 253:10 0 642.7G 0 lvm
        |-pve--lvm--raid-vm--105--disk--1 253:11 0 32G 0 lvm
        |-pve--lvm--raid-vm--103--disk--1 253:12 0 32G 0 lvm
        |-pve--lvm--raid-vm--102--disk--0 253:13 0 64G 0 lvm
        |-pve--lvm--raid-vm--102--disk--1 253:14 0 64G 0 lvm
        `-pve--lvm--raid-vm--106--disk--1 253:15 0 40G 0 lvm
sdc 8:32 0 931.5G 0 disk
`-sdc1 8:33 0 931.5G 0 part
  `-md0 9:0 0 931.4G 0 raid1
    |-pve--lvm--raid-data--raid_tmeta 253:7 0 84M 0 lvm
    | `-pve--lvm--raid-data--raid-tpool 253:9 0 642.7G 0 lvm
    |   |-pve--lvm--raid-data--raid 253:10 0 642.7G 0 lvm
    |   |-pve--lvm--raid-vm--105--disk--1 253:11 0 32G 0 lvm
    |   |-pve--lvm--raid-vm--103--disk--1 253:12 0 32G 0 lvm
    |   |-pve--lvm--raid-vm--102--disk--0 253:13 0 64G 0 lvm
    |   |-pve--lvm--raid-vm--102--disk--1 253:14 0 64G 0 lvm
    |   `-pve--lvm--raid-vm--106--disk--1 253:15 0 40G 0 lvm
    `-pve--lvm--raid-data--raid_tdata 253:8 0 642.7G 0 lvm
      `-pve--lvm--raid-data--raid-tpool 253:9 0 642.7G 0 lvm
        |-pve--lvm--raid-data--raid 253:10 0 642.7G 0 lvm
        |-pve--lvm--raid-vm--105--disk--1 253:11 0 32G 0 lvm
        |-pve--lvm--raid-vm--103--disk--1 253:12 0 32G 0 lvm
        |-pve--lvm--raid-vm--102--disk--0 253:13 0 64G 0 lvm
        |-pve--lvm--raid-vm--102--disk--1 253:14 0 64G 0 lvm
        `-pve--lvm--raid-vm--106--disk--1 253:15 0 40G 0 lvm

lvs
root@pve:~# lvs
LV            VG           Attr       LSize   Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
data          pve          twi-aotz-- 202.72g                   1.27  11.03
root          pve          -wi-ao----  74.25g
swap          pve          -wi-ao----   5.00g
vm-101-disk-0 pve          Vwi-aotz--  32.00g data               8.04
data-raid     pve-lvm-raid twi-aotz-- 642.65g                  13.98  17.14
vm-102-disk-0 pve-lvm-raid Vwi-aotz--  64.00g data-raid         34.93
vm-102-disk-1 pve-lvm-raid Vwi-aotz--  64.00g data-raid         48.47
vm-103-disk-1 pve-lvm-raid Vwi-aotz--  32.00g data-raid         11.35
vm-105-disk-1 pve-lvm-raid Vwi-a-tz--  32.00g data-raid         33.23
vm-106-disk-1 pve-lvm-raid Vwi-aotz--  40.00g data-raid         55.50

3. Running fstrim always reports the same value:
[root@mail ~]$ fstrim -av
/opt: 5,4 GiB (5807054848 bytes) trimmed
/boot: 220,6 MiB (231297024 bytes) trimmed
/: 20,1 GiB (21550411776 bytes) trimmed

[root@mail ~]$ df -h
Filesystem                 Size  Used Avail Use% Mounted on
..
/dev/mapper/centos-root     23G  2,8G   21G  13% /
/dev/sda1                  497M  302M  196M  61% /boot
/dev/mapper/vg_opt-lv_opt   31G   26G  5,7G  82% /opt

The backup is 37 GB while the filesystems only use about 29 GB (2.8 GB + 26 GB), so the freed blocks are clearly not being returned to the thin pool.

4. The VM is CentOS 7: Linux mail 3.10.0-1127.10.1.el7.x86_64 #1 SMP Wed Jun 3 14:28:03 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
The PVE version is pve-manager/5.4-15/d0ec33c6 (running kernel: 4.15.18-29-pve).

Is the problem in the mdraid?
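For what it's worth, one way to narrow this down (a diagnostic sketch; I'm assuming the two SCSI disks show up as sda and sdb inside the guest, and the VG name is taken from the lvs output above) is to check whether the virtual disks even advertise discard in the guest, and whether Data% on the host actually drops after a trim:

Code:
# inside the guest: DISC-GRAN/DISC-MAX of 0 means discards never reach this disk
lsblk --discard /dev/sda /dev/sdb
# on the host: re-check the thin volumes' Data% right after running fstrim in the guest
lvs pve-lvm-raid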
 
You're running mdraid in a VM? That's different. It is another layer that has to pass on the discard requests. Have you tried enabling ssd emulation for the VM?
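With the config you posted that would be something along these lines (a sketch re-using the disk specs from 102.conf, assuming your qemu-server version already knows the ssd flag; the VM needs a full power-off/on afterwards):

Code:
qm set 102 --scsi0 raid-lvm:vm-102-disk-0,discard=on,ssd=1
qm set 102 --scsi1 raid-lvm:vm-102-disk-1,discard=on,ssd=1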

ETA: CentOS 7 has a fairly old kernel. Perhaps old enough that its mdraid doesn't support trim/discard?
 
Just verified that my CentOS 8 VM behaves similarly. I have a default setup with an xfs root and swap on LVM (no raid) and an ext4 /boot directly on a partition. Running fstrim -av always reports all free space on the xfs as being trimmed, but only a small amount for the ext4.

The VM storage is on lvm and lvs reports roughly the correct in-use percentage.
 
I am running mdraid on the host (I know it is not officially supported). Furthermore, I am using older HDDs that lack TRIM support. I guess I will have to fall back to the old way with dd and zeroing.
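For reference, the zero-fill approach is roughly this (a sketch; note it only helps if the layer underneath detects zeroed blocks or the image gets sparsified afterwards, and on LVM-thin the zeroes themselves still count as allocated):

Code:
# inside the guest: fill the free space with zeroes, flush, then remove the filler file
# (dd will stop with "No space left on device", which is expected here)
dd if=/dev/zero of=/zero.fill bs=1M
sync
rm -f /zero.fill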
 
I am running mdraid on the host (I know it is not officially supported). Furthermore, I am using older HDDs that lack TRIM support. I guess I will have to fall back to the old way with dd and zeroing.

Ok. I also use mdraid on the host so I don't think that's an issue. But yeah, the underlying storage has to support trim.
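A quick way to see where in the stack discard support ends (a sketch, using the device names from your lsblk output; adjust to your own layout):

Code:
# on the host: a layer showing 0 for DISC-GRAN/DISC-MAX will drop discard requests
lsblk --discard /dev/sdb /dev/md0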

FWIW, I spun up a CentOS 8 VM with default settings (boot on /dev/sda1, / on /dev/mapper/cl-root) except that I changed the rootfs from xfs to ext4. The first time I ran fstrim I got:

Code:
/boot: 845.2 MiB (886226944 bytes) trimmed
/: 23.8 GiB (25579839488 bytes) trimmed

The second time I got:

Code:
/boot: 0 B (0 bytes) trimmed
/: 0 B (0 bytes) trimmed

That is consistent with my Debian VMs that don't use LVM but do use ext4, whereas the CentOS VM with the default xfs root always reports basically all of the free space as trimmed. So it looks like there is a difference in how xfs and ext4 report what they did when you run fstrim. Google tells me that if trim isn't supported at all, fstrim is supposed to give an error.
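For completeness, that error case looks roughly like this when a filesystem sits on storage without discard support (a sketch; the mountpoint is made up and the exact wording may vary by util-linux version):

Code:
fstrim -v /mnt/no-trim
# fstrim: /mnt/no-trim: the discard operation is not supported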