Hi, I have a similar issue. I just changed the controller to VirtIO SCSI, attached the disks as SCSI, and powered the VM off and on a few times. fstrim keeps reporting that it is trimming, but nothing changes.
1. Configuration of the VM:
root@pve:~# cat /etc/pve/qemu-server/102.conf
agent: 1
boot: cdn
bootdisk: scsi0
cores: 4
ide2: none,media=cdrom
memory: 16000
name: ZIMBRA
net1: virtio=E2:46:9A:C3:3E:FC,bridge=vmbr3,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: raid-lvm:vm-102-disk-0,discard=on,size=64G
scsi1: raid-lvm:vm-102-disk-1,discard=on,size=64G
scsihw: virtio-scsi-pci
smbios1: uuid=59ae58f2-8715-4c92-bcfa-fd701356df49
sockets:
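Before digging into the host side, it may be worth confirming inside the guest that the disks actually advertise discard support after the switch to VirtIO SCSI. A small sketch (assuming the usual sda/sdb device names for scsi0/scsi1):

```shell
# Inside the guest: check whether each SCSI disk accepts TRIM/UNMAP.
# A discard_max_bytes of 0 means discards are silently dropped there.
found=0
for q in /sys/block/sd*/queue/discard_max_bytes; do
    [ -r "$q" ] || continue
    val=$(cat "$q")
    printf '%s: %s\n' "$q" "$val"
    [ "$val" -gt 0 ] 2>/dev/null && found=$((found + 1))
done
echo "disks with discard support: $found"
```

`lsblk --discard` shows the same information (non-zero DISC-GRAN/DISC-MAX columns) in a more readable form.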
2. Disks vm-102-disk-0 and vm-102-disk-1 are in the thin LVM pool:
lsblk:
..
sdb 8:16 0 931.5G 0 disk
`-sdb1 8:17 0 931.5G 0 part
`-md0 9:0 0 931.4G 0 raid1
|-pve--lvm--raid-data--raid_tmeta 253:7 0 84M 0 lvm
| `-pve--lvm--raid-data--raid-tpool 253:9 0 642.7G 0 lvm
| |-pve--lvm--raid-data--raid 253:10 0 642.7G 0 lvm
| |-pve--lvm--raid-vm--105--disk--1 253:11 0 32G 0 lvm
| |-pve--lvm--raid-vm--103--disk--1 253:12 0 32G 0 lvm
| |-pve--lvm--raid-vm--102--disk--0 253:13 0 64G 0 lvm
| |-pve--lvm--raid-vm--102--disk--1 253:14 0 64G 0 lvm
| `-pve--lvm--raid-vm--106--disk--1 253:15 0 40G 0 lvm
`-pve--lvm--raid-data--raid_tdata 253:8 0 642.7G 0 lvm
`-pve--lvm--raid-data--raid-tpool 253:9 0 642.7G 0 lvm
|-pve--lvm--raid-data--raid 253:10 0 642.7G 0 lvm
|-pve--lvm--raid-vm--105--disk--1 253:11 0 32G 0 lvm
|-pve--lvm--raid-vm--103--disk--1 253:12 0 32G 0 lvm
|-pve--lvm--raid-vm--102--disk--0 253:13 0 64G 0 lvm
|-pve--lvm--raid-vm--102--disk--1 253:14 0 64G 0 lvm
`-pve--lvm--raid-vm--106--disk--1 253:15 0 40G 0 lvm
sdc 8:32 0 931.5G 0 disk
`-sdc1 8:33 0 931.5G 0 part
`-md0 9:0 0 931.4G 0 raid1
|-pve--lvm--raid-data--raid_tmeta 253:7 0 84M 0 lvm
| `-pve--lvm--raid-data--raid-tpool 253:9 0 642.7G 0 lvm
| |-pve--lvm--raid-data--raid 253:10 0 642.7G 0 lvm
| |-pve--lvm--raid-vm--105--disk--1 253:11 0 32G 0 lvm
| |-pve--lvm--raid-vm--103--disk--1 253:12 0 32G 0 lvm
| |-pve--lvm--raid-vm--102--disk--0 253:13 0 64G 0 lvm
| |-pve--lvm--raid-vm--102--disk--1 253:14 0 64G 0 lvm
| `-pve--lvm--raid-vm--106--disk--1 253:15 0 40G 0 lvm
`-pve--lvm--raid-data--raid_tdata 253:8 0 642.7G 0 lvm
`-pve--lvm--raid-data--raid-tpool 253:9 0 642.7G 0 lvm
|-pve--lvm--raid-data--raid 253:10 0 642.7G 0 lvm
|-pve--lvm--raid-vm--105--disk--1 253:11 0 32G 0 lvm
|-pve--lvm--raid-vm--103--disk--1 253:12 0 32G 0 lvm
|-pve--lvm--raid-vm--102--disk--0 253:13 0 64G 0 lvm
|-pve--lvm--raid-vm--102--disk--1 253:14 0 64G 0 lvm
`-pve--lvm--raid-vm--106--disk--1 253:15 0 40G 0 lvm
root@pve:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotz-- 202.72g 1.27 11.03
root pve -wi-ao---- 74.25g
swap pve -wi-ao---- 5.00g
vm-101-disk-0 pve Vwi-aotz-- 32.00g data 8.04
data-raid pve-lvm-raid twi-aotz-- 642.65g 13.98 17.14
vm-102-disk-0 pve-lvm-raid Vwi-aotz-- 64.00g data-raid 34.93
vm-102-disk-1 pve-lvm-raid Vwi-aotz-- 64.00g data-raid 48.47
vm-103-disk-1 pve-lvm-raid Vwi-aotz-- 32.00g data-raid 11.35
vm-105-disk-1 pve-lvm-raid Vwi-a-tz-- 32.00g data-raid 33.23
vm-106-disk-1 pve-lvm-raid Vwi-aotz-- 40.00g data-raid 55.50
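One host-side thing worth checking here: an LVM thin pool only forwards discards to the layer below when its discards mode is "passdown" (the default). A sketch, assuming the VG/pool names from the lvs output above:

```shell
# On the PVE host: show the discards mode of the thin pool and its LVs.
if command -v lvs >/dev/null 2>&1; then
    lvs -o lv_name,discards pve-lvm-raid || true
else
    echo "lvs not available on this machine"
fi
# If the pool reports nopassdown or ignore, it can be switched while
# the pool is inactive, e.g.:
#   lvchange --discards passdown pve-lvm-raid/data-raid
checked=yes
```

Even with nopassdown the pool still frees its own mappings on discard, but passdown is needed for the trim to reach md0 and the disks beneath it.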
3. Running fstrim always reports the same values:
[root@mail ~]$ fstrim -av
/opt: 5,4 GiB (5807054848 bytes) trimmed
/boot: 220,6 MiB (231297024 bytes) trimmed
/: 20,1 GiB (21550411776 bytes) trimmed
[root@mail ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
..
/dev/mapper/centos-root 23G 2,8G 21G 13% /
/dev/sda1 497M 302M 196M 61% /boot
/dev/mapper/vg_opt-lv_opt 31G 26G 5,7G 82% /opt
The backup is 37 GB while the guest only uses about 29 GB (2.8 GB + 26 GB), so the trim is clearly not taking effect.
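A quicker host-side signal than the backup size is the allocated space per thin volume (LSize × Data% from the lvs output above); if fstrim in the guest actually reached the pool, these numbers should drop toward the guest's df usage. The arithmetic, using the values above:

```shell
# Allocated space in the pool per thin volume, from LSize * Data%.
alloc0=$(awk 'BEGIN { printf "%.1f", 64 * 34.93 / 100 }')
alloc1=$(awk 'BEGIN { printf "%.1f", 64 * 48.47 / 100 }')
echo "vm-102-disk-0: ${alloc0} GiB allocated"   # ~22.4 GiB
echo "vm-102-disk-1: ${alloc1} GiB allocated"   # ~31.0 GiB
```

Assuming scsi0 backs the root disk and scsi1 the /opt disk, that is roughly 22 GiB allocated against ~3 GiB used, and 31 GiB against 26 GiB, i.e. many gigabytes a successful trim should have released.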
4. The VM is CentOS 7:
Linux mail 3.10.0-1127.10.1.el7.x86_64 #1 SMP Wed Jun 3 14:28:03 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
PVE version pve-manager/5.4-15/d0ec33c6 (running kernel: 4.15.18-29-pve)
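Whether the MD layer is swallowing the discards can be checked directly in sysfs on the host; a sketch using the device names from the lsblk output above (md0 over sdb/sdc):

```shell
# On the PVE host: walk the stack from md0 down to the member disks.
# discard_max_bytes = 0 at a layer means discards stop there.
for dev in md0 sdb sdc; do
    q="/sys/block/$dev/queue/discard_max_bytes"
    if [ -r "$q" ]; then
        printf '%-4s discard_max_bytes=%s\n' "$dev" "$(cat "$q")"
    else
        printf '%-4s (not present on this machine)\n' "$dev"
    fi
done
walked=yes
```

As far as I know, MD RAID1 only passes discards through when both member disks support them, so a 0 on sdb/sdc would also show as 0 on md0.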
Is the problem in the MD RAID?