fstrim fails to free space on thinpool

illustris

I have a few VM and container disks on a thinpool. One of the VMs has two disks, 128 GB and 256 GB in size.
Code:
# lsblk
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                  8:0    0  128G  0 disk
`-sda1               8:1    0 18.6G  0 part
  |-turnkey-root   254:0    0   17G  0 lvm  /
  `-turnkey-swap_1 254:1    0  512M  0 lvm  [SWAP]
sdb                  8:16   0  256G  0 disk /s_logs
sr0                 11:0    1 1024M  0 rom

Code:
# df -h
Filesystem                Size  Used Avail Use% Mounted on
udev                       16G     0   16G   0% /dev
tmpfs                     3.2G  326M  2.9G  11% /run
/dev/mapper/turnkey-root   17G  1.5G   15G  10% /
tmpfs                      16G     0   16G   0% /dev/shm
tmpfs                     5.0M     0  5.0M   0% /run/lock
tmpfs                      16G     0   16G   0% /sys/fs/cgroup
/dev/sdb                  251G   65G  175G  28% /s_logs

Running fstrim / && fstrim /s_logs inside the guest doesn't seem to have any effect on the disk utilization as seen by the host:

Code:
# lvs thinpoolname
  LV            VG         Attr       LSize   Pool       Origin Data%  Meta%  Move Log Cpy%Sync Convert
  thinpoolname    thinpoolname twi-aotz--  <1.75t                   77.12  48.37
  vm-168-disk-0 thinpoolname Vwi-aotz-- 128.00g thinpoolname        89.58
  vm-168-disk-1 thinpoolname Vwi-aotz-- 288.00g thinpoolname        24.03
  vm-170-disk-0 thinpoolname Vwi-aotz-- 128.00g thinpoolname        99.18
  vm-170-disk-1 thinpoolname Vwi-aotz-- 256.00g thinpoolname        99.68
  vm-178-disk-0 thinpoolname Vwi-aotz--  36.00g thinpoolname        99.78
  vm-178-disk-1 thinpoolname Vwi-aotz--  16.00g thinpoolname        17.86
  vm-178-disk-2 thinpoolname Vwi-aotz--   1.00t thinpoolname        74.94
  vm-180-disk-0 thinpoolname Vwi-aotz--  32.00g thinpoolname        20.13
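One thing I can still check from inside the guest is whether the virtual disks advertise discard support at all; if the trim requests never make it past the virtual disk, the pool would obviously never shrink. Assuming a util-linux lsblk that has the --discard option, something like this should show it (DISC-GRAN/DISC-MAX of 0 would mean no discard support on that device):
Code:
# lsblk --discard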

VM config:
Code:
# cat /etc/pve/qemu-server/170.conf
agent: 1
balloon: 4096
bootdisk: scsi0
cores: 8
hotplug: disk,network,usb
ide2: none,media=cdrom
memory: 32768
name: hostname
net0: virtio=68:30:0E:FD:FA:7A,bridge=vmbr0,firewall=1
numa: 1
ostype: l26
scsi0: thinpoolname:vm-170-disk-0,discard=on,format=raw,size=128G,ssd=1
scsi1: thinpoolname:vm-170-disk-1,discard=on,format=raw,size=256G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=0001d800-4b58-b569-bf4e-a5a7fd7c753c
sockets: 2
vmgenid: a307605c-4005-f123-bb5b-14d1390be4a9
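The discard=on flags look right to me, so the thin pool's own discard handling is the other thing I can look at on the host. As far as I know, LVM thin pools have a discards setting (ignore / nopassdown / passdown) that must be passdown for guest trims to actually return space to the pool; assuming an lvs that can report that field:
Code:
# lvs -o lv_name,discards thinpoolname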

Because of this, despite the guest only using about 82 GB of space, the VM's two disks are taking up nearly their full 384 GB on the thinpool.
I feel like I'm missing something obvious here. Anyone have any ideas?
 
* What's the output of `fstrim -v / ; fstrim -v /s_logs`?
* Sometimes it helps to fill the disks with zeroes and then remove the file before running fstrim:
Code:
dd if=/dev/zero of=/zeros bs=64M
rm /zeros
fstrim -v /
(same with /s_logs)

I hope this helps
 
I wrote a file of zeros to /s_logs, deleted it, and ran fstrim, but it did nothing for /s_logs:
Code:
# pv --rate --bytes /dev/zero | dd of=/s_logs/zeros
^C96MiB [68.3MiB/s]
1681507+0 records in
1681507+0 records out
860931584 bytes (861 MB, 821 MiB) copied, 14.4602 s, 59.5 MB/s

# fstrim -v / ; fstrim -v /s_logs
/: 400.7 MiB (420118528 bytes) trimmed
/s_logs: 0 B (0 bytes) trimmed

# rm /s_logs/zeros

# fstrim -v / ; fstrim -v /s_logs
/: 0 B (0 bytes) trimmed
/s_logs: 0 B (0 bytes) trimmed

However, running sync before trimming worked.
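For the record, the full sequence that ended up freeing space looked roughly like this (same idea as suggested above, just with the sync added before the trim):
Code:
dd if=/dev/zero of=/s_logs/zeros bs=64M   # fill the free space with zeros
rm /s_logs/zeros
sync                                      # without this, fstrim reported 0 B trimmed
fstrim -v /s_logs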

This is weird... It started when I migrated the VM from one node to another: the migration copied over even the empty blocks and marked them as used on the target, and fstrim on its own did nothing about them. Is there a more permanent way to solve this than filling and trimming the disks every time I migrate a VM?
 
# pv --rate --bytes /dev/zero | dd of=/s_logs/zeros
^C96MiB [68.3MiB/s]
1681507+0 records in
1681507+0 records out
860931584 bytes (861 MB, 821 MiB) copied, 14.4602 s, 59.5 MB/s
Please try to fill the complete disk, and use a larger block size for dd, since that usually improves speed.

Then run fstrim after removing the file filled with zeros.
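As for avoiding the manual fill-and-trim after every migration: two suggestions, both assuming reasonably current versions. Newer Proxmox VE releases have an agent option, fstrim_cloned_disks, that runs fstrim through the guest agent after a disk has been cloned or moved, which should cover the blocks that get marked as used during a copy; and inside the guest you can enable the periodic trim timer that util-linux ships, so free space is discarded regularly without having to think about it. A sketch, assuming both are available on your setup:
Code:
# In /etc/pve/qemu-server/170.conf on the host (needs a PVE version that knows this agent option):
agent: 1,fstrim_cloned_disks=1

# Inside the guest, enable the weekly fstrim timer:
systemctl enable --now fstrim.timer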