I used to be able to reclaim the free space of VMs when I was still using ZFS, but with RBD images on Ceph it does not seem to work properly: I manage to reclaim some of the free space, but not all of it. I have the same issue with both VMs and containers. I have discard enabled and follow these steps:
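(To confirm discard is actually reaching the block device, lsblk can report the discard parameters from inside the guest; non-zero DISC-GRAN/DISC-MAX values mean the device accepts discards. /dev/rbd0 here matches the root device shown in the df output below:)
lsblk --discard /dev/rbd0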
First I fill the container's free space with zeros using the following command:
dd if=/dev/zero of=/tmp/bigfile bs=1M; rm -f /tmp/bigfile
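(As a side note: since this writes the zeros out to RADOS before they are trimmed, I understand recent Ceph releases can also deallocate fully zeroed extents directly with rbd sparsify; pool and image names below are the ones from my setup:)
rbd sparsify ceph-ssd/vm-105-disk-0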
Then I do:
fstrim -v /
and the output of that shows:
/: 66.7 GiB (71601565696 bytes) trimmed
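(For containers, the trim can also be issued from the Proxmox host instead of inside the guest, using the container ID, 105 in my case:)
pct fstrim 105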
and then "df -h" shows:
Filesystem Size Used Avail Use% Mounted on
/dev/rbd0 196G 99G 89G 53% /
But on the Proxmox CLI, "rbd du --pool ceph-ssd | grep 105" shows:
vm-105-disk-0 200 GiB 139 GiB
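(The same number can be read for just this one image, without the grep:)
rbd du ceph-ssd/vm-105-disk-0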
Why do I still see 139 GiB used when only 99 GiB is used inside the container? Am I doing something wrong? It works for images stored on ZFS.