Hi, we have migrated several VMs (Linux and Windows) from ESXi, and most things work well. We are using local storage only: ZFS with "mirror" on single direct NVMe drives, or LVM-thin on our older HW-RAID based Dell servers.
However, I am confused why all disks of our file server still show as fully allocated on the storage, i.e. a 1 TB disk with 250 GB of data on it occupies the full 1 TB on ZFS or LVM-thin.
I now understand that when importing or restoring via Veeam (we did both), 100% of the storage always seems to be allocated, and I need to "repair" this manually.
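For reference, this is how I check the actual allocation on the host (a sketch; the commands are guarded so they are skipped on hosts without LVM or ZFS, and the output columns are the stock ones, nothing here is specific to our setup):

```shell
# Check real allocation on the Proxmox host.

# LVM-thin: Data% shows how much of each thin volume is actually allocated.
lvs -o lv_name,lv_size,data_percent 2>/dev/null || true

# ZFS: compare each zvol's provisioned size (VOLSIZE) with its real usage (USED).
zfs list -t volume -o name,volsize,used 2>/dev/null || true
```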
Just running fstrim in the guest does not seem to be enough. I have now tried the following:
- Issued a "cd /partition; dd if=/dev/zero of=./zeros; sync; rm ./zeros" in the VM on a 1 TB filesystem with 250 GB of data on it, sitting on an LVM-thin volume
- Issued "fstrim -av", which reports that 750 GB were trimmed, so this part worked
- Checked with "lvdisplay": volume usage is still 100%
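The guest-side steps above, wrapped as one small script for repeat runs (a sketch; "/partition" is the example path from the post, the guards and the $1 parameter are my additions):

```shell
#!/bin/sh
# Zero-fill free space so the thin storage can see it as reclaimable, then TRIM.
# Run inside the guest; pass the mount point as $1 (defaults to /partition).
MNT="${1:-/partition}"

if [ -d "$MNT" ]; then
    # Fill free space with zeros; dd stops on its own when the filesystem is full.
    dd if=/dev/zero of="$MNT/zeros" bs=1M 2>/dev/null || true
    sync
    rm -f "$MNT/zeros"
fi

# Ask the kernel to issue TRIM/discard for all mounted filesystems.
fstrim -av 2>/dev/null || true
```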
Happy to get a hint what I am doing wrong or missing. JC