Hi,
I've got a Debian 11 VM which uses three virtio-scsi disks (20G/1T/3T) with discard enabled. The first one is for the system, the second and third for storage, all with ext4 filesystems.
Code:
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdc1        16G  6.3G  8.6G  43% /
/dev/sda1      1007G  268G  740G  27% /mnt/storage1
/dev/sdb1       3.0T  2.2T  789G  74% /mnt/storage2
...
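For completeness, this is roughly how I verified that discard is actually wired through, inside the guest and on the Proxmox host (a sketch, not the full output; VM ID 107 taken from the zvol names below):
Code:
# inside the guest: nonzero DISC-GRAN/DISC-MAX means the device accepts discards
lsblk --discard /dev/sda /dev/sdb /dev/sdc

# on the host: each scsiX line should show discard=on
qm config 107 | grep scsi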
All disk data is stored on a thin-provisioned ZFS RAID-Z2 pool as raw datasets (zvols).
zfs list output:
Code:
NAME                             USED  AVAIL  REFER  MOUNTPOINT
vmpool/vm-107-disk-0            7.79G  3.30T  7.79G  -
vastank/bulkpool/vm-107-disk-0   350G  6.88T   350G  -
vastank/bulkpool/vm-107-disk-1  2.79T  6.88T  2.79T  -
That's 350G used/referenced while data usage inside the filesystem is only 268G for disk two, and 2.79T vs. 2.2T for disk three. The overhead is roughly the same factor on both disks (350/268 ≈ 1.31, 2.79/2.2 ≈ 1.27), so it looks systematic rather than random.
I tried fstrim -av, writing zeros with dd and deleting the file afterwards, and sync flushes; nothing reduced the 350G of the ZFS dataset.
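For reference, this is the kind of before/after check I ran to see whether a trim had any effect (a sketch using the dataset names from above):
Code:
# on the host: zvol usage before
zfs get -p used,logicalused,volblocksize vastank/bulkpool/vm-107-disk-0

# inside the guest: trim a single filesystem with verbose output
fstrim -v /mnt/storage1

# on the host: zvol usage after; in my case the numbers didn't move
zfs get -p used,logicalused,volblocksize vastank/bulkpool/vm-107-disk-0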
Then I moved the second disk to another (also thin-provisioned ZFS) data pool; it's now called disk-1:
Code:
NAME                  USED  AVAIL  REFER  MOUNTPOINT
vmpool/vm-107-disk-1  266G  3.04T   266G  -
As you can see, the excess space was freed in the process.
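For the record, the move was done with the regular disk move; the CLI equivalent should be roughly this (assuming the storage ID matches the pool name and the 1T disk is scsi1):
Code:
# Proxmox CLI equivalent of the GUI "Move disk" action, removing the source copy afterwards
qm move-disk 107 scsi1 vmpool --delete 1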
I'm currently moving the data back to the initial pool (it will take some hours) to see whether the usage stays low, to rule out that some pool difference I'm not aware of is causing the overhead.
Also, the virtual disks are replicated every minute to another node; on the target node they are oversized as well.
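That replication is the built-in storage replication; a sketch of how I check both sides (assuming the target pool has the same name):
Code:
# on the source node: state of the replication jobs
pvesr status

# on the target node: the replicated zvols show the same inflated usage
zfs list vastank/bulkpool/vm-107-disk-0 vastank/bulkpool/vm-107-disk-1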
I'm asking myself: is there any way to reclaim the unused space without having to migrate the virtual disk between ZFS pools?