[SOLVED] - ZFS disk usage too high?

mandibleman

Member
Feb 15, 2019
I'm seeing a problem on one of our Proxmox (5.4-13) systems that uses RAIDZ2. The free space on the server dropped to 0, which halted all of the guest VMs. After deleting an expendable VM, the other guests could start again. While investigating why the free space had dropped to 0, I came across something that I couldn't figure out. zfs list shows:
Code:
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     8.68T  1.57T   222K  /rpool
rpool/ROOT                1.49G  1.57T   205K  /rpool/ROOT
rpool/ROOT/pve-1          1.49G  1.57T  1.49G  /
rpool/data                8.67T  1.57T   205K  /rpool/data
rpool/data/vm-100-disk-0  1.38T  1.57T  1.38T  -
rpool/data/vm-101-disk-0  1.68T  1.57T  1.68T  -
rpool/data/vm-102-disk-0  2.71T  1.57T  2.71T  -
rpool/data/vm-104-disk-0  1.41T  1.57T  1.41T  -
rpool/data/vm-105-disk-0  1.50T  1.57T  1.50T  -
So, I have a few questions:
  1. All of the guests report internal disk usage of only ~150GB, so why are the "USED" sizes around 1.5TB? And in the case of vm-102, how can the used size of 2.71TB exceed the provisioned disk size of 1.6TB?
  2. Is there anything that we can do to shrink the size of the volumes?
  3. Is there something we should configure to prevent this from happening in the future?
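One likely contributor to question 1 (my own explanation, not something stated in the thread): without discard/TRIM, ZFS never learns which blocks the guest has freed, so deleted guest data stays allocated; and on RAIDZ, each zvol block additionally carries parity and is padded to a multiple of (parity + 1) sectors, so a small volblocksize can make USED far exceed the guest's view. A simplified arithmetic sketch of that padding rule (assumed model, ashift=12):

```python
# Sketch (assumption, simplified model): RAIDZ allocation overhead for zvols.
# Each block stores data sectors plus `parity` parity sectors, and the total
# is padded up to a multiple of (parity + 1) sectors.

SECTOR = 4096  # ashift=12 -> 4K sectors

def raidz_alloc(logical_bytes, parity, sector=SECTOR):
    """Bytes actually allocated for one logical block on RAIDZ (simplified)."""
    data_sectors = -(-logical_bytes // sector)      # ceiling division
    total = data_sectors + parity                   # data + parity sectors
    multiple = parity + 1
    padded = -(-total // multiple) * multiple       # pad to multiple of p+1
    return padded * sector

# An 8K volblocksize block on RAIDZ2 (parity=2):
print(raidz_alloc(8 * 1024, parity=2))   # 24576 bytes -> 3x the logical size
# A 64K block wastes much less proportionally:
print(raidz_alloc(64 * 1024, parity=2))  # 73728 bytes -> ~1.12x
```

Under this model an 8K zvol block on RAIDZ2 occupies 24K on disk, which is how a 1.6TB provisioned disk can show well over that in USED.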
Thanks in advance!
 
Thanks for your help!
I actually managed to resolve the problem by following the "recipe" here:
  1. change disk controller to VirtIO SCSI
  2. enable "Discard" on the guest disk
  3. schedule TRIM in the guest OS:
    1. Windows (in an admin PowerShell): Optimize-Volume -DriveLetter C -ReTrim -Verbose
    2. Linux: fstrim -av
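For anyone doing this from the CLI instead of the web UI, steps 1-3 roughly correspond to the following (a sketch; it assumes VMID 100, a disk on a storage called "local-zfs", and a scsi0 attachment, so adjust IDs and names for your setup):

```shell
# 1. switch the VM's controller to VirtIO SCSI (VM must be powered off)
qm set 100 --scsihw virtio-scsi-pci

# 2. re-attach the disk with "Discard" enabled so guest TRIMs reach the zvol
qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on

# 3. then, inside a Linux guest, trim all mounted filesystems
fstrim -av
```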
And like magic, the ZFS usage went from ~90% down to ~35%.
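Regarding question 3, one way to keep this from recurring on Linux guests (my suggestion, not from the thread) is to enable the stock fstrim timer that most distros ship, instead of remembering to trim by hand:

```shell
# enable the periodic (typically weekly) fstrim job inside the guest
systemctl enable --now fstrim.timer

# confirm it is scheduled
systemctl list-timers fstrim.timer
```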
 