Hi,
I'm using ZFS with two NVMe drives (2x 3.8 TB) in RAID1 (mirroring). I can't reconcile the storage usage reported inside the VMs with the usage reported by ZFS. (FOUND WORKING SOLUTION, SEE BELOW)
The Proxmox GUI shows the pool usage (screenshot omitted).
zfs list:
Code:
NAME                          USED  AVAIL  REFER  MOUNTPOINT
rpool                        2.93T   428G   104K  /rpool
rpool/ROOT                   1.87G   428G    96K  /rpool/ROOT
rpool/ROOT/pve-1             1.87G   428G  1.87G  /
rpool/data                   2.92T   428G    96K  /rpool/data
rpool/data/vm-101-disk-0     1.07G   428G  1.07G  -
rpool/data/vm-203-cloudinit    72K   428G    72K  -
rpool/data/vm-203-disk-0      700G   428G   700G  -
rpool/data/vm-206-cloudinit    72K   428G    72K  -
rpool/data/vm-206-disk-0      854G   428G   854G  -
rpool/data/vm-301-cloudinit    72K   428G    72K  -
rpool/data/vm-301-disk-0      810G   428G   810G  -
rpool/data/vm-302-cloudinit    72K   428G    72K  -
rpool/data/vm-302-disk-0      629G   428G   629G  -
rpool/var-lib-vz             2.61G   428G  2.61G  /var/lib/vz
Inside vm 203:
Code:
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           2.4G  1.2M  2.4G   1% /run
/dev/sda1       778G  487G  291G  63% /
tmpfs            12G     0   12G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/sda15      105M  6.1M   99M   6% /boot/efi
tmpfs           2.4G  4.0K  2.4G   1% /run/user/1000
Inside vm 206:
Code:
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.9G  1.5M  1.9G   1% /run
/dev/sda1       958G  494G  464G  52% /
tmpfs           9.4G     0  9.4G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/sda15      105M  6.1M   99M   6% /boot/efi
tmpfs           1.9G  4.0K  1.9G   1% /run/user/1001
Inside vm 301:
Code:
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.7G  1.3M  1.7G   1% /run
/dev/sda1       972G  159G  813G  17% /
tmpfs           8.3G     0  8.3G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/sda15      105M  6.1M   99M   6% /boot/efi
tmpfs           1.7G  4.0K  1.7G   1% /run/user/1000
Inside vm 302:
Code:
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.7G  1.3M  1.7G   1% /run
/dev/sda1       972G  161G  811G  17% /
tmpfs           8.3G     0  8.3G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/sda15      105M  6.1M   99M   6% /boot/efi
tmpfs           1.7G  4.0K  1.7G   1% /run/user/1000
Now 494G + 487G + 159G + 161G ≈ 1.3 TB, yet ZFS shows 2.93 TB used. Why such a huge difference? I've checked for snapshots, clones, or anything else that might be consuming storage, but there is nothing. Thin provisioning is enabled and refreservation is none, so it shouldn't be using this much space. I also triggered TRIM manually. I can't find where this usage is coming from.
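For reference, the standard way to double-check those things on the host looks roughly like this (one of my zvols used as an example; not an exact transcript of my session):
Bash:
# look for snapshots or clones that could be holding space
zfs list -t snapshot -r rpool
zfs get origin rpool/data/vm-203-disk-0

# compare what ZFS has allocated with what the guest has actually written
zfs get used,logicalused,volsize,refreservation rpool/data/vm-203-disk-0

# pool-level view (capacity, fragmentation)
zpool list rpool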
SOLUTION:
After a bit of research and configuration, I've managed to enable TRIM support so that both the guest and the host report the correct space usage, reflecting the actual trimmed and unused space. The root cause was that, without discard/TRIM being passed through, blocks freed inside the guests were never released back to ZFS, so the thin-provisioned zvols stayed at their high-water mark. Here's what I did:
VM Configuration: I ensured that my VM disk controller was set to VirtIO SCSI, which supports TRIM. I also verified that the discard option was enabled for the disks.
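For reference, both settings can also be applied from the CLI instead of the GUI; a rough sketch for one of my VMs, assuming the storage is named local-zfs (adjust the VM ID, bus and storage name to your setup):
Bash:
# check the current controller and disk options
qm config 203 | grep -E 'scsihw|scsi0'

# VirtIO SCSI controller, plus discard (and SSD emulation) on the disk
qm set 203 --scsihw virtio-scsi-pci
qm set 203 --scsi0 local-zfs:vm-203-disk-0,discard=on,ssd=1
As noted below, the disk change only took effect after fully restarting the VM.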
Scheduled TRIM: Both the guest and the host systems have fstrim.timer enabled and active, which is scheduled to run weekly. This is a great way to batch discard operations to avoid the performance penalty of synchronous discards.
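For reference, enabling and checking the timer works the same way on the host and in each guest:
Bash:
# enable the weekly TRIM timer and start it right away
sudo systemctl enable --now fstrim.timer

# confirm it's active and see the last / next run
systemctl list-timers fstrim.timer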
Manual TRIM Execution: To immediately reclaim space, I ran the following on the guest, which triggered the TRIM operation and successfully freed the unused blocks (this worked only after enabling the discard option and restarting the VMs):
Bash:
sudo fstrim -v /
Host Configuration: Initially, I was concerned about the autotrim setting on my ZFS host due to potential performance issues with synchronous TRIM operations. However, the combination of the discard option in the VMs and the scheduled fstrim on both host and guests seems to handle the TRIM operations effectively.
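For reference, these are the host-side knobs I was weighing (the pool name is rpool in my case); zpool trim is the batched, on-demand alternative to autotrim:
Bash:
# autotrim is off by default; check before changing anything
zpool get autotrim rpool

# one-off, batched TRIM of the pool's free space
zpool trim rpool

# watch TRIM progress per vdev
zpool status -t rpool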
It's satisfying to see the setup working harmoniously and efficiently managing the SSD space!