Copy to ZFS, fstrim

justme

New Member
Oct 29, 2016
Hi all,

I have two issues:

1. When I run a command like dd if=rawfile | pv | dd of=/dev/zfs-pool/vm-disks/vm-109-disk-1 to import a 96 GB raw disk into Proxmox (the target disk is created with a size of 96 GB), the occupied ZFS space increases by about 188 GB. Is this normal behaviour? I have tried it on two different Proxmox hosts with ZFS and the result is the same (the exact numbers I am comparing are shown below).

2. I am trying to enable fstrim in Ubuntu Linux guests. I have changed the guest's disk to virtio-scsi, added the virtio and virtio_scsi modules to /etc/initramfs-tools/modules, ran update-initramfs -u and then rebooted the guest. When I run fstrim -v / inside the guest, it reports that it trimmed 120 GB of data, but the usage on the thin ZFS pool doesn't decrease at all. What am I doing wrong? My exact steps are listed below.
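For issue 1, this is roughly how I am comparing the logical size of the zvol with the space it actually occupies on the pool (the dataset name is the one from my setup, adjust as needed):

# volsize = size presented to the VM, logicalused = data written, used = space consumed on the pool
zfs get volsize,logicalused,used,refreservation zfs-pool/vm-disks/vm-109-disk-1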

I have spent quite some time on these two issues and can't seem to move further.
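For issue 2, these are the steps I followed on the host and in the guest; the VM ID is mine and <storage> stands for the name of the ZFS storage in Proxmox, so adjust both. As far as I understand, the discard=on flag on the virtual disk is what lets the guest's TRIM requests reach the zvol at all:

# host: attach the disk through the virtio-scsi controller with discard enabled
qm set 109 --scsihw virtio-scsi-pci
qm set 109 --scsi0 <storage>:vm-109-disk-1,discard=on

# guest (Ubuntu): make sure the virtio SCSI modules are in the initramfs, then trim
echo virtio >> /etc/initramfs-tools/modules
echo virtio_scsi >> /etc/initramfs-tools/modules
update-initramfs -u
fstrim -v /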

Thank you very much in advance for ideas and suggestions :)

Mat
 
Is your zvol thin provisioned? If your zvol is not thin provisioned, running fstrim inside the guest will not free up space in the zpool.
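You can check this on the host: a zvol created without thin provisioning carries a refreservation roughly equal to its volsize, while a thin one shows "none" (dataset name taken from your first post):

zfs get volsize,refreservation zfs-pool/vm-disks/vm-109-disk-1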

Also try adding conv=sparse to the writing dd: dd conv=sparse of=/dev/zfs-pool/vm-disks/vm-109-disk-1
From the dd man page: "sparse try to seek rather than write the output for NUL input blocks"
 
I suppose it is; I selected thin provisioning when adding the storage in Proxmox. This is what is set:
(screenshot of the ZFS storage settings: zfsthin.png)


For example, there is a VM with a 96 GB disk (Bootdisk size in Proxmox), but when I run zfs list I get:

NAME                              USED  AVAIL  REFER  MOUNTPOINT
zfs-pool                          363G  252G   33.4G  /zfs-pool
...
zfs-pool/vm-disks/vm-114-disk-1   169G  252G   169G   -
...

That is almost twice the size of the disk in the guest ...

Storage that was created from scratch in Proxmox is sparse, but I have these issues with the disks that were copied from raw images via the dd command in the first post.
I have also seen this kind of multiplied disk usage on PVE with LVM storage, for example a 296 GB VM disk using almost 600 GB, so I assume it is not specific to ZFS.
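In case it helps with the diagnosis, these are the properties I have been comparing on one of the oversized zvols (dataset and pool names are from my setup); the gap between logicalused and used shows how much of the consumption is overhead rather than actual data:

zfs get volblocksize,logicalused,used zfs-pool/vm-disks/vm-114-disk-1
zpool get ashift zfs-pool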

The high disk usage is the issue that currently "hurts" me the most ...

Thanks.
 
The problem with the high disk usage is solved by changing the ZFS block size to 16k (from the 8k default), i.e. setting "blocksize 16k" in /etc/pve/storage.cfg. This should be in some FAQ, or at least a notice about the blocksize should be displayed when a raidzX pool is used ...
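For reference, the relevant storage entry in /etc/pve/storage.cfg ended up looking roughly like this (the storage name is just an example, the pool path is mine):

zfspool: zfs-vm-disks
        pool zfs-pool/vm-disks
        blocksize 16k
        sparse
        content images

Keep in mind that the blocksize setting only applies to newly created zvols; existing disks keep their volblocksize and have to be recreated (e.g. moved to another storage and back) to pick up the new value.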
 
