Thin Provision after the fact

bearded-donkey

New Member
Dec 28, 2022
New Zealand
Hi,

I've searched the forums but I couldn't find an answer to my specific question.

I have a ZFS mirror set up without ticking Thin Provision, and I've added a pile of VM disks in there and copied all my data to them. I've now realised that I would have liked to tick Thin Provision on this pool.

I do not have space to move these to another pool and back to redo them. These existing disks have quite a bit of free space in them though.

My question is: if I shrunk all these partitions/disks down to only 1GB of free space, then ticked the Thin Provision box on the ZFS pool, and THEN grew the disks again in the future, would that growth be thin provisioned, or would the space I grow the disks by be fully provisioned again?

I'm hoping that this would be a solution to my problem because I don't have any manoeuvring room to redo this setup.

These are the disks on the VM itself, each of which I'd have loved to have thinly provisioned.

Code:
/dev/sdb1       1.7T  1.4T  227G  86% /mnt/data-media/youtube
/dev/sdc1       7.8T  6.3T  1.5T  82% /mnt/data-media/tvshows
/dev/sdd1       836G  826G  1.7G 100% /mnt/data-media/sport
/dev/sde1        20G   17G  2.5G  88% /mnt/data-media/youtube-kids
/dev/sdf1       319G  299G   18G  95% /mnt/data-media/tvshows-kids
/dev/sdg1       241G  221G   17G  93% /mnt/data-media/movies-kids
/dev/sdh1       2.9T  2.2T  757G  75% /mnt/data-media/movies
/dev/sdk1       354G   47G  289G  14% /mnt/data-media/music

Thanks!
 
What ticking the "thin" checkbox does is set the refreservation of the datasets/zvols. You could try a zfs set refreservation=none yourpool/yourZvol.
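
For example, to check the current reservation and then clear it (pool and zvol names here are only placeholders):

Bash:
# show how much space is currently reserved for the zvol
zfs get refreservation,volsize,used yourpool/yourZvol

# drop the reservation so only the data actually written counts against the pool
zfs set refreservation=none yourpool/yourZvol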
 
I think that's the same as ticking the thin provisioning checkbox in the GUI. What I'm curious to know is how I can force existing disks to be thinly provisioned from here onwards, possibly by shrinking them and then having them thinly provisioned when more space is added later.

What I've also read about is setting the discard flag on the disk in the VM hardware. That may also do what I'm after, but I'm yet to test that part.
 
Not only the discard flag. When using thin provisioning you need a full discard chain, from the filesystem of every guest OS, through virtio, through the PVE node and the physical disk controller, down to the physical disks. So just setting the "discard" checkbox won't be enough. Make sure to also use a controller like "Virtio SCSI single" and that all filesystems in the guest OSs are either mounted with something like a "discard" option or that something like a daily service runs a fstrim -a.
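
To sketch what that chain could look like (VM ID, storage name and guest fstab entry below are just placeholders, not taken from this thread):

Bash:
# on the PVE node: use the VirtIO SCSI single controller and enable discard on the disk
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 yourstorage:vm-100-disk-0,discard=on

# inside the guest: either mount with the discard option in /etc/fstab ...
#   /dev/sdb1  /mnt/data  ext4  defaults,discard  0  2
# ... or trim on a schedule instead
systemctl enable --now fstrim.timer    # or a daily cron job running: fstrim -a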
 
Yup, understood, I will work through that thanks!
 
I worked on this a bit last night and have some findings:

I ran:

Bash:
# Unmount the filesystem before shrinking it
umount /mnt/data-icloudpd

# Check the filesystem first (resize2fs refuses to shrink an unchecked filesystem)
e2fsck -f /dev/sdl1

# Shrink the filesystem to its minimum possible size
resize2fs -M /dev/sdl1

# Get some more details on the partition
dumpe2fs -h /dev/sdl1

# Resize to the lowest amount of blocks I could:
resize2fs /dev/sdl1 571175

# Calculate the start and end sectors of the partition, used guide: https://serverfault.com/a/1024871/107916

# Resize the existing partition to have the correct sector lengths.  Verify that the start and end sectors are correct based on above calculation
cfdisk /dev/sdl
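
For reference, the sector maths behind that cfdisk step (assuming 512-byte logical sectors) is just:

Bash:
# new filesystem size in bytes: 571175 blocks of 4 KiB each
echo $(( 571175 * 4096 ))          # 2339532800
# minimum partition length in 512-byte sectors
echo $(( 571175 * 4096 / 512 ))    # 4569400
# new end sector = partition start sector + 4569400 - 1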

This all worked as it should, so let's jump onto the host now. I need to resize the RAW file in the zfs pool:

Bash:
zfs list

# Disk I'm looking for: zfs3/vm-100-disk-0

Now I'm trying to size this down to the minimum size I can.

I calculated the volsize in multiples of 8192 bytes, since that is the volblocksize on my zfs pool.

The filesystem is 571175 blocks of 4096 bytes, per the findings higher up above. My calculations:

571175 × 4096 = 2,339,532,800 bytes
2,339,532,800 ÷ 8192 = 285,587.5 blocks of 8192 bytes, so round up to 285,588 blocks.

So my volsize in bytes on the host (8192-byte blocks) should be:
285,588 × 8192 = 2,339,536,896 bytes
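
The same rounding as a one-liner, in case anyone wants to reuse it (plain shell arithmetic, nothing ZFS-specific):

Bash:
# round 571175 4 KiB blocks up to a whole number of 8 KiB volblocks, in bytes
echo $(( ((571175 * 4096 + 8191) / 8192) * 8192 ))    # 2339536896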

I then ran:

Bash:
zfs set volsize=2339536896 zfs3/vm-100-disk-0
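
To double-check the new size on the host, something like this should work (these are standard ZFS properties, using my zvol's name):

Bash:
# confirm the new volsize and see how much space the zvol still references
zfs get volsize,volblocksize,refreservation,used,referenced zfs3/vm-100-disk-0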

I then went back to my VM, checked cfdisk, and had only 75MB free after my partition. I mounted the disk, and all the data seems to be intact.


Does anyone have any feedback on this method, or is there another way to do this? Am I doing this correctly, or was I just lucky with my calculations for this particular partition?


PS: I have checked that I have the discard option set on the VM disk, and I have fstrim scheduled on both the host and the VMs.


Thanks,
 
That's not how thin provisioning works. First, that is not a "RAW file"; it is a zvol, so a block device, similar to what an LV on LVM would be.
Thin-provisioned zvols won't grow when needed. You create them as big as you think you'll need them, with enough space planned in for future data. The guest's filesystem should then also use the whole space of that zvol. Without a refreservation set, the zvol won't reserve any space and will only consume the space the data (+ padding, parity and metadata) actually needs. With a refreservation set, it will still only consume the space the data needs, but it will reserve all the remaining space so that other datasets, zvols or snapshots can't use it.
No need to shrink zvols, partitions or filesystems.

Read this to better understand how reservation and refreservation works:
http://nex7.blogspot.com/2013/03/reservation-ref-reservation-explanation.html
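
As a quick illustration of the difference (pool, zvol and size below are only placeholders): a zvol created sparse never gets a refreservation, and clearing it afterwards has the same effect:

Bash:
# create a thin ("sparse") 100G zvol: the guest sees the full 100G,
# but no space is reserved for it up front
zfs create -s -V 100G yourpool/thin-vol

# make an existing fully-reserved zvol thin after the fact
zfs set refreservation=none yourpool/vm-100-disk-0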
 
Hi @Dunuin,

That is a great article. Thanks a lot for sharing that.

I do think I have it somewhat nailed now.

I did not quite understand the first answer you gave about the zvol, because I wasn't quite up to speed with how that worked. The article helped explain that.

I have now done
Code:
zfs set refreservation=none yourpool/yourZvol
on one of my zvols, deleted a file on the VM, and ran a
Code:
fstrim -v /mnt/mymount
and it worked perfectly; usage on the host dropped instantly.
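
If anyone wants to watch the space coming back on the host after a trim, something like this should do (the zvol name is from my pool):

Bash:
# "used" and "refer" should drop shortly after the guest runs fstrim
zfs list -o name,used,refer,volsize zfs3/vm-100-disk-0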

Thanks for your help! Sorting out my disk provisioning is a lot simpler than I thought. No need to mess around with shrinking anything, calculating block/sector sizes and so on. Phew!

Cheers,
 
