Mount Point Disk Size

5mart3ch

Active Member
Feb 22, 2018
I am looking for guidance on how to use "Mount Point: Disk size (GiB)" when I add a mount point. My ZFS pool is approx. 26 TB with some datasets defined. When I specify the Disk size (GiB), it allows me to enter a maximum of 131072 GiB (about 140 TB). How do these two sizes relate? What happens when I specify a larger or smaller size than my actual storage space?
 

Attachments

  • mount-point.png
The size you're allowed to enter is just a rough sanity check. Some storages can be "over-provisioned" or "over-committed" (the maximum sizes of all subvolumes/zvols/images added together would exceed the capacity of the physical storage hosting them).
They do not allocate the full image in advance; for them, your configured size is just a number to know at which point this "virtual disk" should be reported as full. Data is only written as it comes in, and if you delete data and send FSTRIM commands, those blocks are freed up again.
This can be useful if you want to eventually grow the storage capacity once needed, but do not want to constantly resize all guest disks.
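
As an illustration (a rough sketch only; the pool and dataset names below are examples from a default setup, not from your system), on a ZFS-backed container mount point the configured size typically shows up as a refquota on the dataset, which you can compare against what is actually used:
Code:
# configured limit vs. actually allocated space for one container mount point
zfs get refquota,used,available rpool/data/subvol-101-disk-0

# overview of a whole pool branch
zfs list -o name,used,avail,refquota -r rpool/data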

Now, what happens when the capacity of the underlying storage is depleted depends on the storage technology used. ZFS will report write errors to the guests if they continue to write; once you increase the capacity or migrate to a bigger storage, everything will continue to work again.
LVM-Thin is a bit more problematic when full, so it's recommended to watch for this case and only overcommit in trusted environments, see https://pve.proxmox.com/wiki/LVM2#Thin_Overprovisioning
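
A rough sketch of how you could keep an eye on this (assuming a ZFS pool named rpool and the default volume group pve from a standard installation; adjust the names to your setup):
Code:
# ZFS: how full is the pool?
zpool list rpool

# LVM-Thin: Data% and Meta% of the thin pool should stay well below 100
lvs -o lv_name,lv_size,data_percent,metadata_percent pve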

You can also use over-provisioning on file-based storages which do not support it themselves by using qcow2 disks.
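
For example, a qcow2 image only grows as data is written, so its virtual size can be much bigger than what it currently occupies on the file system (the path and VM ID below are just placeholders; normally you would create the disk through the GUI or qm rather than by hand):
Code:
# create a 200G qcow2 image that initially takes up almost no space
qemu-img create -f qcow2 /var/lib/vz/images/101/vm-101-disk-1.qcow2 200G

# "virtual size" vs. "disk size" shows the over-provisioning
qemu-img info /var/lib/vz/images/101/vm-101-disk-1.qcow2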
 
Thanks for the explanation. How do I send the FSTRIM command? I just googled "debian fstrim" and it seems it's for SSD optimization. My drives are HDDs, not SSDs.
 

Yes, exactly. TRIM is used to tell SSDs which blocks are no longer in use and normally does not apply to spinning disks. But if you use ZFS, LVM-Thin and the like, you can configure a VM disk to pass discard commands (discard is TRIM in Linux terminology) through to the backing store. Even if it's not an SSD, LVM or ZFS can then know which blocks are no longer used by your VM and free them. This is crucial, especially for over-provisioning. Modern OSes/distributions often have an automatic timer which sends TRIM/discard commands about once a week.
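
To give an idea of how that looks in practice (the VM ID, bus and storage/volume names are just examples, check your VM's hardware tab for the real ones), discard can be enabled per virtual disk with qm set:
Code:
# enable discard pass-through for an existing SCSI disk
qm set 101 --scsi0 local-zfs:vm-101-disk-0,discard=on
Inside the guest, the filesystem then either needs the discard mount option or a periodic/manual fstrim run for the freed blocks to actually reach the storage.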
On Linux-based distributions you can trigger TRIM for all mounted filesystems by executing the following as root:
Code:
fstrim -av
(TRIM all mounted filesystems, and be verbose)
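
On Debian-based guests the weekly timer mentioned above is usually the fstrim.timer shipped with util-linux; as a sketch, you can check and enable it via systemd:
Code:
# check whether the weekly timer is already active
systemctl status fstrim.timer

# enable and start it if it is not
systemctl enable --now fstrim.timer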

You could test this out by enabling discard for a VM disk placed on your ZFS pool, then checking the ZFS space usage first (e.g., with zfs list), and after that creating a big file inside the VM, e.g. with:
Code:
# dd if=/dev/urandom of=/TESTFILE bs=1M count=4096
Then recheck the space usage; the used space should be about 4 GiB higher than before (if no other VM did big writes). If you then delete the file again with "rm /TESTFILE" inside the VM and recheck the space usage, you'll see that it did not change much; only after running fstrim -av inside the VM should the usage shrink again.
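
Put together, the test sequence looks roughly like this (rpool/data is a placeholder for your own pool; the dd/rm/fstrim steps run inside the guest):
Code:
# on the Proxmox host: note the space usage before the test
zfs list -o name,used,avail -r rpool/data

# inside the VM: write ~4 GiB of random data, then delete it again
dd if=/dev/urandom of=/TESTFILE bs=1M count=4096
rm /TESTFILE

# inside the VM: pass the freed blocks down to the backing storage
fstrim -av

# on the host: usage should now be back to roughly the initial value
zfs list -o name,used,avail -r rpool/data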
 
