The size you're allowed to enter is just a rough sanity check. Some storages can be "overprovisioned" or "over-committed", meaning the maximum sizes of all subvolumes/vdevs/images added together may exceed the physical capacity of the underlying storage.
Such storages do not allocate the full image in advance; the size you set is just the point at which the "virtual disk" is reported as full to the guest. Data is only allocated as it is written, and if you delete data and send FSTRIM/discard commands, those blocks are freed up on the storage again.
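As a rough sketch (the VM ID, dataset and volume names below are just placeholders, and trimming from inside the guest assumes the virtual disk has the discard option enabled), you can free unused blocks and then compare the provisioned size with what is actually allocated on the host:

    # Inside the guest: discard unused blocks on all mounted filesystems that support it
    fstrim -av

    # On the host, ZFS example: volsize is what the guest sees, used/refer is what is really allocated
    zfs list -o name,volsize,used,refer rpool/data/vm-100-disk-0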
This can be useful if you want to grow the storage capacity later, once it is actually needed, without having to constantly resize all guest disks.
Now, what happens once the capacity of the underlying storage is actually depleted depends on the storage technology used. ZFS will report write errors to the guests as long as they keep writing; once you increase the capacity or migrate the disks to a bigger storage, everything will continue to work again.
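To avoid hitting that point, keep an eye on how full the backing storage really is. A minimal sketch (pool name is a placeholder) could look like this:

    # ZFS pool usage on the host; keep CAP well below 100%
    zpool list rpool

    # Proxmox VE's own view of all configured storages (total/used/available)
    pvesm status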
LVM-Thin is a bit more problematic when it runs full, so it's recommended to monitor for this case and only overcommit in trusted environments, see
https://pve.proxmox.com/wiki/LVM2#Thin_Overprovisioning
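A minimal check for an LVM thin pool (the VG/pool name here is just an example) is to look at the Data% and Meta% columns that lvs prints for thin pools:

    # Data% and Meta% of the thin pool should stay well below 100%
    lvs pve/data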
You can also use overprovisioning on file-based storages which do not support it themselves by using qcow2 disks, since qcow2 files only grow as data is written.
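As an illustration (file name and size are just examples), a freshly created qcow2 image advertises its full virtual size to the guest while taking up almost no space on the file storage:

    # Create a 100G qcow2 image; the file starts out small and grows on demand
    qemu-img create -f qcow2 vm-100-disk-0.qcow2 100G

    # 'virtual size' is what the guest sees, 'disk size' is the space actually used on the storage
    qemu-img info vm-100-disk-0.qcow2
    du -h vm-100-disk-0.qcow2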