likely your volume is thick-provisioned (has a refreservation set - you can check with 'zfs get all vmstore/vm-100-disk-0').
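for example, to look at just the relevant properties (illustrative output, your numbers will differ):
Code:
# zfs get refreservation,volsize,referenced vmstore/vm-100-disk-0
NAME                   PROPERTY        VALUE  SOURCE
vmstore/vm-100-disk-0  refreservation  33.0G  local
vmstore/vm-100-disk-0  volsize         32G    local
vmstore/vm-100-disk-0  referenced      20.5G  -
a refreservation other than none (roughly volsize plus metadata overhead) means the volume is thick-provisioned.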
in this case, creating a snapshot requires at least as much free space as the volume currently uses:
- a thick-provisioned volume has its full size reserved (to make sure you can always write the full volume)
- creating a snapshot means the currently referenced data lives as long as that snapshot exists
- to ensure you can still fully (over)write the volume, the total amount of space reserved needs to grow by the data referenced by the snapshot (== currently used data)
if there isn't enough free space, creating the snapshot will fail. the logic gets a bit more involved when you add multiple snapshots (you then only need space for the blocks changed since the last snapshot), and for some pool setups there can be more overhead (raidz parity, etc.).
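to make that concrete, a rough worked example (made-up numbers, ignoring metadata and raidz overhead):
Code:
volsize    = 32G   -> refreservation ~= 33G  (thick-provisioned)
referenced = 20G   (data currently written to the volume)

snapshot: needs >= 20G free pool space outside the reservation,
          so total accounted space grows to ~33G + 20G = 53G
if the pool doesn't have those ~20G free, 'zfs snapshot' fails with "out of space".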
so yeah, you either need more free space, or set the volume to be thin-provisioned (no reserved space, only actual usage is accounted for).
for existing volumes, you can do that by setting the refreservation to 0, as shown below. for future volumes, you can configure the storage to be thin-provisioned and PVE will not set a refreservation.
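e.g. (using the volume name from above - setting it to 0 or none has the same effect):
Code:
zfs set refreservation=none vmstore/vm-100-disk-0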
the downside is that you can run out of space by writing inside the VM (since ZFS no longer ensures the space is there at volume/snapshot creation time), which can cause data loss or undefined behaviour (if you think of it from the guest's POV, this is like putting a lying disk or one of those fake USB drives into your server - it says it has 2TB of space, but after you've written 1TB it starts spewing errors) - so you need to carefully monitor your usage and trust your guests.
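to keep an eye on usage, something like this (run regularly or hook it into your monitoring):
Code:
zfs list -o space -r vmstore
this shows AVAIL plus a breakdown of USED into snapshots, the dataset itself, refreservation and children.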
here's what the ZFS docs (man zfsprops) say about refreservation and thin/sparse volumes:
Code:
refreservation=size|none|auto
The minimum amount of space guaranteed to a dataset, not including its descendents. When the amount of space used is below this value, the dataset is treated as if it were taking up the amount of space specified by refreservation. The refreservation reservation is accounted for in the parent datasets' space used, and counts against the parent datasets' quotas and reservations.
If refreservation is set, a snapshot is only allowed if there is enough free pool space outside of this reservation to accommodate the current number of "referenced" bytes in the dataset.
If refreservation is set to auto, a volume is thick provisioned (or "not sparse"). refreservation=auto is only supported on volumes. See volsize in the Native Properties section for more information about sparse volumes.
....
volsize=size
...
The reservation is kept equal to the volume's logical size to prevent unexpected behavior for consumers. Without the reservation, the volume could run out of space, resulting in undefined behavior or data corruption, depending on how the volume is used. These effects can also occur when the volume size is changed while it is in use (particularly when shrinking the size). Extreme care should be used when adjusting the volume size.
Though not recommended, a "sparse volume" (also known as "thin provisioned") can be created by specifying the -s option to the zfs create -V command, or by changing the value of the refreservation property (or reservation property on pool version 8 or earlier) after the volume has been created. A "sparse volume" is a volume where the value of refreservation is less than the size of the volume plus the space required to store its metadata. Consequently, writes to a sparse volume can fail with ENOSPC when the pool is low on space. For a sparse volume, changes to volsize are not reflected in the refreservation. A volume that is not sparse is said to be "thick provisioned". A sparse volume can become thick provisioned by setting refreservation to auto.
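and per the last paragraph above, you can flip a thin volume back to thick later (provided the pool has enough free space to cover the reservation again):
Code:
zfs set refreservation=auto vmstore/vm-100-disk-0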