The problem there is that if you have a 100G zvol and want to create a snapshot, you need another 100G.
The whole idea of snapshots is that you don't need the exact same amount of storage as your original data; the snapshot is just a delta. That's how they should work, but with zvols it works differently.
that's bogus, but seems to be a common misunderstanding.
if you have a fully-reserved zvol, the zvol itself takes up the full reservation (100G in your case). if you now create a snapshot, the snapshot references whatever is currently stored in the dataset (let's call that X), and the total usage becomes 100G + X. the usage is displayed confusingly if you don't understand what is going on at the layer below, but the snapshot would only take up an additional 100G if the zvol were completely full of data that no previous snapshot references.
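to make the "100G + X" bookkeeping concrete, here is the arithmetic with purely hypothetical numbers (a fully-reserved 100G zvol where the new snapshot would pin 50G of otherwise-unreferenced data):

```shell
# hypothetical numbers, not taken from any real pool
volsize=100   # GiB - the refreservation of a fully-reserved zvol
x=50          # GiB - data the new snapshot pins (the "X" above)
echo "$((volsize + x))G total usage"   # -> 150G total usage
# best case: x=0 (empty or fully-snapshotted zvol), worst case: x=volsize
```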
Code:
root@nora:~# zfs create -V 100G fastzfs/testvol
root@nora:~# zfs list -t all -r -o space fastzfs/testvol
NAME                       AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
fastzfs/testvol            198G   103G  0B        56K     103G           0B
103G used, all by the reservation (the zvol is empty - the extra 3G on top of the 100G volsize is reserved for metadata overhead)
Code:
root@nora:~# zfs snapshot fastzfs/testvol@snapshot
root@nora:~# zfs list -t all -r -o space fastzfs/testvol
NAME                       AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
fastzfs/testvol            198G   103G  0B        56K     103G           0B
fastzfs/testvol@snapshot   -      0B    -         -       -              -
snapshot does not reference any data, zvol unchanged
Code:
root@nora:~# dd if=/dev/urandom of=/dev/zvol/fastzfs/testvol bs=1M count=51200 status=progress
53547630592 bytes (54 GB, 50 GiB) copied, 224 s, 239 MB/s
51200+0 records in
51200+0 records out
53687091200 bytes (54 GB, 50 GiB) copied, 236.354 s, 227 MB/s
root@nora:~# zfs list -t all -r -o space fastzfs/testvol
NAME                       AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
fastzfs/testvol            148G   103G  56K       50.5G   52.6G          0B
fastzfs/testvol@snapshot   -      56K   -         -       -              -
wrote 50G, now we have 50G used + the rest reserved, since that data is only referenced by the zvol itself
Code:
root@nora:~# zfs snapshot fastzfs/testvol@snapshot2
root@nora:~# zfs list -t all -r -o space fastzfs/testvol
NAME                       AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
fastzfs/testvol            148G   154G  56K       50.5G   103G           0B
fastzfs/testvol@snapshot   -      56K   -         -       -              -
fastzfs/testvol@snapshot2  -      0B    -         -       -              -
this is the confusing display - the 50G are now referenced by both snapshot2 and the zvol itself. they are accounted at the zvol level as long as they are still referenced there; once they are no longer referenced by the zvol, they will be accounted to the snapshot. the reservation is back up to 103G, since that is the amount we are still allowed to write to the zvol (all the current data is referenced by the snapshot, so it no longer counts against the reservation).
effectively, the total usage for the zvol + snapshots is now the zvol's previous usage plus the full size of the zvol, compared to just the full size before. so creating a snapshot added the amount of currently referenced data to the total usage.
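the same can be read off the two listings: before snapshot2 the total USED was 103G, afterwards 154G - the delta is (up to the rounding done by `zfs list`) the 50.5G of referenced data:

```shell
# USED before snapshot2: 103G, after: 154G (values as printed by zfs list)
awk 'BEGIN { printf "%.1fG\n", 154 - 103 }'   # -> 51.0G, i.e. roughly the 50.5G USEDDS
```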
Code:
root@nora:~# dd if=/dev/urandom of=/dev/zvol/fastzfs/testvol bs=1M count=10240 status=progress
10502537216 bytes (11 GB, 9.8 GiB) copied, 43 s, 244 MB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 81.5687 s, 132 MB/s
root@nora:~# zfs list -t all -r -o space fastzfs/testvol
NAME                       AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
fastzfs/testvol            138G   154G  10.1G     50.5G   93.0G          0B
fastzfs/testvol@snapshot   -      56K   -         -       -              -
fastzfs/testvol@snapshot2  -      10.1G -         -       -              -
overwriting 10G of the old data with new random bytes, we can now see that the old content is accounted to snapshot2 (since it is still referenced there), while the reservation of the zvol shrinks by 10G. effectively, writing to the zvol (no matter how much) does not change the total usage at this point.
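you can check the "total usage does not change" claim by summing the columns before and after the overwrite (USED is USEDSNAP + USEDDS + USEDREFRESERV + USEDCHILD; the small difference is rounding by `zfs list`):

```shell
# column sums from the two listings above (before vs. after overwriting 10G)
awk 'BEGIN {
    before = 0    + 50.5 + 103    # USEDSNAP + USEDDS + USEDREFRESERV
    after  = 10.1 + 50.5 + 93.0   # space moved between columns, sum stayed put
    printf "%.1fG vs %.1fG\n", before, after   # -> 153.5G vs 153.6G
}'
```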
Code:
root@nora:~# zfs snapshot fastzfs/testvol@snapshot3
root@nora:~# zfs list -t all -r -o space fastzfs/testvol
NAME                       AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
fastzfs/testvol            138G   164G  10.1G     50.5G   103G           0B
fastzfs/testvol@snapshot   -      56K   -         -       -              -
fastzfs/testvol@snapshot2  -      10.1G -         -       -              -
fastzfs/testvol@snapshot3  -      0B    -         -       -              -
creating another snapshot again bumps the reservation to the full size - the 10G written between snapshot2 and snapshot3 are still displayed at the zvol level, since they are referenced there AND in snapshot3. once they are no longer referenced by the zvol, they will be accounted to snapshot3, just like the 10G in snapshot2.
you can do the same with a sparse/thin-provisioned/unreserved zvol, but keep in mind that, as with all thin provisioning, this allows over-committing your storage: if enough zvols fill up far enough, none of them can be written to at all (and in the VM case nothing is guaranteed anymore - to the guest it looks like a very broken disk).
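as a purely hypothetical illustration of the over-commit risk (made-up numbers, no real pool involved):

```shell
# hypothetical: three sparse 100G zvols backed by a single 200G pool
pool=200
promised=$((100 + 100 + 100))
echo "promised ${promised}G of volsize against ${pool}G of pool space"
# -> promised 300G of volsize against 200G of pool space
# once the actually written data exceeds the pool, every zvol on it fails writes
```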
setting the refreservation to none (i.e., making the zvol sparse):
Code:
root@nora:~# zfs set refreservation=none fastzfs/testvol
root@nora:~# zfs list -t all -r -o space fastzfs/testvol
NAME                       AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
fastzfs/testvol            138G   60.6G 10.1G     50.5G   0B             0B
fastzfs/testvol@snapshot   -      56K   -         -       -              -
fastzfs/testvol@snapshot2  -      10.1G -         -       -              -
fastzfs/testvol@snapshot3  -      0B    -         -       -              -
this drops the total usage by the refreservation. but it also means that while the zvol is 100G big, nothing guarantees that we can actually write 100G of (fresh) data to it.
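instead of un-reserving afterwards, a zvol can also be created sparse from the start with `-s` (hypothetical volume name; ends up in the same state as the `refreservation=none` above):

```shell
# create a thin-provisioned zvol (no refreservation) right away
zfs create -s -V 100G fastzfs/thinvol
# verify: refreservation should show "none", volsize still 100G
zfs get volsize,refreservation fastzfs/thinvol
```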