Holy shit!!! 275G -> 449G
root@prox01:~# zfs get volsize,refreservation,used vm/vm-108-disk-0
NAME              PROPERTY        VALUE  SOURCE
vm/vm-108-disk-0  volsize         275G   local
vm/vm-108-disk-0  refreservation  449G   local
vm/vm-108-disk-0  used            449G   -
Yes, so that's 163% of the nominal size in practice (449G used for a 275G volume) and 171% in theory.
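If you want to compute that ratio yourself from the exact byte values, something like this should work (just a sketch; for the numbers above it should print roughly 163%):
root@prox01:~# zfs get -Hp -o value volsize,used vm/vm-108-disk-0 | paste - - | awk '{printf "%.0f%%\n", $2 / $1 * 100}'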
If you are running DBs or similar workloads that do a lot of small IO, I would highly recommend creating a striped mirror (RAID10) instead. 8x 1TB disks in a striped mirror would give you four times the IOPS, let you use a 16K volblocksize, make it easier to add more storage when needed, improve reliability, resilver far faster, and avoid the padding overhead with zvols entirely, so the full 4TB is really usable.
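Creating such a pool could look like this (a minimal sketch; the pool name and device paths are placeholders, and in practice you would use stable /dev/disk/by-id/ paths):
# hypothetical pool "tank" built from four 2-way mirrors striped together
zpool create -o ashift=12 tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf \
  mirror /dev/sdg /dev/sdh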
Padding overhead, by the way, also only affects zvols and not datasets, so LXCs could use the full 7TB.
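To make that distinction concrete, here is roughly what the two storage types look like at the ZFS level (hypothetical names; PVE normally creates these for you):
# dataset (what an LXC subvol is): variable-size records, no raidz padding overhead
zfs create vm/subvol-200-disk-0
# zvol (what a VM disk is): fixed volblocksize blocks, which is where padding bites
zfs create -V 100G -o volblocksize=16k vm/vm-200-disk-0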
And if you still want to use a raidz1, the easiest way to change the volblocksize would be:
1.) in the PVE webUI go to Datacenter -> Storage -> select your ZFS storage -> Edit -> set something like "32K" as your "Block size"
2.) stop and back up a VM
3.) verify the backup
4.) restore that VM from backup overwriting the existing VM
5.) repeat steps 2 to 4 until all VMs are replaced; you can verify each disk with the command below.
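A quick check that a restored disk really picked up the new volblocksize (the zvol name is just the example from above; after a restore with a 32K "Block size" it should report 32K):
root@prox01:~# zfs get volblocksize vm/vm-108-disk-0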